Naive Dimensional Analysis and Irrelevant Operators

We derive a set of easy rules to follow when estimating the coefficients of operators in an effective Lagrangian. In particular, we emphasize how to estimate the size of coefficients originating from irrelevant interactions in the underlying Lagrangian.

Effective field theory constitutes an essential method in modern theoretical physics, see e.g. [1]. In elementary particle physics some of the well known examples are heavy quark effective theory [2] and, likely, the entire Standard Model itself. In typical applications of effective theory one constructs the model Lagrangian based only on the knowledge of the global symmetries and relevant degrees of freedom. Then, in the absence of matching onto a more microscopic theory (or data), it is important to be able to estimate the size of the dimensionless coefficients of the terms arising in the effective Lagrangian.

For illustration, consider QCD with a four-fermion interaction. Here $Q$ is an $n_f$-component vector, $m$ is the diagonal matrix of masses, and $Q_{L,R} = P_{L,R} Q$ in terms of the usual projectors $P_{L,R} = (1 \mp \gamma_5)/2$. We assume that the theory becomes strongly interacting, confines, and breaks the chiral symmetry spontaneously according to the known pattern at the scale $\Lambda \ll 1/\sqrt{G}$. The dynamical degrees of freedom at low energies are the pions, represented via the matrix $U = \exp(i\pi/f)$, where $f$ is the Goldstone boson decay constant. Their low energy effective theory contains the kinetic term given by the lowest order chiral perturbation theory, plus the mass and interaction terms constructed with the spurion method. The mass scale of light non-Goldstone states, $\Lambda$, is the cutoff of the effective theory and also the matching scale between the effective Lagrangian and the high energy Lagrangian. We write this scale explicitly in the effective Lagrangian to make the a priori unknown coefficients $c_i$ dimensionless.

The task now is to estimate these coefficients and capture the relative importance of the terms in the effective Lagrangian. From now on we work in units where $\Lambda = 1$. We also ignore any $O(1)$ factors, i.e. we only keep track of powers of $g \equiv \Lambda/f$ and coupling constants. The first constant, $c_0$, can be fixed by requiring the pion fields to have a canonical kinetic term. This leads to $c_0 = 1/g^2$. To estimate the constants $c_1$ and $c_2$, we need other methods. In QCD a robust prediction is made by naive dimensional analysis (NDA) [3], which was generalized by Georgi [4] to apply to theories more general than QCD. According to Georgi, the coefficients of an operator in an effective Lagrangian for a strongly interacting theory depend on $\Lambda$ and $g \equiv \Lambda/f$ in the following simple way:

1. Divide each term by $g^2$.
2. Accompany each strongly interacting field by a factor of $g$.
3. Fix the overall dimension by multiplying with $\Lambda$.

Item two in the above list is taken into account automatically by writing $U = \exp(ig\pi)$. With this in mind, application of the rules yields $c_1 = c_2 = 1/g^2$. However, from the underlying Lagrangian (1) we expect a different scaling, and comparing with (3) we see that the correct dependence on $g$ in both terms cannot be obtained with generalized NDA: either $\langle \bar{Q}Q \rangle \sim 1/g^2$ or $\langle \bar{Q}Q \rangle \sim 1/g$, but in both cases the resulting total dependence on $g$ is wrong. Since numerically in QCD $\langle \bar{Q}Q \rangle \sim \Lambda^3/g^2$, we assume the first is more correct, and that the rules of generalized NDA should be improved to properly take into account the suppression of irrelevant operators.
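As a side note, item two's bookkeeping can be made explicit symbolically. The following is an illustrative sketch (hypothetical code, not from the paper); it shows that writing $U = \exp(ig\pi)$ attaches one power of $g$ to every power of the pion field, so item two is indeed automatic:

```python
import sympy as sp

g = sp.symbols('g', positive=True)
pi_f = sp.symbols('pi_f')  # a single pion field, schematically (matrix structure omitted)

# U = exp(i g pi): every power of the pion field comes with one power of g,
# which is exactly Georgi's item two applied automatically.
U = sp.exp(sp.I * g * pi_f)
print(sp.series(U, pi_f, 0, 4))
# -> 1 + I*g*pi_f - g**2*pi_f**2/2 - I*g**3*pi_f**3/6 + O(pi_f**4)
```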
In this paper we derive such rules. The basic assumption of NDA is that in a strongly interacting system, diagrams of each loop order give contributions of the same size. From momentum conservation, the number of loops in a diagram is $L = P - V + 1$, where $P$ is the number of propagators and $V$ is the number of vertices. In units $\Lambda = 1$, each loop is associated with a factor $1/16\pi^2$. Then, if the propagator is of order $\alpha$ and all vertices are assumed to be of a similar size $\beta$, the diagram size $(1/16\pi^2)^{P-V+1} \alpha^P \beta^V$ should be the same for every $P$ and $V$. The solution valid for any $P$ and $V$ is $\beta = 1/\alpha = 1/16\pi^2$. Therefore NDA requires that, in some non-canonical field normalization, each operator coefficient is $O(1/16\pi^2)$.

We generalize and explicate this statement in two ways. First, we write the factor $1/16\pi^2$ as $1/g^2$, as was already done by Georgi [4]. By allowing $g \neq 4\pi$ one can include effects of small parameters present in e.g. large-$N$ field theories. Second, we apply these principles at the matching scale $\Lambda$ to both the fundamental and effective Lagrangians, also including any explicit symmetry breaking or irrelevant operators, as was done in [5,6]. More specifically, let $\phi$ and $M$ collectively denote all the strongly interacting fields present in the high and low energy Lagrangians, and let the hatted fields denote the corresponding non-canonically normalized fields. Then the principles above are equivalent to a matching condition in which the hatted Lagrangians include only $O(1)$ coefficients.

To clarify the idea, we now reconsider the previous example. We require that the fundamental Lagrangian (1) takes the hatted form, which matches with (1) for a suitable rescaling. Now we apply NDA to the effective Lagrangian as well, where $\hat{U} = \exp(i\hat{\pi})$. To restore the usual kinetic term for $\pi$ we expand the exponential and find that $\hat{\pi} = g\pi$ and $U = \exp(ig\pi)$. We now reinstate the spurion parameters $m$ and $G$. In the limit where these parameters are zero, the second and third terms in (9) vanish by symmetry arguments, so the coefficients must be proportional to $m$ and $G$. The proportionality is fixed by requiring that in the limit $m = I$ and $G = g^2$ one should obtain (9). Hence we obtain $c_1 = 1/g^2$ but $c_2 = 1/g^4$, and we find behavior compatible with (4) if $\langle \bar{Q}Q \rangle \sim 1/g^2$.

As an outcome of this treatment, the irrelevant interactions in the underlying Lagrangian have become more suppressed in the effective Lagrangian than in generalized NDA. We also find the maximum values of the spurion parameters for which the effective Lagrangian makes sense: clearly, the elements of the matrix $m$ should be less than one, and $G < g^2$; otherwise terms of higher order in the spurions will be larger than these lowest order terms.

Our result is based on a simple scaling argument, and, as already noted, similar considerations have appeared in the literature [5,6]. However, one would expect a general description akin to the rules of generalized NDA, and we are not aware that such rules have been presented elsewhere. Therefore we now generalize the above discussion, derive simple rules for applications, and compare with the rules of NDA presented in [4]. In the notation of (6), it is difficult to account for operator coefficients such as $G$, which might be suppressed by additional powers of $g$ in the effective Lagrangian. It is also difficult to account for any dependence on $g$ in the high energy Lagrangian. We thus rewrite the condition in two steps. In step one we require (6) for the kinetic terms, and in step two we generalize (6) for the other terms.
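This scaling argument can also be checked numerically. The following is an illustrative sketch (hypothetical code, not from the paper) confirming that with $\beta = 1/\alpha = 1/16\pi^2$ every diagram, regardless of loop order, is of the same size $O(1/16\pi^2)$:

```python
import math

loop_factor = 1 / (16 * math.pi**2)   # per-loop suppression, in units Lambda = 1
alpha = 16 * math.pi**2               # assumed propagator size (alpha = 1/beta)
beta = 1 / (16 * math.pi**2)          # assumed vertex size

for P in range(1, 7):                 # propagators
    for V in range(1, 7):             # vertices
        L = P - V + 1                 # loops, from momentum conservation
        size = loop_factor**L * alpha**P * beta**V
        # every diagram comes out O(1/16pi^2), independent of P and V
        assert math.isclose(size, loop_factor)
```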
Let the canonical kinetic terms in the high and low energy Lagrangians be $O_{KE}(\phi, g)$ and $O_{eff,KE}(M, g)$, where we have made the dependence on $g$ and the fields explicit. We assume that the kinetic energy operators do not depend on parameters other than $g$. Therefore $O_{KE}(\phi, 1)$ and $O_{eff,KE}(M, 1)$ have $O(1)$ coefficients, and step one is to require the kinetic matching (13). From these equations we can solve $\hat{\phi}(\phi, g)$ and $\hat{M}(M, g)$; we now assume these relations are known.

Step two of the new matching is to require condition (14), where now, instead of requiring that $\hat{L}(\hat{\phi})$ and $\hat{L}_{eff}(\hat{M})$ have $O(1)$ coefficients, we require that $\hat{L}_{eff}(\hat{M}, g)$ is built with operators that have the same dependence on $g$ and other couplings as the operators in $\hat{L}(\hat{\phi}, g)$. To clarify this condition, consider a general operator $O(\phi, g)$ in $L$ that can contain a small parameter. The high energy matching condition in (14) becomes an equation that defines $\hat{O}(\hat{\phi}, g)$, and especially its dependence on $g$. We now transition to the effective Lagrangian, which is written in terms of corresponding effective operators. The correspondence means that the symmetry breaking structure, the dependence on couplings, and the dependence on $g$ are the same for both operators. From the low energy condition in (14) we directly find the form of any term in the effective Lagrangian.

The generalization to many operators $O_i$ is straightforward. However, we now require that all fields are in a linear representation and all operators are polynomial in the fields. Therefore we write the operators so that all the dependence on $g$ and other couplings is in the coefficient $c_i(g)$. We write a general Lagrangian in which the operators $c_k h_k(\phi)$ enumerate all terms, and we omit the dependence on $g$ in the coefficient. We first solve the field rescalings $\hat{\phi}(\phi, g)$ and $\hat{M}(M, g)$ from (13). For any type of field (scalar, gauge, or fermion), the kinetic term comprises two fields, so we find $\hat{\phi} = g\phi$ and $\hat{M} = gM$. Next we apply the high energy condition (14). If $s_k$ gives the number of strongly interacting fields in the operator $h_k(\phi)$, then, using the fact that $h_k$ is polynomial in $\phi$, the condition is solved for $\hat{c}_k = g^{-\chi_k} c_k$ with $\chi_k = s_k - 2$. Continuing, according to (16) and (17) the effective Lagrangian is built from operators corresponding to $\hat{c}_k h_k(\hat{\phi})$, where $n_{k,m}$ gives the number of strongly interacting effective fields in each operator. Combining these results allows us to explicate the powers of $g$ in each term of (20); in the result, $h_{k,m} \equiv h_{k,m}(M)$.

Now we have access to all elements required to estimate the coefficient of each term arising in the effective Lagrangian. The result can be encapsulated in a set of simple rules, as follows:

1. Divide each term by $g^2$.
2. Accompany each coupling by a factor of $g^{2-n}$, where $n$ is the number of strongly interacting fields in the corresponding high energy operator.
3. Accompany each strongly interacting field by a factor of $g$.
4. Fix the overall dimension by multiplying with $\Lambda$.

The only difference from Georgi's rules is the addition of item two. In QCD applications of NDA the high energy Lagrangian consists of the kinetic term and the mass term. Incidentally, for these operators $n - 2 = 0$ in item two above, and both analyses give the same result at each order in the spurions. For terms with more than two strongly interacting fields, like the NJL term which we exhibited in the introduction, the two analyses give different results.
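To make the rules concrete, the following minimal sketch (hypothetical code, not from the paper) tallies the power of $g$ in an effective coefficient. With the pions kept in the nonlinear representation $U = \exp(ig\pi)$, item three is automatic, and the coefficient power reduces to rule one plus a factor $g^{2-n}$ per spurion coupling; the sketch reproduces $c_1 = 1/g^2$ and $c_2 = 1/g^4$ from the text:

```python
def coefficient_g_power(coupling_field_counts, n_eff_fields=0, improved=True):
    """Power of g in an effective-Lagrangian coefficient, in units Lambda = 1.

    coupling_field_counts: for each coupling in the term, the number n of
    strongly interacting fields in the corresponding high-energy operator.
    n_eff_fields: explicit strongly interacting fields in the effective term
    (zero here, since U = exp(i g pi) already carries its own factors of g).
    """
    power = -2 + n_eff_fields               # rules 1 and 3
    if improved:
        for n in coupling_field_counts:     # rule 2: g^(2-n) per coupling
            power += 2 - n
    return power

# Mass term, spurion m from the operator Qbar m Q (n = 2):
print(coefficient_g_power([2]))                   # -2 -> c1 = 1/g^2 (both rule sets)
# NJL term, spurion G from the four-fermion operator (n = 4):
print(coefficient_g_power([4]))                   # -4 -> c2 = 1/g^4 (improved rules)
print(coefficient_g_power([4], improved=False))   # -2 -> c2 = 1/g^2 (Georgi's NDA)
```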
It should be noted that if one uses the nonlinear representation for the Goldstone boson matrix $U$, item three in the rules is taken into account by writing $U = \exp(ig\pi)$. We also found that the effective Lagrangian is effectively an expansion in the parameters $\hat{c}_k$. These should be less than one for the expansion to make sense. For example, in chiral perturbation theory with quark masses, $c_{m_Q} = m_Q/\Lambda$, where $m_Q$ is the mass of the quark and $\Lambda \approx 1$ GeV.

In this paper we have reconsidered NDA. Our results are compatible with those obtained earlier in e.g. [5,6]; our new contribution is the derivation of a general set of simple rules which are also able to account for irrelevant interactions. Our results are fully general and can be applied to supersymmetric as well as nonsupersymmetric field theories. We expect our results to be useful in the construction of effective field theories and their application in various contexts.
Direct immunofluorescence cannot be used solely to differentiate among oral lichen planus, oral lichenoid lesion, and oral epithelial dysplasia

Background/purpose: Some red and white lesions may have similar manifestations, making them difficult to diagnose. A direct immunofluorescence (DIF) assay can assist in making a final diagnosis of oral lichen planus (OLP). The aim of this study was to evaluate and compare the DIF profile in patients who had the clinical presentations of OLP and were histopathologically diagnosed with OLP, oral lichenoid lesion (OLL), or oral epithelial dysplasia (OED).

Materials and methods: The data were obtained from the medical records of 136 patients with the clinical presentations of OLP. Demographic information, histopathological diagnosis, malignant transformation, and DIF results were collected and analyzed.

Results: In this study, 117 patients (86.0%) were DIF-positive, while 19 patients (14.0%) were DIF-negative. The highest DIF-positivity rate was in the OLP group (88.9%), followed by the OLL (83.7%) and OED (81%) groups. There were no significant differences in DIF-positivity rate, type of immunoreactants, location, or interpretation among these groups. Shaggy fibrinogen at the basement membrane zone (BMZ) was the most common DIF pattern in all groups.

Conclusion: The DIF assay alone cannot be regarded as sufficient evidence for OLP, OLL, and OED differentiation. A histopathological examination is required to determine the presence of epithelial dysplasia or malignancy. To diagnose dysplastic lesions with the clinical manifestations of OLP, careful clinicopathologic correlation is mandatory. Due to the lack of scientific evidence to identify the primary pathology and the ongoing malignancy risk of epithelial dysplasia, meticulous long-term follow-up plays a crucial role in patient management.

Introduction

Oral lichen planus (OLP) is a chronic inflammatory disorder that frequently presents as bilateral symmetrical white reticular lesions with or without atrophic, erosive and ulcerative, and/or plaque-like areas [1]. OLP can manifest as multifocal, usually bilateral, roughly symmetrical red and white lesions or as desquamative gingivitis [2,3]. The disease affects about 1% of the global population, with most cases occurring in middle-aged women [4,5]. The most common complaint in Thai patients is a burning sensation, followed by pain, no symptoms, roughness, and other symptoms such as bleeding, dysgeusia, and xerostomia [5]. Although the etiology of OLP remains unclear, immune dysregulation of cell-mediated immunity has been suggested as the pathogenesis of OLP [2,6].

Oral lichenoid lesions (OLL) are lesions that do not exhibit the typical clinical or histopathological features of OLP but are compatible with the clinical or histopathological features of OLP, such as being asymmetrical and unilateral, along with lesions considered to be reactions to a dental restoration or drug-induced [1]. Oral epithelial dysplasia (OED) is a term that describes the histological changes in the oral epithelium caused by genetic instability that indicate a risk of malignant transformation [7,8]. The typical clinical features of OED are a single isolated lesion presenting as white plaque (leukoplakia), red plaque (erythroplakia), or ulceration; however, multifocal manifestations, such as advanced tobacco-related mucosal injury and proliferative verrucous leukoplakia, can also be found [2].
The histopathological features of OED are characterized by various degrees of atypical cellular changes (e.g., hyperchromatism, pleomorphism, increased nuclear/cytoplasmic ratio). Furthermore, a band-like, mostly lymphocytic inflammatory infiltrate can occasionally be seen; however, a mix of inflammatory cell types is more common [2,3]. Three or more lichenoid features (band-like inflammatory infiltrate immediately adjacent to the epithelium consisting mainly of lymphocytes, sawtooth rete ridge formation, interface stomatitis or lymphocyte infiltration of the basal layer of the epithelium, formation of Civatte (colloid) bodies, and basal layer degeneration) have been found in approximately 29% of OED and oral squamous cell carcinoma cases [9]. Most experts consider the presence of epithelial dysplasia to be a criterion for excluding a diagnosis of OLP [8]. However, overlapping features of OLP, OLL, and dysplastic lesions with lichenoid inflammatory infiltration make predicting the malignant potential of OLP challenging. The data are difficult to interpret because many studies did not use strict diagnostic criteria [10]. According to a systematic review, the overall mean rate of malignant transformation of OLP was 1.14% [11].

Red and white lesions in the oral mucosa can be OLP, OLL, lupus erythematosus (LE), chronic ulcerative stomatitis (CUS), other autoimmune diseases, or malignant conditions. These oral lesions can have similar manifestations, making it difficult to make a differential diagnosis based solely on their clinical appearance [12]. Direct immunofluorescence (DIF) detects immunoreactivity in tissue specimens by adding specific fluoresceinated antibodies against target antigens [13]. Fibrinogen deposition in a shaggy pattern at the basement membrane zone (BMZ), with or without deposition of multiple immunoglobulins at the colloid bodies (CBs), particularly immunoglobulin M (IgM), is seen in OLP [13-16]. In cases of aberrant clinical and/or histopathological features, a combination of clinical findings, histopathology, and a direct immunofluorescence assay can assist in making a final diagnosis of OLP [13,15]. However, few studies have evaluated DIF in epithelial dysplasia with a multifocal presentation. The aim of this study was to evaluate and compare the DIF profile in patients who had the clinical presentation of OLP (bilateral lesions with some degree of reticular component) and were histopathologically diagnosed with OLP, OLL, or OED.

Materials and methods

This retrospective study protocol was approved by the Ethics Committee of the Faculty of Dentistry, Chulalongkorn University (HREC-DCU 2022-020). The data were obtained from the medical records of patients with the clinical presentations of OLP (bilateral lesions with some reticular components) who attended the Oral Medicine Clinic, Faculty of Dentistry, Chulalongkorn University, Thailand from 2012 to 2021. Fig. 1 presents the diagram of the protocol design (see Fig. 2). Two biopsy specimens were taken: one was preserved in 10% buffered formalin and sent for histopathological diagnosis by oral pathologists at the Department of Oral Pathology, Faculty of Dentistry, Chulalongkorn University; the other was preserved in Michel's solution and submitted for a DIF assay to characterize the immunoglobulin G (IgG), immunoglobulin A (IgA), IgM, complement 3 (C3), and fibrinogen deposits at the Dermatoimmunology Laboratory, Department of Dermatology, Faculty of Medicine Siriraj Hospital, Mahidol University.
The patients were divided into three groups based on their histopathology report (OLP, OLL, or OED). The modified WHO criteria were used to determine the histopathologic diagnosis of OLP and OLL [17]. OED was diagnosed using histological evidence of epithelial dysplasia. To rule out oral lichenoid drug reactions, patients who took systemic medications regularly prior to biopsy were excluded from the OLP group. The collected data comprised the patient's age at the time of diagnosis, sex, medical history and medications, biopsy site, histopathological diagnosis, malignant transformation, and direct immunofluorescence assay, i.e., immunoreactants (IgG, IgA, IgM, C3, fibrinogen), patterns (e.g., shaggy, granular, linear), and interpretation. Descriptive statistics were used to analyze the data. The age differences between groups were determined using the Kruskal-Wallis H test. The Chi-square test was used to compare qualitative variables. The level of significance was set at 0.05. All statistical analyses were performed using IBM SPSS Statistics for Windows (version 28.0; IBM Corp., Armonk, NY, USA).

Results

Based on the histological findings, 136 patients (21 males and 115 females) with the clinical presentations of OLP were divided into three groups for DIF analysis: 72 patients in the OLP group, 43 in the OLL group, and 21 in the OED group. The median age at the time of diagnosis was 53 years, with a range of 19-86 years. The buccal mucosa was the most frequent biopsy site, followed by the gingiva, mucobuccal fold, tongue, labial mucosa, and hard palate. There were no significant differences in age, sex, or biopsy site among the OLP, OLL, and OED groups (Table 1).

In the OLL group, 19 cases (44.2%) demonstrated signs of basal cell degeneration, 6 cases (14%) had lymphocytic band infiltration, and the other cases had a diffuse lymphocytic infiltration or mixed inflammatory cell infiltration. Table 2 summarizes the clinical and histopathological diagnoses of the OLP, OLL, and OED groups.

In this study, 117 patients (86.0%) were DIF-positive, while 19 patients (14.0%) were DIF-negative. DIF positivity was detected in 88.9% of the OLP, 83.7% of the OLL, and 81% of the OED groups, respectively. There were no significant differences between these groups in DIF-positivity rate, type of immunoreactants, location, or interpretation. Table 3 provides additional details of the direct immunofluorescence findings.

The most common DIF pattern in all groups was shaggy fibrinogen at the BMZ, seen in 49 cases (76.6%) in the OLP group, 28 cases (77.8%) in the OLL group, and 14 cases (82.4%) in the OED group. Granular C3 at the BMZ was the next most common immunoreactant, in 34 cases (53.1%), 21 cases (58.3%), and 11 cases (64.7%), followed by IgM at CBs in 29 cases (45.3%), 12 cases (33.3%), and 6 cases (35.3%) in the OLP, OLL, and OED groups, respectively. Table 4 presents further details of the direct immunofluorescence patterns based on the type of immunoreactant.

In the OLP and OLL groups, fibrinogen deposition in combination with C3 was the most common immunoreactant combination, while fibrinogen alone was the most common immunoreactant in the OED group. Additional information is displayed in Table 5.

One case of malignant transformation was found in the OLP group. The details of this case have been published [18].
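As an aside, the headline null result for DIF positivity can be reproduced directly from the reported group counts (64/72, 36/43, and 17/21 DIF-positive); the following is an illustrative sketch using scipy, not the authors' SPSS analysis:

```python
from scipy.stats import chi2_contingency

# DIF-positive / DIF-negative counts per group, taken from the Results
#              positive  negative
observed = [[64, 8],    # OLP (88.9% positive)
            [36, 7],    # OLL (83.7% positive)
            [17, 4]]    # OED (81.0% positive)

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
# p comes out well above 0.05, consistent with the reported lack of
# significant differences in DIF positivity among the three groups
```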
In summary, the patient in question was initially diagnosed with OLP, and the DIF result was negative. During the follow-up period, a small, irregularly shaped ulcer appeared on top of the white plaque at the right buccal mucosa. The biopsy revealed superficially invasive squamous cell carcinoma. Another case of malignant transformation was observed; however, that patient did not meet the inclusion criteria of this study.

Discussion

Clinical examination alone can be subjective, resulting in overdiagnosis of OLP due to the clinical similarities between some lesions and OLP, or in misdiagnosis of malignancy or a dysplastic lesion [19,20]. The present study demonstrated that several dysplastic lesions clinically resemble OLP. Interestingly, almost all of these cases (20/21; 95.2%) had histopathological features that resembled OLP/OLL with some degree of inflammation. The presence of epithelial dysplasia along with the histopathological features of OLP raises questions about predicting the malignant transformation of OLP [8]. It can be difficult to distinguish between OLP and OED with lichenoid immune responses, because lymphocytic infiltration and basal cell destruction can act as confounding factors masking mild degrees of dysplasia; about 26% of OED lesions had a lichenoid histological appearance [7,9]. Mild epithelial dysplastic changes are not uncommon in oral inflammatory lesions [21]. Chronic inflammation is typically involved in developing dysplasia and cancer: long-term chronic inflammation can promote genomic instability, increasing the risk of cancer [22]. In contrast, OED can exhibit stromal inflammation as a response to dysplasia, resembling an inflammatory lichenoid infiltrate [8,21].

Recent molecular data revealed that OED with lichenoid features has transcriptomic and immunophenotypic profiles that are similar to OLP but distinct from OED, indicating that OED with lichenoid features is likely to represent an oral lichenoid inflammatory condition with a reactive dysplasia-associated inflammatory infiltrate [23]. Another study discovered that the proportion of loss of heterozygosity, the most predictive marker of malignant transformation, in OED with lichenoid features is similar to that found in other OED but different from that found in OLP [24,25]. Furthermore, OED with lichenoid features frequently contains genetic changes associated with malignancy risk [24].

The history of medications taken was the only significant difference in patient demographics; however, this difference occurred as a consequence of the exclusion of patients who routinely take medications from the OLP group. Our study found no significant differences in the DIF profiles among OLP, OLL, and OED. In mild and moderate epithelial dysplasia, the proportion of fibrinogen positivity in OED was 72.7% (8/11) and 60% (6/10), respectively. A 2012 study of direct immunofluorescence in cases of premalignant and malignant oral lesions demonstrated fibrinogen positivity in 39.6% (21/53) of mild epithelial dysplasia, 47.8% (11/23) of moderate to severe dysplasia, and 55.6% (10/18) of squamous cell carcinoma (SCC) [26]. Fibrinogen positivity on DIF therefore cannot be regarded as a pathognomonic finding for OLP. Fibrinogen is a plasma protein involved in the wound healing process [26]. It is suggested that the deposition is the result of damage to the basal cell layer and basement membrane zone. The possibility of a relationship between fibrinogen positivity and keratinocyte maturation disarray should be clarified [14].
Because of the high percentage of fibrinogen positivity in OED, a DIF assay cannot be used solely to differentiate between OLP, OLL, and OED. Although the DIF patterns of OLP are not specific, they can be used as an adjunctive tool to distinguish OLP from other red and white oral lesions, such as LE, CUS, and lichen planus pemphigoid (LPP), especially when the clinicopathological features of the lesion are overlapping or ambiguous [13-16,27].

In addition to the case in the OLP group, malignant transformation was observed in another case that did not meet the criteria but had completed the clinical, histopathological, and DIF evaluations. This 85-year-old female presented with a generalized erythematous atrophic area with white striae at the upper and lower gingiva. Her primary diagnosis was OLP; she was taking antihypertensive and hypolipidemic drugs. The DIF result was negative. After two years of follow-up and topical steroid treatment, a well-defined erythematous area with an irregular surface at the palatal gingiva of teeth 26-27 was observed. The biopsy result indicated well-differentiated SCC. However, the primary and subsequent biopsies were performed at different sites. This finding suggests that DIF-negative cases may be more likely to undergo malignant transformation; however, due to the small number of cases and the short follow-up period, we are unable to state this conclusively. Therefore, larger sample sizes of malignant transformation cases with DIF analysis are required.

The molecular events and carcinogenesis of OLP have been investigated. Although there is no evidence of the premalignant potential that characterizes epithelial dysplasia [2], it has been suggested that inflammatory cell infiltration within an OLP lesion is a mechanism of malignant transformation. Hypothetically, inflammatory cells within the lesion might produce nitric oxide, which then reacts with O2 to form 8-oxo-7,8-dihydro-2'-deoxyguanosine, resulting in a G-to-T nucleotide transversion. Another possible explanation is that DNA mutation is caused by the activity of the cyclooxygenase II enzyme within infiltrating inflammatory cells, resulting in the increased release of inflammatory cytokines and malondialdehyde, a carcinogenic metabolite that causes DNA damage [28]. Thus, the premalignant potential of OLP remains unresolved [2].

There is currently inadequate evidence to determine whether OLP and OLL have different risks of malignant transformation. Although patients who had dysplasia at primary diagnosis were excluded, a small subset of patients with OLP or OLL eventually developed SCC. Clinicians should arrange careful and regular long-term follow-ups [29]. It has been proposed that a dysplastic lesion undergoes malignant transformation by progressively accumulating key molecular changes in tumor suppressor genes and oncogenes, possibly in a specific sequence, until the last of the required changes completes the necessary cancer genotype and triggers invasion [7]. In OED with lichenoid features, it is unknown whether the dysplasia is the primary pathology that triggers a protective inflammatory response or a secondary event that occurs as a reaction to the intense inflammation [30].
Although the fibrinogen deposition in OED in the present study (bilateral lesions with some reticular component) was higher than that found by Montague et al. [26], this cannot be used to infer that it is the primary pathology of the lesion. The primary pathology is crucial for identifying a true case of malignant transformation in OLP and making appropriate management decisions [21,23]. As long as diagnostic uncertainty continues, clinicians should carefully correlate the clinical presentations and histopathological evaluations to achieve a final diagnosis [7,23]. Although most dysplastic lesions never transform, the presence and severity of epithelial dysplasia are the main factors for determining the risk of malignant transformation [1,7,8]. Clinicians must keep in mind that oral epithelial dysplasia, regardless of its primary pathology, increases the risk of malignancy and requires vigilant follow-up [21,30].

In conclusion, the DIF assay cannot be regarded solely as evidence for OLP, OLL, and OED differentiation. A histopathological examination is required to determine the presence of epithelial dysplasia or malignancy. To diagnose dysplastic lesions with the clinical manifestations of OLP, careful clinicopathologic correlation is mandatory. Due to the lack of scientific evidence to identify the primary pathology and the ongoing malignancy risk of epithelial dysplasia, meticulous long-term follow-up plays a crucial role in patient management.

Figure 2. Direct immunofluorescence revealed positive staining of fibrinogen at the basement membrane zone in A. oral lichen planus; B. oral lichenoid lesion; and C. oral epithelial dysplasia cases.
Table 1. Patients' demographic data and biopsy site.
Table 4. Direct immunofluorescence patterns by type of immunoreactant in direct immunofluorescence-positive cases.
Table 5. Immunoreactant combinations in the direct immunofluorescence-positive cases.
Sexual Activity in Adolescents and Young Adults through the COVID-19 Pandemic

During the COVID-19 pandemic, it has been postulated that the sexual life of adolescents and young adults has been impacted in various aspects, potentially affecting their well-being. Our aim is to investigate the potential changes in the sexual activity and relationships of adolescents and young adults during the COVID-19 pandemic. In general, a decrease in sexual desire was reported during the COVID-19 pandemic in both genders. Fewer sexual intercourses and bonding behaviors between partners were associated with loneliness and depressive symptoms. On the contrary, an increase in sexual desire was expressed by a few people, with masturbation being the most preferred means of satisfaction. The present paper highlights the multifaceted impact of COVID-19 upon the sexual life of adolescents and young adults during the ongoing pandemic. The changes observed in their sexual activity and relationships could provide the basis of future preventive and educational programs.

Introduction

In December 2019, the first cases of COVID-19 were reported in Wuhan, China, raising concerns due to rapid spreading. The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is the underlying cause of the coronavirus disease (COVID-19), which has spread worldwide, forcing the World Health Organization (WHO) to declare it a pandemic on 11 March 2020 [1,2]. Many countries imposed restrictive measures, such as lockdowns, social distancing, wearing masks, and frequent washing of hands throughout the day [3]. The fear of contracting the virus, as well as the public health measures, led people to diminish their visits to crowded places, and thus many have faced difficulties in personal relationships, especially adolescents and young adults [4-6]. The lack of peer-to-peer contact, due to home confinement and school closures, changed many aspects of life for youths. As a result, sexual activity, intimate relationships, access to contraception, protection from HIV or other sexually transmitted infections (STIs), and physical, mental, or emotional well-being were adversely affected [7,8]. The social distancing that was imposed consequently led to the restriction of contacts between individuals, and therefore has affected the frequency of sexual activity, the number of sexual partners, sexual desire and satisfaction, and the use of pornographic content [9]. According to the available literature, all these consequences seem to concern mainly younger individuals, such as adolescents and young adults [4-9]. The sexual life and activity of the above-mentioned age groups play an important role in forming healthy sexual relationships and expressing their sexuality [8,9]. Sexuality constitutes a meaningful aspect of human development, and the impact of COVID-19 on the sexual life of adolescents and young adults may significantly affect their sexuality. A thorough study of those effects will highlight the needs of youths during the pandemic and will provide substantial information in order to maintain their well-being. Thus, the purpose of this paper was to investigate the potential impacts and changes in the sexual activity and relationships of adolescents and young adults caused by the measures imposed due to the ongoing COVID-19 pandemic, and the potential consequences to their sexuality in general.
Methods

A search of the literature was conducted in the following databases: Google Scholar, PsycInfo, PubMed, SCOPUS, and ERIC, up to 14 June 2021. The algorithm used included: (sexual OR sex OR sexuality OR intimate) AND (health OR wellness OR life OR wellbeing OR well-being) AND (teens OR teenagers OR youngster OR youngsters OR adolescence OR adolescent OR adolescents OR "young adults" OR "younger adults" OR "Generation Z" OR juvenile OR juveniles OR youthful) AND (COVID-19 OR SARS-CoV-19 OR SARS-CoV-2 OR "2019-nCoV" OR "novel coronavirus"). The references of the eligible studies were searched through a snowballing technique, along with the relevant reviews.

The inclusion criteria were the following: studies that examined changes in the sexual activity and relationships of adolescents and young adults (11-24 years old) during the ongoing COVID-19 pandemic were considered eligible. Studies that incorporated adolescents and young adults in a significant percentage were also included. Sexual activity must refer to sexual desire, frequency of intercourse, number of sexual partners, sexual satisfaction or dissatisfaction, and sexual preferences among adolescents and young adults. Concerning the type of study, case reports, cohort studies, cross-sectional studies, case series, and case-control studies were included, while no language restrictions were imposed. The selection of studies was conducted by two authors (C.S. and A.S.) who worked independently. The following variables were used to extract data from each study: title of the article, name of first author and year of publication, region/country where the survey was conducted, language, study period, study design, sample, sample size, age range and selection of sample, ascertainment and/or association with the COVID-19 pandemic, outcomes, statistical analysis, and main findings. Quality assessment was performed in order to present accurate results; for this purpose the Newcastle-Ottawa Scale for cross-sectional studies [10] and cohort studies [11] was used.

Selection of Studies

The database search retrieved 20,246 publications, of which 1050 were duplicates; 16 studies (49,078 individuals) were finally considered eligible following the inclusion criteria mentioned above [12-27]. The eligible studies included data from various countries: three from China [12-14], four from the USA [15-17,25], two from the UK [18,19], three from Italy [20,21,26], and three more from France [22], Poland [23], and Turkey [24], while one included data from various countries [27]. Ten were cross-sectional studies and five were cohort studies, while one study used mixed methods. Eleven studies referred to the COVID-19 pandemic or outbreak in general, whereas five studies referred to lockdowns (Table 1). Selected main findings reported in Table 1 include the following:

- A reduction in sexual desire was experienced by 25%, while 44% reported a decrease in the number of sexual partners, with men reporting this more often (53% vs. 30%). A lower frequency of sexual intercourse was reported by 37%, including 26% of married men and 28% of married women. A decrease in the number of sexual partners was reported by married men (49%) and married women (29%). A total of 32% of men and 39% of women experienced a reduction in sexual satisfaction and in risky sexual behaviors. Only 18% of men and 8% of women experienced increased sexual desire.
- (Multinomial logistic regression) In the past month of sexual behavior, half of all adults reported change, mostly a decrease. The younger the age of their children (under the age of five), the greater the likelihood of stability and/or increased partnered behaviors, whereas having elementary-aged children was often associated with a decrease in these behaviors. Past-month depressive symptoms and loneliness were associated with both reduced partnered bonding behaviors and partnered sexual behaviors. The greater the perceived risk of COVID-19, the lesser the reported solo and partnered sexual behaviors. Stability in partnered sexual behaviors was reported in people with greater COVID-19 knowledge.
- [...] and lower PSSS and SCS compared to rural participants, while only the COVID-19-related stress (p < 0.001) and SCS (p < 0.05) differences were significant. These results showed that differences in COVID-19 stress, PSSS, and SCS were significant for gender, years, and urban/rural status.
- Most couples responded that they did not perceive any differences in their sexuality, despite the pandemic's consequences. Some female participants reported a decrease in pleasure, satisfaction, desire, and arousal; the main reasons appeared to be worry, lack of privacy, and stress. Even though participants showed high levels of resilience, the negative aspects of lockdown could affect their quality of sexual life.
- There was a statistically significant association between the workplace and the change in FSFI scores before and during the COVID-19 pandemic (p < 0.01). The largest decrease in FSFI score was noticed in the group of women who did not work at all (5.2 ± 9.9).
- A total of 39.9% of the sample reported engaging in sexual activity at least once per week on average and was classified as sexually active during lockdown. The mean number of sexual activities was 1.75 in the overall population and was significantly higher in men than in women. The prevalence of sexual activity significantly increased from 33.5% in people who were self-isolated for 0-5 days to 47.0% in those who were self-isolated for 11 days. Sexually active adults were mostly male and of a younger age, married or in a domestic relationship, employed, with a high annual household income, and consuming alcohol, while the number of chronic physical conditions was significantly lower in the sexually active than in the non-sexually active group.

Sexual Activity and Behavior

According to the studies, a general decrease in sexual desire during the COVID-19 pandemic was reported by individuals [12,13,15,16,19-24]. More specifically, in a Chinese study conducted on youth (age range: 15-35 years old), 41% reported fewer intercourses, while 20% also reported a decrease in alcohol consumption before or during sex [12]. Men more often expressed a decrease in sexual partners (53%) compared to women (30%), while sexual satisfaction was reduced in both genders (32% of men and 39% of women), according to another Chinese study, in which the majority of the sample was 15-30 years old [13]. Gender and age differences could predict sexual dissatisfaction, along with potential COVID-19 infection during contact and the occurrence of depressive symptoms, as stated in an Italian study by Cocci et al. (mean age 21 years old) [20].
Following the above, another study in Italy (age range 18-40 years old, 33.4% of whom were 18-30 years) presented a decrease in sexual intercourse by individuals during quarantine, mainly due to a lack of privacy and psychological stimuli [26]. According to Luetke et al., in the US, during home isolation and social distancing, conflicts were inevitable, leading to lower levels of sexual satisfaction, mostly in men (a significant percentage (36.7%) aged 18-39) [16]. Loneliness and depressive symptoms were associated with fewer sexual intercourses and bonding behaviors between partners in the US study by Hensel et al. (38.3% of participants aged 18-39) [15]. According to the Chinese study by Jianjun et al., women reported higher scores on the Sexual Compulsivity Scale (SCS) than men, older participants presented higher SCS scores than younger ones (age range between 17 and 24 years), and individuals living in urban areas reported lower SCS scores than those living in rural areas [14]. The study by Fuchs et al. in Polish women (mean age: 25.1 years) reported a decrease in Female Sexual Function Index (FSFI) scores that was associated with the lack of work before and during the COVID-19 pandemic [23].

On the other hand, a small increase in sexual desire was expressed by men and women in China, Turkey, and France [13,22,24]. Masturbation was a preferred means of satisfaction, through pornographic content, during quarantine [12,20,22,25]. Sexual activity was more prevalent in people who had self-isolated for 11 days or more (47.0%) than in those who had self-isolated for 0-5 days (33.5%) (32.4% were 18-34 years), according to a UK study [19]. Higher knowledge of COVID-19 consequences was associated with more stable sexual behaviors among partners (a significant percentage (38.3%) aged 18-39) [15]. In the same vein, a significant number of individuals in an Italian study did not report a reduction in their sexual desire (71.3%, with a large proportion of the sample being 18-30 years) [26].

Concerning couples, a decrease in sexual desire was reported by 10.4% of females and 9% of males in a French study (the majority of the sample were 15-30 years old) [22], while a Chinese study reported fewer instances of sexual intercourse by both genders (this study also had a majority of the sample in the 15-30-year age group) [13]. Having children under five led to greater stability or increase in partnered sexual behaviors, while having older (elementary-aged) children led to a decrease in those behaviors, as evidenced by Hensel et al. in the USA [15]. Although many couples reported no difference in their sexual activities in general, in an Italian cross-sectional study (61.5% were under 34 years old) 12.1% of men and 18.7% of women stated that there was an increase in their sexual desire [21].

The use of technology, including dating or hook-up apps, decreased, as evidenced in a study on 15-40+-year-old US men having sex with men [17]. Those apps were used by young people in order to stay in touch, but not for face-to-face interactions; thus, the opportunities for sexual intercourse were limited. The reduction of sexual activity among young people (under 17 and over 18 years old) could be reflected in the reduced demand for sexual health services (SHS), as stated by Thomson-Glover et al. in the UK [18].
Findings supporting the above data were also presented in a study including adolescent sexual minority men (ASMM) (14-17 years old), who used social media and virtual means of communication to stay in touch with their sexual partners [25]. In the same context, in a study including gay, bisexual, and other men who have sex with men (GBMSM) (age range 18-35 years old), the use of social media for communication was reported, while an impact on sexual life during the ongoing COVID-19 pandemic was expressed [27].

Risk of Bias

The majority of studies were cross-sectional (n = 10), and six of them scored high (either 9 or 8) on the Newcastle-Ottawa scale. In some cases the selection of the sample was detailed, but in six of them the non-responder rate was not justified. The ascertainment of the exposure was implemented through online questionnaires, due to COVID-19 restrictions, and these were not always validated. Although the control of confounders was performed through appropriate statistical analysis, the outcome was mainly assessed through self-report questionnaires and may not be totally reliable. Furthermore, the five cohort studies were of good (n = 2) and fair (n = 3) quality.

Discussion

The ongoing COVID-19 pandemic seems to have multiple effects on the sexual life of youth. Adolescents and young adults are the age groups who might be less vulnerable to the virus, but they seem to suffer greatly from its psychosocial consequences [28]. Social distancing, school closure, and restriction of activities lead to a reduction of any kind of social contact. Additionally, as they were obliged to stay inside most of the time, adolescents and young adults were subjected to increased parental monitoring, which reduced independence, physical interaction with peers, and privacy [29]. Subsequently, according to available reports, a decrease was observed in these age groups concerning partnered sex, sexual behaviors, and relationships [4,29]. On the other hand, an increase was recorded in online social connections, as 65% of teenagers used texting or interaction via social media more often than usual [29]. Additionally, according to our findings, knowledge of COVID-19 consequences was associated with more stable sexual behaviors among partners aged 18-39 [15]. Our study also detected that the use of dating or hook-up apps decreased during COVID-19 restrictions, according to a study on men having sex with men (age range 15-40 years old) [17]. Nevertheless, online connections seem to be rather important, as they offer options to connect despite the social distancing and stay-at-home orders [29].

Another finding was the reduction in demand for sexual health services among young people, mainly due to fear of infection [18], possibly associated with problems in accessing condoms, HIV and STI testing, and treatment services, leading to increased rates of sexually transmitted infections and unintended pregnancies among youth [4]. For adolescents and young adults considered to be vulnerable, especially those who are part of the LGBTQ community and had to face discrimination, violence, and lack of access to healthcare, the COVID-19 pandemic created an even more hostile environment. Thus, a deterioration in their mental health and well-being was noted, including their sexual life [30,31]. On the other hand, during COVID-19 and due to the circumstances, methods such as telehealth, home-based sexually transmitted infection screening, and contraceptive delivery were developed [4].
According to the literature, these methods were considered safe, effective, and acceptable to youth; thus, some of them could be adopted afterwards in order to provide better health care concerning the sexual health of these age groups [4]. Inaccessibility of sexual and reproductive healthcare services seems to be one of the main causes of increased rates of sexually transmitted infections among adolescents and young adults, a fact that was also highlighted during the pandemic [4,18]. Furthermore, it seems that the COVID-19 pandemic has exposed adolescents, and mostly girls, to multiplied risks concerning their sexual health, such as sexually transmitted infections, including HIV and human papillomavirus, as well as unintended pregnancies [30]. According to reports, an increase was also observed in sexual and gender-based violence, probably the result of the difficulty in accessing relevant services such as intervention programs [30]. Thus, the pandemic seems to have additionally highlighted the need for better organization and the development of youth-friendly and easy-to-access sexual and reproductive healthcare services.

Concerning the limitations of this study, the COVID-19 pandemic is an ongoing phenomenon, and thus its impact on the sexual activities of adolescents and young adults needs to be further tested. In order to provide a global understanding of the relevant effects, the examination of diverse groups should be a priority. Confinement measures and social distancing created a more complex reality, affecting all aspects of social life; the interplay with sexual life should be monitored in a more systematic way. Furthermore, the majority of the studies were cross-sectional, providing no long-term results, while self-reported assessments limited the validity of the studies.

Conclusions

In conclusion, the present paper highlights that sexual activity, an important aspect of adolescents' and young adults' lives, was reported to be considerably affected during the ongoing COVID-19 pandemic. The changes observed in sexual activity and relationships could play an important role in forming preventive and educational programs, in collaboration with parents, caregivers, teachers, and medical staff, aimed towards good sexual health and well-being.
A Rare Case of Neuroendocrine Tumor in a Patient With Neurofibromatosis Type 1: Is There Any Association?

Neurofibromatosis type 1 (NF1) is an autosomal dominant condition characterized by café-au-lait spots, cutaneous neurofibromas, axillary and inguinal freckling, and iris Lisch nodules; however, the presentations vary greatly, even within families. NF1 is also a recognized risk factor for the development of malignancy, particularly malignant peripheral nerve sheath tumors (MPNST), optic gliomas, other gliomas, and leukemia. Nevertheless, the occurrence of lung cancer in a patient with neurofibromatosis type 1 is a rare phenomenon. Here we present a case of neuroendocrine tumor in a patient with neurofibromatosis type 1, highlighting the association between the two diseases. This case report also aims to raise awareness of possible malignancies in patients with neurofibromatosis type 1.

Introduction

Neurofibromatosis type 1 (NF1), formerly known as von Recklinghausen's disease, is an autosomal dominant disease that is one of the most prevalent genetic disorders [1], with an estimated prevalence of one in 3000 births [2]. NF1 is caused by loss-of-function mutations in the NF1 gene, located on chromosome 17q11.2, which encodes neurofibromin 1, a protein related to tumor suppression [3]. Pigmented skin lesions known as café-au-lait spots, axillary freckling, and cutaneous neurofibromas are all symptoms that can be observed clinically in NF1. NF1 is a risk factor for malignancies such as malignant peripheral nerve sheath tumors, gliomas, and leukemia [1]. However, the occurrence of a neuroendocrine tumor of the lung is rare. In light of this, we report the case of a 37-year-old female with neurofibromatosis type 1, followed since the age of 12 years, who developed a neuroendocrine tumor of the lung. The relation between NF1 and neuroendocrine tumor of the lung is discussed.

Case Presentation

A 37-year-old female, lifetime non-smoker, with congenital kyphoscoliosis has been followed since the age of 12 years for neurofibromatosis type 1 (NF1), with no other personal or familial history. The patient reported a 10-year history of grade I chronic dyspnea. She was admitted to the pulmonology department at our hospital after her condition started worsening two months earlier, with the progression of her dyspnea to grade III associated with a dry cough and xerostomia. At her first clinical presentation she was apyretic (37.2°C). On clinical examination, kyphoscoliosis with pectus carinatum was noted, along with multiple pigmented skin lesions (café-au-lait spots) and neurofibromas all over the trunk and back (Figures 1A, 1B). The pulmonary examination found decreased vocal fremitus and breath sounds in the right lung with dullness to percussion, while the examination of the left lung was normal. Otherwise, the patient was conscious and alert, following all commands, with well-preserved motricity in all four limbs; she was normotensive with a blood pressure of 110/62 mmHg, a heart rate of 75 beats per minute, a respiratory rate of 18 breaths per minute, and an oxygen saturation of 95% on room air. The rest of the physical examination revealed no abnormalities. A chest x-ray was performed and revealed a homogeneous fluid-tone opacity in the upper two-thirds of the right hemithorax, associated with a heterogeneous opacity in the lower third of the same side, with pulmonary hyper-transparency and widening of the intercostal spaces (Figure 2).

FIGURE 2: Chest x-ray of the patient.
The image shows a homogeneous fluid-tone opacity in the upper two-thirds of the right hemithorax, associated with a heterogeneous opacity in the lower third of the same side, with pulmonary hyper-transparency and widening of the intercostal spaces.

Due to highly suspected malignancy, a CT scan of the thorax and abdomen was obtained and showed hypodense paravertebral masses, mostly necrotic, with extension through the intervertebral foramen into the spinal canal next to T3, compressing the lung and surrounding the abdominal aorta, associated with multiple subcutaneous and muscular lesions (Figures 3A-3C).

FIGURE 3: Post-contrast thoracoabdominal CT scan. (A-C) Axial sections showing hypodense paravertebral masses, mostly necrotic, with extension through the intervertebral foramen into the spinal canal next to T3 (yellow arrow), compressing the lung and surrounding the abdominal aorta, associated with multiple subcutaneous and muscular lesions (orange arrow).

A bronchoscopy with biopsy was performed and showed a bud in the right bronchial tree completely obstructing the lower lobe (Figure 4), which was biopsied (Figures 5A-5C), leading to the diagnosis of neuroendocrine tumor. The patient's case was reviewed by the thoracic surgery team, and the decision was made not to opt for operative treatment. The patient was then referred to the oncology department for further treatment.

Discussion

As mentioned earlier, NF1 is a risk factor for the occurrence of certain malignancies, including lung cancers. NF1 increases the sensitivity of the lungs to cigarette smoking, which in turn increases the risk of interstitial lung disease, which further raises the risk of cancer development [5]. However, the association of lung cancers with NF1 is uncommon and rarely reported in the medical literature [6]. To explain the link between NF1 and lung cancer, two basic hypotheses have been proposed. The first is that since NF1 is a potential risk factor for interstitial lung diseases, the scars or bullae secondary to the latter can give way to the development of tumors [6,7]. The second is the inactivation of the p53 tumor suppressor gene secondary to the deletion of chromosome 17p. Inactivation of p53 has been linked to the development of small-cell lung cancer in individuals with NF1, and the p53 mutation was detected in half of the patients with non-small-cell lung cancer [6]. This underscores that NF1 is a potential risk factor for lung cancer in non-smoking patients.

The association of pulmonary neuroendocrine tumors with NF1 is rarely reported in the medical literature. To the best of our knowledge, only one case has been reported in the literature, concerning a 45-year-old patient with NF1 and active smoking [7]. Lung neuroendocrine tumors are a rare type of tumor originating in neuroendocrine cells with amine precursor uptake and decarboxylation (APUD), derived from Kulchitsky cells [8]. Pulmonary carcinoids are common in the fourth to sixth decades of life, with a median age of 45 years [8]. In addition, smoking is not considered a risk factor, unlike for small-cell lung cancer (SCLC) and large-cell neuroendocrine carcinoma (LCNEC) [9]. As our case illustrates, the patient had no history of active or passive smoking. The typical carcinoid is 10 times more frequent than the atypical carcinoid, which is characterized by a high metastatic potential in 50% of cases [10]. The clinical presentation of pulmonary neuroendocrine tumors is variable, depending on the location, type, and size of each tumor.
Nevertheless, the most frequent clinical manifestations in the case of carcinoid tumors are dyspnea, cough, chronic respiratory infections, and hemoptysis [11]. Our patient presented to our department with progressive dyspnea which had worsened two months earlier, associated with a dry cough and xerostomia. In the case of a peripheral pulmonary carcinoid, the clinical picture is generally asymptomatic and discovery is most often incidental [11]. Although not common, paraneoplastic syndromes can appear. For example, carcinoid syndrome, which is considered characteristic, is more common in gastrointestinal tumors and constitutes only 1-3% of tumors of pulmonary origin [12]. With routine chest x-rays, more than 40% of lung neuroendocrine tumor (LNET) cases are discovered incidentally [13]. Chest CT with contrast remains the gold standard. The most common radiographic findings are small, spherical pulmonary cysts and asymmetric, thin-walled bullae with apical prominence. In addition, bullae and cysts are thicker and more circumscribed in patients with NF1 [7]. For the identification of distant metastases, somatostatin receptor PET is of great help [14]. Because most malignancies are central, a biopsy sample is often obtained using fiber-optic bronchoscopy. A transthoracic biopsy or aspiration may be the first option if the tumor is peripheral [12]. The confirmatory diagnosis is histopathological. Surgical resection with maximal preservation of the lung parenchyma is the treatment of choice in stages I and II for typical carcinoid (TC) and atypical carcinoid (AC). In the case of tumors of peripheral location, a segmentectomy or a lobectomy with regional lymphadenectomy should be performed [10][11][12][13][14]. Interventional bronchoscopy is reserved for patients with a high surgical risk. Local radiotherapy is indicated if the patient refuses surgery [14]. Adjuvant treatment, with a combination of radiation and chemotherapy, is utilized following surgery for AC and LCNEC patients, with demonstrable survival advantages [14]. Somatostatin analogs result in remission in up to 10% of symptomatic patients with hormone-active carcinoid tumors and in disease stability in 30-50% [15]. The prognosis mainly depends on histological, biological, and clinical elements. Carcinoid tumors diagnosed at tumor, lymph node, and metastasis (TNM) stage I and II are operable with favorable five-year survival, unlike LCNECs and SCLCs, which are often diagnosed late with a very low survival [12].

Conclusions

NF1 represents an important risk factor to consider in the occurrence of even rare lung cancers, which can be life-threatening for the patient. Furthermore, pulmonary neuroendocrine tumors remain rare, presenting with variable and sometimes misleading clinical pictures. Imaging is necessary for the establishment of the diagnosis, subsequently accompanied by a tissue biopsy. The treatment is controversial, but resection of the tumor remains the curative treatment, especially in stages I and II of the TNM classification.

Additional Information

Disclosures

Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work.
Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Can an Increase of Infrastructure Spending Contribute to Higher Potential Output in the Medium and Longer Term?

This discussion of the recent economic literature concerns some issues regarding potential output and the related policy implications. It is not clear to what extent potential output growth has been affected by the recent crisis. Current stabilization policies, based on the existence of output gaps and on public debt sustainability, might then not be appropriate to mitigate cyclical fluctuations effectively and to stimulate economic growth. The recent empirical evidence on the determinants of potential output, i.e. the origin of cyclical fluctuations, leads to greater uncertainty in the measurement of potential output. Moreover, the existing methods for estimating potential output present some weaknesses which reduce the reliability of the estimation results. Focusing on the European case, the measure of potential output is treated as useful guidance for policy. In particular, the Stability and Growth Pact and the Treaty on Stability, Coordination and Governance in the Economic and Monetary Union refer to the concepts of potential output, output gap and structural budget balance. Recently, academics and some policy makers have criticized these measures and the related austerity policy because they worsened the economic situation. According to recent theoretical and empirical contributions, it is important to rethink the role of fiscal policy, focusing on fiscal stimulus and in particular on additional infrastructure spending, because it can positively affect GDP as well as potential output. A part of this literature discusses extensively how public capital affects the economy.

The Origins of the "Potential Output" Concept and Its Relation with Inflation

According to modern macroeconomic theory, and in particular to the New-Keynesian view, the economic system may not be producing at its potential level. Potential output, defined as the level of output achievable in the absence of nominal rigidities, implies a full use of production factors with a given technology. At the level of potential output the actual rate of inflation is equal to the expected rate of inflation. When actual output is not at the level of potential output, inflationary pressures occur. This relation is well explained by Jahan and Mahmud [1] as follows: the output gap is an indicator of the relative demand and supply components of economic activity. It measures the degree of inflation pressure in the economy: when actual output is greater than potential output, prices will rise in response to demand pressure in key markets. On the contrary, when actual output is below potential output, prices will fall as a consequence of weak demand. The concept of potential output was born in 1962, when Okun spoke on the significance and measurement of potential GNP. Okun defined potential output as the maximum level of output achievable without creating inflation, linking the concept of maximum potential output with the criterion of an unemployment rate consistent with zero inflation. If current output diverges from potential output, output gaps emerge from over- or underutilization of productive capacities.
In a scenario of negative gaps, the "attainable" level of entrepreneurial profits, household incomes, and long-term oriented investments in production facilities, installations, research and development is lower than in a situation of full utilization of resources. A negative output gap might indicate a sluggish economy and portends a declining GDP growth rate and a potential recession. Alternatively, a positive output gap indicates an overutilization of resources, forcing businesses and employees to work beyond their maximum efficiency level. A positive output gap can spur inflation in an economy because both labor costs and prices of goods can increase. Potential output becomes, then, the main driver for stabilization policy because the existence of output gaps (positive and negative) implies macroeconomic inefficiency. According to Okun ([2] [3]), an effective stabilization policy mitigates cyclical fluctuations in the utilization of the current output potential and furthers economic growth (Hauptmeier et al. [4]). Before Okun's law, Phillips [5], focusing attention on the labour market in Great Britain, validated a negative relationship, stable in the long run, between the unemployment rate and the rate of change in nominal wage rates. In particular, he demonstrated that wages regularly decreased when unemployment was high and sharply increased in a state of full employment. Two years later, Samuelson and Solow [6] replaced the rate of change of nominal wages with the rate of inflation, based on some assumptions on the increase in productivity and profit mark-ups. In doing so, they noted a stable trade-off between inflation and unemployment that determined a goal conflict between monetary stability and full employment. Consequently, policymakers had to choose between two scenarios: 1) low inflation coupled with high unemployment or 2) full employment coupled with high inflation. The idea that full employment can only be possible at the cost of inflation was subject to some criticism by Phelps [7] and Friedman [8]. The authors stated that the concept of a Phillips curve stable in the long run was incompatible with rational economic behavior, because what matters for employees is the real wage. Thus no definite and stable relation between unemployment and inflation exists. The authors affirmed that in a state of labor-market equilibrium the level of employment is compatible with any rate of inflation as long as nominal wages change in step. Following this reasoning, Friedman introduced the term "natural rate of unemployment" for the level of unemployment that corresponds to theoretical full employment. Nevertheless, Friedman [8] argued that in the short run an increase of inflation might reduce statistical unemployment. He justified this situation with the presence of information asymmetries between employers and employees. Sometimes employers have information about the progression of prices in advance of their employees. In other words, workers and their representatives believe that nominal wage increases in phases of expansion correspond to a higher purchasing power and accordingly raise labor supply. This situation is called "money illusion", and expansionary stabilization policy can generate inflation that is underestimated. In the end, workers will learn from their mistakes and correct their expectations of future inflation on the basis of the observed rise in inflation (under the hypothesis of adaptive expectations).
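The accelerationist logic just described can be illustrated with a minimal simulation of the expectations-augmented Phillips curve under adaptive expectations. The sketch below, in Python, uses purely illustrative parameter values of my own choosing and is not drawn from any of the cited works.

# Expectations-augmented Phillips curve: pi = pi_e + beta * (u_star - u),
# with adaptive expectations pi_e(t) = pi(t-1). All numbers are illustrative.
beta = 0.5        # slope of the short-run Phillips curve
u_star = 5.0      # natural rate of unemployment, %
u_target = 4.0    # authorities hold unemployment below the natural rate
pi = pi_e = 2.0   # initial inflation and expected inflation, %

for year in range(1, 6):
    pi = pi_e + beta * (u_star - u_target)  # actual inflation outruns expectations
    pi_e = pi                               # workers update their expectations
    print(f"year {year}: inflation = {pi:.1f}%")
# Output: 2.5%, 3.0%, 3.5%, ... inflation keeps accelerating as long as
# unemployment is held below the natural rate.

Holding unemployment one point below the natural rate buys no permanent employment gain: it only adds a constant step to inflation every period, which is the vertical long-run Phillips curve in action.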
In sum, according to Friedman [8] and Phelps [7], an inverse relationship between inflation and unemployment can only prevail in the short run. The long-term Phillips curve is a vertical line at the natural rate of unemployment, so inflation and unemployment are unrelated in the long run. This implies that in the long run monetary policy is neutral with respect to real economic variables (Hauptmeier et al. [4]). Based on Friedman's [8] definition of the natural rate of unemployment, considered by most mainstream economists to coincide with the non-accelerating inflation rate of unemployment, or NAIRU, the concept of potential output is implicitly defined as the level of national income compatible with the natural rate of unemployment. This is the basis of what Blanchard calls the "divine coincidence", according to which stabilizing inflation is equivalent to stabilizing the welfare-relevant output gap. In particular, at the NAIRU rate stable inflation goes along with the maximum employment which can be attained given the production capacity of the economy. Consequently, any stabilization policy aiming at displacing the unemployment rate from its natural location causes only inflation gaps in the long run, without any increase of employment. This implies also that the NAIRU is stable and independent of actual unemployment rates, i.e. its changes depend on changes in its structural determinants (Stirati [9]). This interpretation was empirically successful in the seventies, but later empirical estimates of the NAIRU proved to conflict with the theory. Firstly, they were found to vary a lot, as testified by the creation of the new concept of a time-varying NAIRU 1 (Blanchard [10]; Gordon [11]). Secondly, changes in the NAIRU appeared to follow those in actual unemployment, in contrast with the theory, according to which NAIRU variations depend on changes in its underlying determinants. Reference [12] and other economists started addressing this problem back in the 1980s, developing models with hysteresis. Next, Pichelmann and Schuh [13] stated that the observed trend increase in actual unemployment cannot be explained by changes in the basic determinants of equilibrium unemployment; unemployment may then be strongly dependent on its own history ("hysteresis"). Consequently, current equilibrium unemployment is not independent of past actual unemployment, and demand shocks end up having longer-term supply consequences (Pichelmann and Schuh [13]). Reference [14] questioned the dependence of the estimated NAIRU on underlying determinants such as labour market institutions. They investigated the relation between the estimated NAIRU and labor market institutions for 20 OECD countries over the past forty years, showing that: 1) this relation does not appear to be stable over time and 2) the most comprehensive available measures of institutions and policies can only account for a minor part of the differences in the evolution of unemployment across these countries. Reference [15] show that a large negative output gap can produce persistent effects on the level of potential output, via hysteresis effects in the labor market or reduced investment lowering the capital stock. Reference [16], in its advice to the UK government, found that hysteresis from unemployment effects reduces potential GDP by 0.1 percent for each 1 percentage point increase in the cumulative annual output gap.
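To get a sense of the magnitudes this rule of thumb implies, the following back-of-envelope sketch applies it to a hypothetical recession path; the output gap figures are invented for illustration, not taken from any dataset.

# Hysteresis rule of thumb cited above: 0.1% of potential GDP is lost per
# 1 percentage point of cumulative annual output gap. The gap path is invented.
HYSTERESIS_COEFF = 0.1                        # % of potential GDP per pp of cumulative gap
output_gaps = [-1.0, -3.0, -4.0, -2.0, -1.0]  # hypothetical annual gaps, % of potential

cumulative_gap = sum(-g for g in output_gaps if g < 0)  # 11 pp over five years
potential_loss = HYSTERESIS_COEFF * cumulative_gap      # 1.1% of potential GDP

print(f"cumulative negative gap: {cumulative_gap:.0f} pp")
print(f"implied permanent loss of potential GDP: {potential_loss:.1f}%")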
Reference [17], based on data for 20 industrial countries in the period 1960-2013, and Blanchard [10], focused only on the United States, reached the following empirical results: 1) there exists hysteresis in output, implying that Central Bank policy should focus not only on price stability but also on unemployment (pure price stabilization measures may have significant costs in terms of high and persistent unemployment). 2) There exists a relation between the level of unemployment and the level of inflation, running from the former to the latter; this relation looks like the old downward-sloping Phillips Curve, and it is weak, as shown by the large standard errors of the residuals. 3) The Phillips Curve is currently rather flat, implying that large changes in the unemployment rate are necessary to bring about significant changes in inflation. From the above discussion it emerges quite clearly that market forces by themselves do not bring output and employment to their natural path; stabilization policies are therefore needed. One of the main drivers of these measures is the concept of potential output, which is the parameter to take into account to mitigate cyclical fluctuations and stimulate economic growth.

1 The result of a time-varying NAIRU implies that: 1) the rate depends on the actual path of unemployment, which is a function of changes in aggregate demand, and 2) changes in the unemployment rate tend to be persistent; consequently, after a recession (or a boom) the unemployment rate does not return to an equilibrium rate which is determined independently of the recession (or boom) itself.

In a more general way, in defining stabilization measures it is important to consider the following facts: 1) the concept of potential output defined as the level of national income compatible with the natural rate of unemployment is not validated by recent empirical results, so it is necessary to investigate further the relationships underlying the concept; 2) the so-called "divine coincidence", according to which stabilizing inflation is equivalent to stabilizing the welfare-relevant output gap, does not hold, as shown by the recent empirical results; and 3) monetary policy might not be successful in a) pursuing unemployment and inflation targets and b) restarting economic growth 2. That said, the paper provides an overview of both the theoretical and the empirical recent literature on the concept of potential output, with a special focus on the European case. In particular, some issues on potential output are discussed because they have direct implications for defining appropriate economic policies. According to this literature, fiscal policy as a macroeconomic tool should be revised, also because some theoretical assumptions, such as the relations between potential output, output gap, unemployment and inflation, do not seem to occur in reality, as shown by the empirical evidence. Given the inability of the economic system to generate persistent growth, there is a growing consensus that the most appropriate policy response should be based on fiscal stimulus and in particular on additional infrastructure spending. The article is organized as follows. Section 2 discusses the underlying relationships to potential output and the related policy implications. Section 3 describes the alternative methods of estimating potential output. Section 4 focuses on the potential output of the Euro area after the recent crisis and the policy response of the European Institutions.
Section 5 discusses the relation between potential output and additional infrastructure spending, taking into account also the recent empirical contributions focused on the Euro area. Finally, Section 6 presents conclusions.

The Underlying Relationships to Potential Output and the Related Policy Implications

Potential output can be considered as an optimal benchmark for actual GDP, which can rise or fall according to cyclical fluctuations. As explained before, a positive output gap happens when demand is very high, while a negative output gap occurs when the economic system is not producing at full capacity, due to weak demand. When an output gap occurs, implications for monetary policy and/or fiscal policy emerge. For example, when actual output falls below its potential, a Central Bank could lower interest rates to raise demand and prevent a fall of inflation below the central bank's inflation rate target. On the contrary, when output rises above its potential level, a Central Bank could decide to raise interest rates to control upward pressure on inflation. Governments too, through fiscal policy, can act to close the output gap. In the case of a negative output gap, an expansionary fiscal policy, i.e. an increase of government spending or a reduction of taxation, can be applied. Alternatively, when there is a positive output gap, a contractionary fiscal policy, i.e. a reduction of government spending or an increase of taxation, can be adopted to reduce demand and to combat inflation (Jahan and Mahmud [1]). As noted by Jahan and Mahmud [1], estimating potential output and the output gap is neither straightforward nor immediate, given the uncertainty of the underlying relationships in the economy. One of the issues addressed by the economic literature is why recovery has been very slow since the 2008 crisis, with still no sign of a return to the GDP forecasts made prior to 2008. In particular, recent economic studies show that after the 2008 recession estimated potential output declined and the estimated NAIRU increased in most countries. Other empirical works have provided evidence, based on the experience of many countries over a long time period, that recessions have persistent effects on the path of GDP. The notion that GDP would return to an independent, supply-determined trajectory therefore seems implausible (Martin et al. [27]; Ball et al. [28]; Ball [29] [30]; Blanchard et al. [17]; Cerra and Saxena [31]; Fatàs and Summers [32]; Reifschneider et al. [33]). In other words, fluctuations tend to be associated with persistent changes in GDP trajectories; as a result, the return to an independently determined GDP trend must be extremely slow (i.e. beyond the time horizon assumed for cyclical fluctuations and economic policy). This evidence has been interpreted by the "real business cycle" literature as follows: cycle and trend are determined by the same, supply-side factors. At the same time, on the basis of other empirical results which show that it is aggregate demand that drives fluctuations (Fazzari et al. [34]; Gali [35]; Lorentz [36]; Girardi [37]), persistent changes in GDP trajectories can be determined by aggregate demand, i.e. cycle and trend are driven by aggregate demand. According to most of this literature, persistence arises only from negative shocks, which generate higher equilibrium unemployment and lower potential output.
In this new equilibrium, any attempt to restore output and lower unemployment by means of aggregate demand would determine high and accelerating inflation (Girardi et al. [38]). In contrast with this latter result, Girardi et al. [37], investigating the effects of positive demand shocks through the selection of 94 episodes of demand expansion (i.e. an increase in autonomous demand) in a panel of 34 OECD countries between 1960 and 2015, showed that a positive demand shock determines: 1) a persistent effect on the GDP level; 2) a persistent reduction of the unemployment level; 3) an increase in labour participation, employment and the capital stock; and 4) positive and quite persistent effects on productivity. In other words, their results show that production, employment and unemployment are not independent of aggregate demand even in the long run. Another important result concerns the capital stock. The authors show that aggregate private investment largely depends on lagged GDP, and little, if at all, on the interest rate. Subsequently, Girardi et al. [38], referring to the most recent empirical literature focused on potential output, unemployment and hysteresis, discussed the circumstances in which changes in aggregate demand can have an appreciable persistent effect on aggregate supply. In doing so they offered an explanation of the so-called "secular stagnation" (i.e. the inability to generate persistent growth (Summers [39]; Teulings and Baldwin [40])), and at the same time gave some policy advice. According to the authors, the literature on secular stagnation has identified three (separate or interlinked) factors to explain this phenomenon: 1) a negative equilibrium real interest rate; 2) slow (or even negative) growth due to structural factors, such as demographic and technological trends; and 3) hysteresis. Within the literature on secular stagnation, "hysteresis" or "persistence" appears to be the best line of interpretation of the current situation (Blanchard et al. [17]; Martin et al. [27]; Cerra and Saxena [31]; Guajardo et al. [41]; Jordà and Taylor [42], among others). In this regard, Girardi et al. [38] affirm that "the dependence of capital formation on aggregate demand growth appears to be the most convincing and empirically supported explanation of the persistent effects on GDP resulting from shifts in aggregate demand". Based on the evidence that aggregate demand expansions determine persistent effects on GDP, capital stock, participation and employment, the authors conclude that fiscal stimulus would be the most appropriate policy response to counteract secular stagnation. Their results are in line with other recent literature (Blanchard et al. [43]; Summers [39] and [44]; Turner [45]) that argues for the importance of fiscal stimulus, given that hysteresis is based on the effect of aggregate demand on capital formation. As stated by Blanchard et al. [43], after the crisis the role of fiscal policy as a macroeconomic tool has been revised for two main reasons. First, given that monetary policy, including credit and quantitative easing, has largely reached its limits, policymakers have to use fiscal policy. Second, given that the recession is not totally overcome, the fiscal stimulus might have time to produce beneficial impacts, despite the implementation lags. In this context policymakers have potentially more instruments at their disposal with respect to the scenario before the crisis.
The authors point out that the main goals to achieve should remain the same, i.e. a stable output gap and stable inflation, even if the secondary goals are now more numerous, including the composition of output, the behavior of asset prices, and the leverage of different agents. Thus, the challenge for policymakers consists in finding the best way to apply the instruments of economic policy (Blanchard et al. [43]). The literature on fiscal stimulus was defined by Furman [46] as a "New View" of fiscal policy. He argues that when monetary policy is at the effective lower bound, fiscal policy may be even more effective than previously realized. This can occur given that monetary policy, through the interest-rate channel or the exchange-rate channel, will not even partially offset fiscal policy. In this context fiscal policy might also crowd in additional private investment. In other words, an expanded aggregate demand raises growth rates and thus increases investment growth, as predicted by the standard accelerator model for investment (IMF [47]; OECD [48]). In particular, in an economy with a large output gap, fiscal expansion can expand private investment by raising inflation expectations, which would lower real interest rates. The above discussion, focused on the empirical evidence of the last decade, emphasizes the following results: 1) cyclical fluctuations have persistent effects on GDP trajectories, and consequently on GDP trends, which represent a measure of potential output; 2) these cyclical fluctuations seem to be driven by aggregate demand shocks (including positive ones); in particular, aggregate demand expansions bring about persistent effects on GDP, capital stock, participation and employment at the cost of an extremely short-lived and moderate increase in inflation 3; 3) on the basis of this empirical evidence, a part of the recent economic literature argues for fiscal stimulus as the most appropriate policy response to secular stagnation.

Alternative Methods of Estimating Potential Output

At this point of the discussion I also want to pay attention to the technical aspects concerning potential output, i.e. how it is possible to estimate it. To estimate potential output it is assumed that output can be divided into a trend and a cyclical component. The trend represents a measure of the economy's potential output, while the cycle is considered as a measure of the output gap. Consequently, estimating potential output means estimating the trend, removing the cyclical changes. The trend, in turn, reflects movements in the trend components of the inputs, such as: 1) labor (two key concepts connected to the labor component are the NAIRU and the NAWRU, the non-accelerating wage rate of unemployment), 2) total factor productivity (TFP) and 3) capital (although it is often assumed that all capital is trend capital). According to European Parliament [49], the main goal of using potential output estimates and related concepts is to enable a counter-cyclical economic policy, "i.e. avoiding further inflationary pressures in boom times and support demand in contractionary periods". However, potential output is not observable, and its measurement depends on the models and assumptions applied to estimate it. This means that different models and assumptions produce different estimates, and economists evaluate the performance of the applied methodologies by looking at revisions of the estimated values over time. Generally, there are two main directions for estimating potential output.
Firstly, there are statistical non-parametric and univariate techniques, such as filtering, which decompose time series into trend and cyclical components. The advantage of such methods is their relatively simple implementation. On the other hand, these methods basically just filter out some frequencies from the data and are therefore not able to catch any structural changes within the sample. Within the statistical techniques, the most used filters are the following: 1) the HP filter, based on the work by Hodrick & Prescott [50], and 2) the Baxter-King (BK) filter, developed by Baxter & King [51]. The main drawbacks of these methods are discussed by Anderton et al. [52]. Firstly, using filtering to estimate trends implicitly creates assumptions about the existence of the trend (HP) or of the lower frequencies (BK). Consequently, there is a possibility of a mistake in identifying the correct cycle, as the filter may not define the actual one. Secondly, these methods are highly dependent on the choice of parameters 4 that is made directly by the researcher. The choice is arbitrary to some degree, even though there are "best practice" guidelines on how to proceed. The third major drawback comes from the fact that the univariate methods suffer from very large end-sample biases. Filtering is basically a non-parametric method and as such it has poor forecasting reliability.
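To make the filtering route concrete, here is a minimal sketch in Python using the hpfilter routine of the statsmodels library on a simulated quarterly log-GDP series; the data are invented, and lamb=1600 is simply the conventional smoothing value for quarterly data.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
t = np.arange(120)  # 30 years of quarterly observations, simulated
log_gdp = 0.005 * t + 0.02 * np.sin(2 * np.pi * t / 32) + rng.normal(0, 0.005, t.size)

# hpfilter returns (cycle, trend); the trend is read as (log) potential output
cycle, trend = sm.tsa.filters.hpfilter(log_gdp, lamb=1600)

output_gap = 100 * cycle  # in % of potential, since the series is in logs
print(output_gap[-4:])    # end-of-sample values: precisely the least reliable ones

The choice lamb=1600 is exactly the kind of researcher-chosen parameter criticized above, and the last printed values illustrate where the end-sample bias bites.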
An alternative method to estimate potential output is based on economic theory and in particular on the production function. According to the production function method, potential output is the level of production (typically in terms of value added) at which the factors of production are fully employed, at least at the maximum level compatible with the NAIRU or NAWRU measures. This approach allows for a more direct link to sources of structural information and for an easier interpretation of the source of changes in the output gap or potential output (Anderton et al. [52]). However, there is additional uncertainty in these models. The total factor productivity and non-accelerating inflation rate of unemployment (NAIRU), or alternatively NAWRU, components of the function are themselves unobservable. These components are often obtained by statistical filtering, which exposes the production function approach to criticism, as the uncertainty is shifted to the sub-steps. In a more general way, with the production function approach, as with other methods, it is impossible to evaluate in a formalized manner the degree of uncertainty of potential output estimations. An important advantage of this method is its reliability at the sample end point (Cotis et al. [53]). As regards the functional form, the most used specification is the Cobb-Douglas or, alternatively, the constant elasticity of substitution function. Both usually include three factors of production: 1) capital, 2) labor and 3) total factor productivity. The view given by this approach is structural, as it is based on a supply-side model that can help to identify the underlying contributions of the respective factors as well as explain the forces underlying developments of growth in the medium term. The classical production function considers the level of output (Y) to be a combination of two factor inputs, namely labour (L) and capital (K), where each factor input is corrected for the degree of excess capacity (U_L, U_K) and adjusted for the level of efficiency (E_L, E_K). In addition to the labor and capital inputs, output is expected to be affected by total factor productivity (TFP), which is measured as the Solow residual (Anderton et al. [52]):

Y = (U_L · L · E_L)^α · (U_K · K · E_K)^(1−α) = TFP · L^α · K^(1−α)   (1)

TFP = (E_L^α · E_K^(1−α)) · (U_L^α · U_K^(1−α))   (2)

Under the assumption of constant returns to scale and perfect competition, Equation (1) and Equation (2) imply that the output elasticities of the two factor inputs equal the factor shares in output, with α representing the output elasticity of labor and 1−α that of capital. The Cobb-Douglas production function also assumes that the elasticity of substitution between labor and capital is 1. To calculate the potential level of output, it is necessary to calculate the trend components of the various inputs, which are defined as follows (European Parliament [49]):

-Capital, which is a function of: 1) the past capital stock, 2) investments, and 3) the depreciation rate (ranging from 1% for housing to 30% for computer hardware and equipment). Thanks to its smoothness and stability, capital is identified with trend capital.

-Labor, which is expressed in terms of hours worked; it is determined as a product of population projections, participation rates, hours worked and the non-accelerating wage or inflation rate of unemployment. Trend labour is the product of its trend components.

-Total factor productivity (TFP), which measures productivity growth (such as technology improvements); it is estimated as a difference between output and input components. The trend is obtained by filtering its time series. TFP is an indirect indicator because it represents a residual of unobservable quantities, and it is therefore very difficult to estimate.
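As a concrete illustration of how the trend components combine through Equation (1), consider the following minimal sketch in Python; every input below is an invented placeholder, whereas an actual implementation would take the trend series from filtered data and the NAWRU.

# Potential output from Equation (1) evaluated at trend inputs.
# All values are hypothetical placeholders for illustration only.
alpha = 0.65              # output elasticity of labor (approx. the labor share)

working_age_pop = 30.0e6  # persons
trend_part_rate = 0.72    # trend participation rate
nawru = 0.08              # structural (non-accelerating wage) unemployment rate
trend_hours = 1600.0      # trend annual hours per worker
trend_tfp = 1.0           # index from a filtered Solow residual
capital_stock = 9.0e12    # from the perpetual inventory: K_t = (1 - delta) * K_{t-1} + I_t

trend_labor = working_age_pop * trend_part_rate * (1 - nawru) * trend_hours

potential_output = trend_tfp * trend_labor ** alpha * capital_stock ** (1 - alpha)
print(f"potential output (index units): {potential_output:.3e}")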
Other approaches include the unobserved components method and the structural vector autoregression, or SVAR, approach. The unobserved components methods (the Beveridge-Nelson [54] decomposition, the univariate unobserved components model, the bivariate unobserved components model and common permanent and temporary components) estimate unobserved variables such as potential output and the NAIRU using information from observed variables. The SVAR approach is based on the method developed by Blanchard and Quah [55] to distinguish between the permanent and transitory components of output, using a structural vector autoregression with long-run restrictions. The unobserved components approach has the advantage that relationships between output, unemployment, and inflation can be specified (Cerra and Saxena [31]). The relationships are first written in state-space form, which is a general way of representing dynamic systems (comprising measurement and transition equations). The observed variables are specified as a function of the unobserved state variables in the measurement equation, while the transition equation specifies the autoregressive process for the state variables. When a dynamic time series model is written in state-space form, the unobserved state vector can be estimated using the Kalman filter 5. In general this approach has the disadvantage of requiring considerable programming, with ensuing difficulties in debugging the model and interpreting the results. Moreover, results are often sensitive to the initial guesses for the parameters (Cerra and Saxena [31]). Based on the traditional Keynesian and neoclassical synthesis, the SVAR method identifies potential output with the aggregate supply capacity of the economy and cyclical fluctuations with changes in aggregate demand. Within this approach, the Blanchard and Quah [55] method, based on a vector autoregression (VAR) for output growth and unemployment, identifies structural supply and demand disturbances under the hypothesis that the former have a permanent impact on output, while the latter can have only temporary effects on it. The analysis can be extended to consider also temporary nominal shocks by inserting a price variable (Cerra and Saxena [31]).
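The mechanics of the Blanchard-Quah identification can be sketched in a few lines of Python with the statsmodels VAR class; the series below are simulated so that the demand shock has no long-run effect on the output level, and the sketch only illustrates how the long-run restriction recovers the structural shocks.

import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(1)
T = 400
e = rng.normal(size=(T, 2))            # structural shocks: column 0 supply, column 1 demand
dy = np.empty(T)                       # output growth: demand effects on the level die out
dy[0] = 0.3 + e[0, 0] + 0.5 * e[0, 1]
dy[1:] = 0.3 + e[1:, 0] + 0.5 * e[1:, 1] - 0.5 * e[:-1, 1]
u = np.zeros(T)                        # unemployment reacts to both shocks with persistence
for s in range(1, T):
    u[s] = 0.8 * u[s - 1] - 0.1 * e[s, 0] - 0.3 * e[s, 1]

res = VAR(np.column_stack([dy, u])).fit(4)
A1 = res.coefs.sum(axis=0)             # sum of the VAR coefficient matrices
psi1 = np.linalg.inv(np.eye(2) - A1)   # long-run multiplier of reduced-form shocks
F = psi1 @ res.sigma_u @ psi1.T
P = np.linalg.cholesky(F)              # lower-triangular long-run impact matrix
S = np.linalg.solve(psi1, P)           # impact matrix: u_t = S e_t, with S S' = sigma_u

print(P)  # the (1,2) entry is zero by construction: demand has no permanent output effect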
Within the existing methods, the one most used by international institutions is the production function approach, which is also widely applied by central banks (Havik et al. [57]). As regards the specific technicalities adopted by the international institutions, it can be observed that: 1) The European Commission estimates come from the regular projection exercises and from the assessment of the stability/convergence programmes of EU Member States (Ufficio Parlamentare di Bilancio [58]). The methodology is based on a Cobb-Douglas production function, in which the trend labor component is derived from population projections, the trend participation rate, trend hours worked and the NAWRU. The trend TFP is obtained by means of a bivariate filter, augmented with a capacity utilization measure. This is supplemented, for the longer term, by a number of considerations, e.g. on cross-country convergence. 2) The IMF estimates are not based on any "official" method, and may incorporate judgement by the relevant country desks. However, for the euro area countries, some version of the production function approach is usually involved (Epstein and Machiarelli [59]; Konuki [60]). 3) The OECD estimates are based on a similar methodology, including a Cobb-Douglas production function with labor, physical and human capital and multi-factor productivity (the equivalent of TFP) as factors, as well as an exogenous trend. The OECD estimates use the Kalman filter to define the NAIRU (Cotis et al. [53]). As discussed above, the existing methodologies, including the production function approach, present some drawbacks which can affect the estimation results for potential output. In this regard, Coibion et al. [61] affirm that much work has to be done to create a better measure of potential GDP. According to the authors, some of the existing methods seem potentially underused; consequently, they offer the following suggestions to improve their utilization: 1) using additional macroeconomic variables and restrictions to better identify supply and demand shocks rather than relying on univariate processes; 2) supporting public estimates of potential GDP with information about private sector forecasts, which are more successful at isolating supply shocks from demand shocks; and 3) limiting the excessive use of model-averaging, or at least excluding, among the class of models used, simple approaches like HP filters, since these mechanically induce movements in estimates of potential output after cyclical demand-driven fluctuations. The general weakness of the estimation methods reduces the reliability of potential output estimates. This issue should be taken adequately into account when rules or economic policies, mainly based on the potential output measure, are defined.

Potential Output of the Euro Area after the Crisis and the Policy Response of the European Institutions

After the recent crisis the crucial problem is the inability to generate persistent growth (Summers [39] and [44]; Teulings and Baldwin [40]). This scenario may have been worsened also by a tight fiscal policy dominated by consolidation measures focused on expenditure cuts, especially for investments/infrastructures. According to Summers [44], an increase in public investment represents the key to restoring reasonable growth, and it is hard to make a rational case against a substantial increase in public investments in Europe and in the United States. Until today, that is, a long time after the crisis, negative output gaps and employment gaps have still not closed. For example, taking into account the United States and the Euro area, Anderton et al. [52] observe that the crisis has affected mainly labor and capital as components of potential output (Anderton et al. [52], Figure 22). In particular, as concerns the Euro area, the decline in the labor contribution was largely caused by the considerable rise in structural unemployment. In both areas the decline of the capital stock was a consequence of lower investments, which may have had a constraining effect on the supply capacity of the economy and hence on potential output growth in the longer run. Intuitively, lower investment leads to a permanently lower capital stock. To the extent that new capital also embodies technological improvement, lower investment may also be associated with lower TFP growth. In addition to a reduction of investment growth, the rate of capital scrapping and the life span of capital assets have also been affected by the crisis, in particular in the Euro area. Moreover, in the Euro area there is still an excess of savings which requires real interest rates to be low or negative for an extended time in order to support the return of output to potential and the labor market to full employment (Anderton et al. [52]). While potential output growth in the Euro area remained weak in 2011-12, in the same period in the United States it started to recover. For the future, a faster recovery in US potential output with respect to that of the Euro area is expected. The difference is explained by higher capital and TFP contributions as well as by a substantial difference in the projected labor contribution. Several factors could explain this divergence: 1) the more flexible nature of the US economy, allowing faster labor market adjustment; 2) a US fiscal policy more prone to supporting activity than that of the euro area, where the sovereign debt crisis and the associated surge in uncertainty had a direct negative impact on economic activity, e.g. via cuts in public infrastructure investment; 3) US bank credit standards on mortgages and loans to non-financial corporations which, starting from the middle of 2010, became less tight than in the euro area. In general terms, according to Anderton et al. [52]: "it is not yet clear to what extent potential output growth has been affected by the crisis and this assessment is more uncertain than in previous downturns, owing to the severity of the slowdown in activity and of the imbalances that had accumulated prior to it". Moreover, as shown by the recent empirical literature, the determinants of potential output can originate from both the demand and the supply side, and this evidence contributes to increasing the uncertainty on the actual measurement of potential output. Given these issues, today the concept of potential output and its measurement should be treated with caution by the Institutions that adopt stabilization policies.
In other words, given the uncertainty surrounding estimates of potential output, it has become more difficult to judge both the current degree of slack in the economy and the growth and inflation prospects for the economy. As a result, the definition of appropriate monetary and fiscal policies, based on potential output and the output gap, is more difficult. However, the European institutions continue to consider the concepts of potential output and the NAIRU as useful guidance for policy. For instance, they are used to assess structural budget deficit constraints. In particular, the Stability and Growth Pact (SGP) and related secondary law widely refer to the concepts of potential output, output gap and structural budget balance (SBB). Moreover, the Treaty on Stability, Coordination and Governance in the Economic and Monetary Union (TSGC) refers to fiscal targets expressed in terms of structural budget balances (European Parliament [49]). Within the SGP, the fiscal medium-term objectives (MTOs) for euro area Member States (and Member States belonging to the Exchange Rate Mechanism, ERM II) are specified to vary within a range between −1% of GDP and a balance or surplus. The MTOs are updated every three years, or in case of major structural reforms. Within the MTO framework, two typologies of countries are identified: 1) Countries under the preventive arm of the SGP, not having achieved their MTOs, should respect an adjustment path of their SBB towards it, with an annual improvement of 0.5% of GDP per year as a benchmark. In this case the expenditure benchmark rule has been introduced to complement the MTOs. This rule is defined in terms of potential output estimates, and it limits the growth rate of government spending. In particular, it establishes that a spending growth rate beyond the medium-term potential economic growth rate must be compensated by additional discretionary revenue measures. 2) Countries under the corrective arm of the SGP, i.e. those in excessive deficit situations, should improve their SBB by at least 0.5% of GDP per year as a benchmark. In its communication of January 2015, the European Commission introduced the output gap in one of the flexibility clauses used to assess the adherence of a Member State to the SGP. This clause takes into account "good" and "bad" economic times and, to this aim, the Commission defines five "output gap intervals" in order to assess the annual fiscal adjustments towards the MTOs. The structural budget balance also represents the main element of the TSGC. All signatories of the TSGC with a debt ratio well below 60%, and/or facing low risks to the sustainability of public finances, are committed to setting an MTO of at least −1.0% of GDP, while signatories from the euro area with a debt ratio above 60%, or facing risks to the sustainability of their public finances, are committed to setting an MTO of at least −0.5% of GDP. Under the terms of the TSGC, all signatories are committed to: 1) approving national binding law rules, by observing the provisions of the preventive arm of the SGP intended to limit their structural deficits, and 2) defining a correction mechanism that would be triggered automatically at national level (European Parliament [49]). According to the European Commission, structural fiscal balance indicators are very useful, albeit not perfect, indicators of the fiscal policy stance. In practice, measures of the structural fiscal balance mostly depend on measures of the output gap and on the government aggregate tax or expenditure to GDP ratio. Given their reliance on the output gap, estimates of the structural fiscal balance are rather uncertain, while measures of their changes over time are generally considered to be more robust (Economic Policy Committee [62]).
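A minimal sketch of this dependence uses the standard cyclical-adjustment formula, structural balance = headline balance − ε × output gap; the semi-elasticity and all figures below are illustrative assumptions of mine, not official estimates for any country.

# Cyclical adjustment behind the structural budget balance (SBB).
# epsilon is the budget semi-elasticity; all numbers are illustrative.
epsilon = 0.5            # assumed semi-elasticity of the budget balance
headline_balance = -3.0  # headline balance, % of GDP
output_gap = -2.0        # estimated output gap, % of potential GDP

sbb = headline_balance - epsilon * output_gap  # -3.0 - 0.5 * (-2.0) = -2.0
print(f"structural balance: {sbb:.1f}% of GDP")

# The same headline deficit judged against a gap estimate of -4% instead of -2%
# yields an SBB of -1.0%, i.e. half the apparent consolidation need:
print(f"with a -4% gap: {headline_balance - epsilon * (-4.0):.1f}% of GDP")

The two print statements make the point behind the criticisms reviewed next: revising the output gap estimate by two points changes the assessed structural position, and hence the required adjustment under the rules, by a full point of GDP.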
Recently, the concept of structural balance, the related indicators, and the methodology used to estimate it have been subject to some criticisms by the EU and the Member States, in particular as regards their reliability and transparency. In this regard I cite the following: 1) A Deutsche Bundesbank study [63], focusing on the G7 countries, gives warning of the high degree of uncertainty of output gap estimates and expresses doubts on the suitability of such estimates for economic policy. 2) In 2014 the European Central Bank mentioned the problem posed by the stability of output gap estimates and of their revisions, by comparing the estimates of 2007 output gaps made at different points in time. 3) In 2016 the Ministers of Finance of eight Member States (Italy, Spain, Latvia, Lithuania, Luxembourg, Portugal, Slovenia and Slovakia) sent a letter to the Commission, expressing their concerns regarding the estimation of potential output and asking the Commission to extend the length of its forecast horizon from two to four years (European Parliament [49]). In a more general perspective, Furman [46] discusses other aspects which should lead to a rethinking of stabilization policy in the Euro area. In particular, the Stability and Growth Pact (SGP), which represents an attempt at fiscal policy coordination, is asymmetric, since it can compel deficit reduction but cannot compel fiscal expansion. The European Central Bank (ECB), managing macroeconomic policy in the Euro Area, might not be able to address, by means of monetary policy, shocks which seem to be persistent and affect the entire Euro area (also because the actual monetary policy runs into limits). The actual European institutional structure acts as a barrier to effective policy, amplifying shocks rather than dampening them. To this purpose, Furman [46] suggests undertaking more countercyclical fiscal policy at the Euro area level, or at least improving the coordination of national fiscal policies by means of a revision of the SGP or with a new multilateral agreement. Within countercyclical measures, Furman [46] includes an increase in infrastructure funding through, for example, the European Investment Bank. The fiscal rules of the Stability and Growth Pact have also been criticized because they do not consider the so-called "golden rule of public finance", which excludes public investments from the deficit ceiling (Blanchard and Giavazzi [64]). In this regard, some studies argue that the composition of expenditure cuts may critically influence the fiscal consolidation processes put into action to respect the SGP rules. In particular, Hakhu et al. [65] investigate the relationship between investment spending and debt financing in the EU. They find a negative relationship between public capital expenditure and public debt. According to their empirical results, strengthening the sustainability of EU public finances can be obtained by raising public expenditure on assets such as investments in technology and infrastructures.
Potential Output and the Role of Public Infrastructure Expenditure

As discussed before, cyclical fluctuations can originate from both the supply and the demand side. The empirical literature (e.g., [70]) has proven that an increase of public spending, and in particular on infrastructures, can positively affect the economy in two ways. In the short term it boosts aggregate demand through the short-term fiscal multiplier, also by potentially crowding in private investment, given the complementary nature of infrastructure services. As stated by Fournier [71]: "If public and private capital are complementary (e.g. roads that connect enterprises), higher public investment can spur private investment. This corresponds to cases in which the social return is above the private return...". Over time, there is also a supply-side effect of capital expenditure when it turns into effective capital formation, that is, productive machinery and, because of the role of government, an increase in the production capacity of the infrastructures or of other public goods put into operation. This effect (if there is good governance and the investment was properly selected by cost-benefit analysis) could be greater than the government-backed expenditure for its implementation (IMF [69]). In a more general way, shocks to demand can spill even more swiftly and strongly across borders when aggregate demand is weak and interest rates are low (Furman [46]). Reference [73] found that countries or regions engaging in an indi- The efficiency of investment is central to determining how large these effects will be. Reference [78] found a small and non-significant long-term effect of large infrastructure projects and public capital increases in low-income countries. The author discussed the quality of the investment programs implemented in selected low-income countries. He noted that these programs suffered from the following problems: 1) incentive problems, 2) agency problems, 3) a pervasive avoidance of rational analysis and 4) even difficulty obtaining or collecting the critical data. As a result, the crucial information which normally constitutes the basis of investment choices was unavailable. The final result was then a low quality of the selected investment projects. Reference [79] investigates the nexus between efficiency, public investment and growth. They found with a standard model that both the efficiency and the rate of return of public capital need to be considered together in assessing the impact of increases in investment. Changes in efficiency have direct and potentially powerful impacts on growth. They then suggested "investing in investing" through structural reforms that increase efficiency, leading to a very high rate of return. Reference [80] highlights the role of public investment for economic growth and the policies that governments should adopt to improve public investment efficiency. Inefficiencies in the investment process, such as poor project selection, implementation, and monitoring, can result in only a fraction of public investment translating into productive infrastructure, limiting the long-term output gains. Reference [71] sheds light on the long-term effects of public investment, estimating the average effect and providing some insights on the specific circumstances which make public investment particularly effective.
Considering data on OECD countries, the most important findings are the following: 1) increasing the share of public investment in total government spending yields large growth gains; 2) these effects are highest in sectors that are associated with large externalities, such as research and development or health, and they are lowest in countries where the public capital stock is already high, such as Japan; 3) a spending shift towards public investment, away from other spending, would also speed up the convergence of lagging countries towards the income of the most advanced economies; 4) in terms of economic policies, governments should implement sound public investment policies (provide the right incentives, carry out cost/benefit analysis underpinned with good data) and focus on sectors with high externalities, because public investment is a lever to boost growth in the long run. Reference [69] concluded that well-planned fiscal expansions will more than pay for themselves. Specifically, the IMF found that infrastructure investment would have substantial positive impacts on gross domestic product, and those impacts would be large enough to reduce debt burdens. Reference [15], considering a depressed economy characterized by a form of negative hysteresis, found that a debt-financed increase in public investment as a share of potential GDP leads, in the short run, to a change in the debt-to-potential-GDP ratio. However, in the long run these effects are countered by the emergence of supply effects, i.e. from the increase in productive capacity and productivity that efficient investments will generate. In the end the net result is positive and the expansionary fiscal policy is self-financing. Reference [73] found that a permanent increase in government investment of 1% of GDP increases growth through permanently increasing investment and consumption. This fiscal spending creates future fiscal space 6 through increasing government revenue and reducing the debt-to-GDP ratio (Furman [46]). In what follows I focus the discussion on the most relevant empirical contributions which analyze the role of infrastructure spending on GDP as well as on the potential output of Euro area countries. Reference [81], focusing on the European case during the period 2008-2014, argued that a "negative loop" emerged as a consequence of the interaction between an overly pessimistic view on potential output among policy makers and the effect of a fiscal policy focused mainly on contractionary measures. These measures determined a reduction in potential output that validated the original pessimistic forecasts and led to a second round of fiscal consolidation. The author shows that for many European countries the succession of contractionary fiscal policies "was likely self-defeating as the negative effects on GDP caused more damage to the sustainability of debt than the benefits of the budgetary adjustments". In other words, the effects of contractionary fiscal policy (adopted mainly for the purpose of public debt sustainability) on output fed into more pessimistic views on the future, leading to additional fiscal consolidation.

6 A needed condition for a country to obtain a better fiscal space is the existence of a credible political system that is capable of making firm, long-term commitments, since upfront fiscal expansion can be combined with medium- and long-term fiscal consolidation (Furman [46]).
These damaging effects affected the estimates of potential output, which were highly pro-cyclical: "if cyclical events lead to immediate reductions to long-term projections of GDP, it might lead to even more contractionary fiscal policy and further negative effects on output". The author suggests that for a good design of fiscal policy, in particular when sustainability is an issue, governments have to define potential GDP and the output gap accurately. Reference [82] showed that the negative effects of contractionary fiscal policy became permanent via hysteresis effects during the 2010-2011 fiscal contraction in Europe. The authors affirm that "in the presence of hysteresis, not only we are underestimating the effects of fiscal policy on output, but we might fall in a vicious cycle that we call the fiscal policy doom loop". These contractionary measures were applied in almost all advanced economies, for which the capital stock was declining. In particular, as discussed above, for the two big areas, the United States and the Euro area, the decline of the capital stock was a consequence of lower investments, which may have had a constraining effect on the supply capacity of the economy and hence on potential output growth in the longer run. Reference [83], applying model simulations (the EAGLE model 7), investigates the effect of a temporary increase in public investment in a large euro area country (Germany), focusing on: 1) output and 2) public finances. They found that an investment shock equal to 1% of GDP over 20 quarters, financed through public debt, implies a positive impact equal to: 1) 1.7 on GDP for Germany and 2) less than 0.1 on GDP for the rest of the Euro Area. Other empirical results are the following: 1) longer-term positive effects on the economy's potential output and 2) the evidence that the impact on public finances crucially depends on the effectiveness of investment and the productivity of public capital 8. If they are low, an increase in public investment is associated with a greater deterioration of the debt outlook and less persistent output gains. This study concludes that any increase in public investment needs to be assessed in the light of its productivity, its financing and the relative costs and benefits of the financing options. Then economic consid- Another interesting empirical paper is presented by Mourougane et al. [87]. Using the F&F, FM and NiGEM structural macro-econometric models 9, they show the results of a set of simulations suggesting that raising public investment raises business investment in the most advanced economies after one year, with corresponding increases in the business sector capital stock and potential output. Moreover, if the additional public investment is concentrated in network industries, these positive effects could be even stronger, in particular in the European Union, where there is a greater possibility of crowding in private investment.

7 For technical details: Gomes et al. [84].

8 In this regard, other literature focuses on estimating optimal public capital stock to GDP ratios, under the assumption that the marginal returns of public capital are decreasing. Reference [85] shows that in the United States the optimal capital stock is about 60% of GDP; Kamps [67] finds an optimal capital stock around 40% in European countries; and more recently, Checherita-Westphal et al. [86] find that the optimal public capital stock level in OECD countries is between 50% and 80% of GDP.

9 The structural macro-econometric models can be considered more faithful to the Keynesian paradigm. Most of them combine Keynesian reactions in the short run with neoclassical features in the long run. They usually lead to multipliers larger than 1 through crowding-in effects on private consumption or investment, depending on the monetary and foreign trade regime. For further details: Mourougane et al. [87].
In the simulations, the long-term impact of a permanent investment increase on the productive capacity of the economy works through: 1) a direct effect on capital accumulation in the production function and 2) spillover effects from the higher public capital stock on potential output. The authors also show that combining an investment stimulus with structural reform enhances the growth impact and accentuates the reduction of the public debt-to-GDP ratio. In particular, the implementation of product-market reforms can enhance the impact of an investment-led stimulus on growth and public finances through their effect on total factor productivity and potential output. By increasing potential output in the long run, a product-market reform package reduces the uncertainties surrounding public debt, especially in the most indebted European countries. Reference [88] analyzes the sectoral and regional effects of infrastructure investment in Portugal. Applying a VAR model, the authors identify areas of infrastructure investment with virtuous economic and budgetary effects. Their results show that investments in transportation infrastructures (railroads, ports and airports) and in social infrastructures (health and education) should be considered priority investments, also because they will pay for themselves in the form of enhanced long-term tax revenues under rather reasonable effective tax rates. Reference [89], applying the same methodology to data for four Member States (France, Germany, Italy and Spain), obtains similar results. They find that infrastructure investment not only provides a positive demand shock but also raises factor productivity. Moreover, they find that infrastructure investment has a higher impact on activity in bad economic times than in normal times. Consequently, infrastructure investment is highly recommendable as a policy lever to raise GDP and reduce the public debt burden. The general conclusions of their work are the following: 1) infrastructure investment has very large positive effects on the economic performance of Euro area countries, and 2) a very large infrastructure multiplier for the euro area suggests that infrastructure investment is likely to pay for itself. Using a NiGEM model, Fic and Portes [90] quantify the macroeconomic impacts of investment in infrastructure in the UK. In particular, they look at the impacts on output, potential output, unemployment and fiscal balances, distinguishing between normal times and crisis periods (i.e. abnormal monetary and credit conditions). The authors find that increasing infrastructure investment in the UK has the potential to boost growth both in the short run (defined as the first two years after the shock) and in the long run (defined as eight to sixteen years after the shock), and the impacts are even stronger in a crisis period than in normal times. The simulations show that an increase in infrastructure spending of 1% of GDP raises GDP by about 1% in the short run and raises potential GDP by about 0.2% in the long run.
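The "pays for itself" arithmetic running through these studies can be made concrete with a stylized check of when a debt-financed investment is self-financing, in the spirit of the hysteresis argument of Reference [15]. The sketch below is not the model of any cited paper; the function name and all parameter values are illustrative assumptions.

```python
# Stylized self-financing check for debt-financed public investment.
# All parameter names and values are illustrative, not taken from the
# cited papers.

def self_financing(mu, tau, eta, r, g):
    """Return True if the expansion improves the long-run debt ratio.

    mu  : fiscal multiplier (extra GDP per unit of spending)
    tau : marginal tax share of GDP
    eta : hysteresis parameter (permanent loss of potential output
          per unit of short-run output shortfall avoided)
    r   : real interest rate on government debt
    g   : trend real growth rate
    """
    # Debt-service cost of the part of the spending not recovered in
    # taxes, versus the permanent revenue gain from avoided hysteresis.
    debt_service = (r - g) * (1.0 - mu * tau)
    permanent_revenue = eta * mu * tau
    return permanent_revenue >= debt_service

# With a multiplier of 1.5, a 35% tax share, mild hysteresis (0.05)
# and r - g = 1 percentage point, the expansion pays for itself:
print(self_financing(mu=1.5, tau=0.35, eta=0.05, r=0.02, g=0.01))  # True
```

The point of the exercise is that self-financing hinges on the interest-growth differential and the hysteresis channel, which is exactly where the empirical papers above disagree.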
Another branch of the empirical literature studying the effects of additional infrastructure spending uses the production function approach (Núñez-Serrano and Velázquez [91]; Agénor and Neanidis [92]; Bom and Ligthart [93]; Romp and De Haan [72]). This approach studies the technical relationship between public capital and other production factors on the one hand, and output on the other. In other words, a public capital stock is incorporated as an additional production factor, next to the private capital stock and labor, by augmenting the production function. The empirical works generally assume that public capital enters the macroeconomic production function in two ways: directly, as a third input, and indirectly, by means of multifactor productivity. According to Pereira and Andraz [94], the positive contribution of public capital increases to growth declines over time, especially in developed countries, because a downward trend in the marginal productivity of public capital makes the gains from additional investment smaller than in the past. When the public capital stock is allowed to degrade through lack of investment, this could in theory lead to slower private-sector productivity growth. However, Bivens [95] states that improving private-sector productivity is just one reason to support expanded public investment. He maintains that even when public investments do not affect private-sector productivity, they still produce a benefit if they allow public goods to be delivered more efficiently. For example, if people receive clean water and air, safe food and medicine, and transportation services for less money than they currently spend, this is a perfectly good way to enjoy the economic returns to expanded public investment, even if it does not boost private-sector productivity. Moreover, another reason to support expanded public investment is that its benefits may be more broadly shared than the benefits of private-sector investment. The general conclusion of this strand of the literature is that public capital, and in particular investment in core infrastructure (Bom and Ligthart [93]), supports GDP as well as potential output. The estimated positive effects of public investment differ across countries, regions, and sectors (Bom and Ligthart [93]; Núñez-Serrano and Velázquez [91]).

Conclusions

This theoretical discussion addresses some issues concerning potential output because they have important policy implications. The starting point was the analysis of the economic theory on the relation between potential output, inflation and unemployment. In doing so, I emphasized that the theoretical assumptions have not been borne out by the empirical evidence of the last decade.
In particular, the most important empirical results are the following: 1) the concept of potential output defined as the level of national income compatible with the natural rate of unemployment is not validated, so the relationships underlying the concept need to be investigated further; 2) the so-called "divine coincidence", according to which stabilizing inflation is equivalent to stabilizing the welfare-relevant output gap, does not hold; 3) monetary policy might not be successful; 4) the relation between the level of unemployment and the level of inflation is weak and no longer looks like the old downward-sloping Phillips Curve; and 5) persistent (hysteresis) effects of cyclical fluctuations can originate from supply as well as demand shocks (including positive ones). In addition to this divergence between theory and empirical evidence, there are technical issues concerning the methods used to estimate potential output. Potential output is not observable, and its measurement depends on the models and assumptions used to estimate it. In this regard, the existing methods have weaknesses that reduce the reliability of the estimation results. These issues should be adequately taken into account by decision makers when defining economic policy, in particular in the present secular stagnation scenario. To date, it is not clear to what extent potential output growth has been affected by the recent crisis. The current stabilization policies, based on the existence of output gaps and on public debt sustainability, might not be able to mitigate cyclical fluctuations effectively and to stimulate economic growth. Looking at the potential output of two advanced economies, the United States and the Euro area, it seems to be recovering in the US while in the Euro area it is still sluggish. In this regard, academics and some policy makers have criticized both the tight fiscal rules of the SGP and the TSGC (which are closely related to the concepts of potential output and the NAIRU) and the related austerity measures implemented by Member States in order to respect them. Recent empirical evidence shows that those measures worsened the economic situation. In particular, some authors point to a "negative loop" arising from the interaction between an overly pessimistic view of potential output among policy makers and a fiscal policy focused mainly on contractionary measures. This theoretical analysis emphasized a strand of the economic literature that discusses an alternative use of fiscal policy, based on the positive relation between infrastructure spending and economic growth. The main conclusion is that an increase in public capital affects economic growth positively. In other words, an increase in public spending, and in particular additional infrastructure spending, can raise GDP as well as potential output. To deepen the discussion, I looked at empirical works focused on the Euro area, which confirm this positive relation. At the end of this analysis, the following advice emerges: 1) treat the current measures of potential output with caution, given the weaknesses of the existing estimation methods and the high uncertainty about its underlying relationships, i.e.
the origin of cyclical fluctuations; 2) consider fiscal stimulus as a potentially effective policy option; 3) adequately inform policy makers (at national and international level) about the net benefits of implementing a different fiscal policy, based on additional infrastructure spending; and 4) regard the following as preconditions for good infrastructure spending: a) appropriate institutional governance, which for the Euro area implies better coordination of national fiscal policies, and b) selection of sound investment projects through robust evaluation methods.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.
Effect of Phosphate and Other Anions on Trimethylarsine Formation by Candida humicola

Phosphate inhibited the formation of trimethylarsine from arsenite, arsenate, and monomethylarsonate, but not from dimethylarsinate, by growing cultures of Candida humicola. Phosphite suppressed trimethylarsine production by growing cultures from monomethylarsonate but not from arsenate and dimethylarsinate, and hypophosphite caused a temporary inhibition of both proliferation and the conversion of these three arsenic sources to trimethylarsine. Resting cells of C. humicola derived from cultures grown in arsenic-free media generated the volatile arsenical only after a lag phase. High antimonate concentrations reduced the rate of conversion of arsenate to trimethylarsine by resting cells, but nitrate was without effect. Arsenic compounds have been and still are widely used as pesticides. Inorganic arsenicals were applied for pest control in the past in amounts such that certain areas even now cannot be farmed, owing to the presence of significant quantities of this toxic element in the soil (4). At the present time, organic arsenicals such as cacodylic acid and monosodium methanearsonate are used as herbicides or defoliants. The microbial conversion of the latter compound to arsenate and CO2 has been demonstrated in samples of several soils (3, 7). Despite the vast amounts of such pesticides which have entered the soil, little is known about the behavior of this element in natural ecosystems. The formation by microorganisms of volatile arsenic compounds is of special concern because volatilization may result in human exposure to this toxic element. For example, several fungi have been reported to be able to form trimethylarsine, a volatile and highly toxic metabolite (1). A culture of Methanobacterium, by contrast, was recently found to generate dimethylarsine from arsenite under anaerobic conditions (6), and a volatile product with a garlic-like odor, presumably an alkylarsine, has been observed to be evolved from soil treated with dimethylarsinic acid (Woolson and Kearney, Environ. Sci. Technol., in press). Candida humicola is of particular interest because, as shown herein, it is capable of converting salts of arsenic, arsenious, monomethylarsonic, or dimethylarsinic acids, substances which are either pesticides or are formed from pesticides, to trimethylarsine. The present study was initiated to determine some of the variables affecting arsenic methylation by C. humicola, to provide a basis for assessing the potential influence of environmental factors on this microbial process.

MATERIALS AND METHODS

All chemicals were of reagent grade. The purity of the commercial trimethylarsine and the identity of the volatile microbial product were determined by gas chromatography with a Varian Aerograph gas chromatograph, model 1700 (Walnut Creek, Calif.), fitted with a flame ionization detector. The stainless-steel column was 78 cm long and had a 2-mm inner diameter. The column used regularly was 5% (wt/wt) FFAP-coated Chromosorb G maintained at 75°C, but sometimes Chromosorb 101 at 150°C was employed. The temperature of the injector and detector was 200°C, and the flow rate of the carrier gas, N2, was 100 ml/min. The identities of the authentic and biologically produced trimethylarsine were verified by mass spectrometry with a Perkin-Elmer mass spectrometer, model 270 (Norwalk, Conn.), operating with an ionization voltage of 70 eV and an electron voltage of 2,000 eV.
Both compounds exhibited parent ions at m/e 120. The growth studies were initiated by growing an inoculum of C. humicola overnight in a medium containing one of the arsenicals. The culture was diluted in 100 ml of the same medium to an absorbancy of 0.12 to 0.15 at 525 nm. The culture vessel, which was a 300-ml Erlenmeyer flask fitted with a side arm, was sealed with a red neoprene sleeve stopper (West Co., Phoenixville, Pa.) held fast with a hose clamp, and it was incubated at 30°C on a reciprocal shaker. The head space above the culture was analyzed for trimethylarsine production by removing a 1.0-ml sample with a gas-tight syringe and injecting this sample into the gas chromatograph. To prepare cell suspensions, the inoculum prepared as above was transferred to a 2-liter Erlenmeyer flask containing 500 ml of culture medium. The culture was incubated at 30°C on a rotary shaker until an absorbancy of about 1.0 was reached, at which time the cells were collected by centrifugation, washed twice in 0.85% NaCl, and suspended in 7 to 8 ml of distilled water. A 0.5-ml portion of this suspension was placed in a 50-ml Erlenmeyer flask containing 10 ml of 0.5 M succinate buffer (pH 5.0), one of the arsenic compounds, and a compound whose effect on trimethylarsine production was to be examined. The flasks were sealed with the rubber stoppers and incubated at 30°C on a reciprocal shaker. At regular intervals, the head space was analyzed for its content of trimethylarsine. The 10% aqueous solutions of KNO3, KH2PO4, NaH2PO4, (NH4)2HPO4, Na2HPO3, NaH2PO2, and NaSbO3 were adjusted to pH 5.0 before portions were added to representative cultures. To determine dry weights, duplicate 0.5-ml samples of the culture were placed on tared planchets and dried overnight at 110°C. The planchets were weighed after having cooled to room temperature in a desiccator. Trimethylarsine evolution often was detected only after several hours, and analyses were not routinely performed during this initial period. At the time of the first analysis in some experiments, therefore, some volatile arsenic was found, and the data showing this initial level are presented as the trimethylarsine concentration in the head space at zero time.

RESULTS

Trimethylarsine was produced by cultures of C. humicola growing in media supplemented with 0.10% of the sodium salts of arsenate, arsenite, monomethylarsonate, or dimethylarsinate. The latter three may be intermediates (Fig. 1). The greatest amount of trimethylarsine relative to cell density was formed in cultures containing arsenate. Lesser quantities were generated in the presence of arsenite and monomethylarsonate even though the arsenic concentrations in these instances were higher than in flasks containing arsenate and dimethylarsinate. The optimum pH for trimethylarsine production was determined by suspending washed cells in four different buffers, each containing 1.0 g of sodium arsenate per liter. Citrate (0.05 M) was used at pH 3.0, 4.0, and 5.0, and 0.05 M maleate-tris(hydroxymethyl)aminomethane buffer was used at pH 6.0. The buffer at pH 5.0 supported the greatest activity and gave the largest product yield (Fig. 2), and it was thus used in subsequent studies, although succinate was used at pH 5.0 in place of the citrate. The possible effect of phosphate on the formation of this toxic gas by growing cultures was studied by comparing the product yield in media to which were added 0.01 and 0.10% KH2PO4.
The lower level was required to allow for adequate growth of the organism, and this concentration was present in all cultures. Additional phosphorus (0.1% KH2PO4) was added to half the flasks after growth was proceeding. Addition of the higher phosphate level reduced trimethylarsine production from arsenate, arsenite, and methylarsonate (Fig. 3). The effect was not the result of a pH change from the added phosphate, because the phosphate was adjusted to pH 5.0 before it was introduced. By contrast, the high phosphate levels did not reduce arsenic volatilization in cultures containing dimethylarsinate, and an addition of even up to 0.8% KH2PO4 did not affect trimethylarsine evolution from this substrate. The influence of other phosphorus salts on trimethylarsine production was also tested with growing cultures. The addition of Na2HPO3 to give a final concentration of 0.10% did not affect trimethylarsine biosynthesis in media containing 0.10% of the salts of arsenate or dimethylarsinate. However, this level of phosphite inhibited arsenic volatilization when the medium contained methylarsonate (Fig. 4). On the other hand, phosphite did not reduce the rate of growth in solutions containing methylarsonate. The addition of NaH2PO2 to growing cultures to a final concentration of 0.10% had a different effect than that of the other phosphorus compounds. Introduction of hypophosphite into C. humicola cultures caused an immediate reduction or even a cessation of trimethylarsine evolution from arsenate, methylarsonate, and dimethylarsinate (Fig. 5). However, this inhibition disappeared rapidly, and the rate of formation of volatile arsenic then became essentially the same as that in the cultures not treated with hypophosphite. A study was then made of the influence of varying the hypophosphite concentration on trimethylarsine production from dimethylarsinate by growing cultures. A distinct inhibition was again noted (Fig. 6), but this inhibition was soon overcome and arsenic gas production recommenced. The length of the period before trimethylarsine evolution resumed was proportional to the hypophosphite concentration, and the rate after the period of suppression was essentially the same as in untreated cultures. The temporary suppression by large amounts of hypophosphite may have resulted from a short-lived inhibition of growth. Thus, measurements of the absorbancy at 525 nm demonstrated that the addition of 0.05 to 0.8% NaH2PO2 led to an immediate cessation of growth, but proliferation resumed at the same rate as in the untreated flasks within periods of about 1 h or less. The inhibition of trimethylarsine production sometimes was as long as 2.5 to 3 h at 0.4 and 0.8% NaH2PO2. Relief of the toxicity may be a consequence of the conversion of hypophosphite to a nontoxic product. Subsequent studies were conducted with resting cell suspensions of C. humicola to eliminate possible effects of the test compounds on multiplication. An experiment was conducted to determine whether these cell suspensions would have greater activity if the organisms were previously grown in media containing an arsenical than if grown in solutions devoid of this element. The organism was thus cultured in medium with and without arsenate, and the cells were washed and then incubated in the presence of 0.10% NaH2AsO4. Cells from arsenic-free media generated the alkylarsine only after a long lag period (Fig. 7).
A short lag phase was evident in cells collected from media with 0.01 and 0.05% arsenate. By contrast, cells from media with 0.20% arsenate began to synthesize trimethylarsine with essentially no lag, and the reaction was linear with time. Phosphate reduced the extent of trimethylarsine formation by resting cells. The arsenic source in these tests was 0.1% NaH2AsO4, and the cells were suspended in succinate buffer at pH 5.0. The rates of gas evolution depicted in Fig. 8 demonstrate a modest inhibition at 0.001% KH2PO4 and a significant depression at concentrations of 0.005% or greater. Almost no activity was observed in the presence of 0.05% of the phosphate salt. The results were similar whether the phosphate was provided as the sodium or the ammonium salt, demonstrating that the anion is probably responsible for the inhibition. To determine whether other elements of group 5 would suppress this activity, resting cells were incubated with 0.10% NaH2AsO4 in the presence of various concentrations of KNO3, NaH2PO2, NaSbO3, and KH2PO4. Nitrate and hypophosphite were without significant effect (Fig. 9). Antimonate was not toxic at the lower concentrations, but a reduction in activity occurred in the presence of 0.10% NaSbO3. The solubility of the antimony salt is low, and a greater inhibition might have been evident were more of the antimonate in solution. Orthophosphate, as before, at 0.01% almost totally abolished activity.

DISCUSSION

The observation that phosphate does not inhibit the conversion of dimethylarsinate to trimethylarsine by C. humicola is of interest in light of the report by Da Costa (2) that phosphate failed to relieve the toxicity of dimethylarsinate to arsenic-tolerant and arsenic-sensitive fungi, although it did overcome arsenate and arsenite toxicity. Considering the present findings on the phosphate inhibition of trimethylarsine production from the test arsenicals, it may be postulated that phosphate suppresses gas evolution by blocking the conversion of the arsenicals to trimethylarsine at a stage between the mono- and dimethylarsenic compounds. If the inhibition of fungi noted by Da Costa (2) results from the formation of a single toxicant from the various arsenicals he tested, then phosphate may overcome the inhibition by arsenate and arsenite by blocking its formation. This hypothetical inhibitor might then be a product of the sequence leading to trimethylarsine production, and the relief of arsenic inhibition of fungi by phosphate and of gas evolution by C. humicola may have a common basis. The finding that C. humicola is able to convert widely used arsenic-containing pesticides to a volatile product, herein identified as trimethylarsine, reemphasizes the need for a reassessment of the widespread use of such pest-control agents. Whether the final product is dimethylarsine, as reported by McBride and Wolfe (6), or trimethylarsine, the potential release of such potent toxicants from treated soils should be examined to avoid possible instances of human intoxication. Nevertheless, with the exception of Epps and Sturgis (5), who reported an unidentified volatile arsenic compound to be released from soil, and the observation that a substance with a garlic-like odor is evolved after soil treatment with dimethylarsinate (Woolson and Kearney, Environ. Sci. Technol., in press), no study has been made of the possibility of gas evolution from soils naturally containing arsenic compounds or treated with arsenic-containing pesticides.
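As an aside on the quantification step, headspace GC-FID measurements of this kind are typically converted to concentrations with an external calibration curve. The snippet below is a generic illustration of that workflow, not the authors' procedure; all standard concentrations and peak areas are invented.

```python
# Generic external-calibration workflow for headspace GC-FID data:
# fit a straight line through standards, then invert it for unknowns.
# All numbers below are invented, purely for illustration.
import numpy as np

# Trimethylarsine standards (nmol per ml headspace) and their peak areas.
std_conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
std_area = np.array([1.1e4, 2.3e4, 4.4e4, 9.0e4, 1.79e5])

# Least-squares fit of area = slope * conc + intercept.
slope, intercept = np.polyfit(std_conc, std_area, 1)

def area_to_conc(area):
    """Invert the calibration line: peak area -> concentration."""
    return (area - intercept) / slope

sample_area = 6.2e4
print(f"headspace concentration: {area_to_conc(sample_area):.2f} nmol/ml")
```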
The selfish germ

Curiosity about the sex life of a wasp led to a new way of thinking and a powerful demonstration that evolutionary science could be predictive. That same approach could help find ways to slow or prevent treatment failures in cancer and infectious diseases.

Sometime in the early 1960s, a graduate student read "with near incredulity" a 1922 paper about a tiny wasp. The wasps were fully sexual but produced almost no males. Back then, expert opinion was that natural selection favored 1:1 sex ratios. The student, W. D. Hamilton, was so puzzled that he started a file called "The Melittobia Problem," named for the insect [1]. Several years later, he published the answer. It was incest. Melittobia females mate with their brothers and their sons. Simple calculus led Hamilton to predict for any organism that the more inbreeding mothers do, the fewer sons they will produce [2]. That was something unheard of in biology: a novel, inescapably quantitative prediction of staggering generality. Field naturalists went looking. It turned out he was right [3]. What Hamilton had done was to reason mathematically about how natural selection must work in the environment in which organisms are actually living. That way of thinking, selection thinking, was later explained by Richard Dawkins in The Selfish Gene. To understand trait evolution, you have to compare the survival and reproduction consequences of the possible states that a trait could take. It seems obvious now, and a few were thinking that way back then, but they were mostly doing it without mathematics. Hamilton showed that relatively simple calculations could get you both rigor and prediction. I well remember discovering as a graduate student the sheer and utter beauty of it all. Towards the end of my PhD, I had to try it myself. If Hamilton's sex ratio theory was that good, it should apply to my new interest, malaria parasites. Take a quick look at Fig 1. The details aren't important. Absent Hamilton's theory, there is no reason to even consider drawing such a plot. But after we had done a little algebra, it was clear that I would be doing just that because, radically, Hamilton's theory predicted that malaria parasite populations should not be all over that space or even clustered around 1:1 but, instead, that evolution should have put them in the small region between the blue lines. The first thing I did when I set up my own lab in the early 1990s was test that prediction by sampling sex ratios in parasite populations from around the world. I still recall the spine tingles each time new data came in. Within measurement error, every one of the populations fell into the zone [4]. Wow. The solution to a problem identified in an obscure wasp applied to microbes that were then killing over a million people a year. Wow. Hamilton's theory, it turned out, was that good. Reviewers were unimpressed. As they were at pains to point out, Hamilton's theory was by then well verified and nobody really cares about malaria sex ratios (they don't make you sick). But I was stunned: selection thinking had empowered me to make a novel prediction that checked out. That's how grown-up science was supposed to work, and I could do it! From that success came the confidence to bet my career on the next step: applying selection thinking to more complex microbial traits, not least those that do make people sick. And that's been very rewarding because it has let me see things I never expected.
One was the realization that some vaccines can enable more virulent pathogens to evolve [5]. That claim proved highly controversial, but it takes just a few lines of Hamiltonian-like calculus to get there from the widely accepted view that pathogens that are too nasty have no future. Some years after we did the calculus, our experiments in lab and farm animals provided proof of principle [6,7]. Another thing I got to see was that it is possible to make insecticides for malaria control that will never be undermined by insecticide resistance. An early success of selection thinking, to which Hamilton also made seminal contributions [1], was the insight that natural selection does not favor life-enhancing genes if they only act late in life. That explains why we age. It also means that insecticides that only act on older mosquitoes, the ones that transmit malaria, cannot fail even when insecticide-resistance genes are present [8]. There is no selection to survive insecticide exposure if death will happen anyway. The prospect of evolution-proof insecticides got me into a third area, one that has yet to fully work itself out. Evolution kills about 600,000 people each year in America alone. Cancers have a ferocious capacity to evolve themselves around the insults oncologists throw at them, and an increasing number of people die from infections that were readily treated with antibiotics when I was a graduate student. Could selection thinking suggest ways to slow or redirect that life-threatening resistance evolution? A growing number of us think so [9-16]. Consider, for instance, the following challenge. How should we treat patients when resistance to medications is already present in a tumor or infection and there are no other options? Standard practice is to aggressively treat infections and cancers to stop them becoming resistant. But that notion always troubled me. Drug use causes drug resistance. So why keep taking drugs when the patient no longer feels sick, especially when the drugs themselves sicken? For sure, killing cancer cells or infectious agents stops them becoming resistant (dead things don't mutate). But a firestorm of drugs removes the competitors of the very things we fear: the cells and bugs we can't kill. Expert opinion is that stopping mutations is more important than competition, and if that's right, today's standard practice is indeed best. But where it is wrong, less aggressive regimens may be better [15]. There are circumstances in which patients will live longer if drugs are used to contain life-threatening tumors and chronic infections rather than try to cure them. Selection thinking makes it possible to define those situations [16]. Could selection thinking really prolong lives? The math is harder for therapy resistance than for sex ratios, but the fundamentals are the same, and so it should work. Selection thinking is by no means the only way to try to head off the real-time evolution that harms human well-being, but it is one way. And in no small part, we have it because a graduate student got curious about the sex life of a little wasp.
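For readers who want the formula behind the wasp story: under Hamilton's local mate competition model, if n foundress females share a patch and their offspring mate among themselves, the unbeatable proportion of sons is (n - 1)/(2n), so a single fully inbreeding foundress should produce almost no males. A minimal sketch of that calculation follows; the function name and the printed table are mine, for illustration.

```python
# Hamilton's (1967) local mate competition result: with n foundresses
# per patch, the evolutionarily stable proportion of sons is (n-1)/(2n).

def stable_son_fraction(n_foundresses: int) -> float:
    """Unbeatable proportion of male offspring under local mate competition."""
    if n_foundresses < 1:
        raise ValueError("need at least one foundress")
    return (n_foundresses - 1) / (2 * n_foundresses)

for n in (1, 2, 5, 100):
    print(f"{n:>3} foundresses -> {stable_son_fraction(n):.3f} sons")
# 1 foundress (full sib-mating, as in Melittobia) -> 0.000: almost no males.
# As n grows, the value climbs toward the familiar 1:1 ratio (0.5)
# expected under random outbreeding.
```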
Crystal structure of 3-(benzo[d]thiazol-2-yl)-6-methyl-2H-chromen-2-one

The title molecule is almost planar, with an intramolecular S⋯O=C contact. The packing is a layer structure with dimeric units connected by a C—H⋯O=C hydrogen bond.

Chemical context

Benzothiazole and its derivatives are important heterocyclic aromatic compounds. Benzothiazole can be readily substituted at the C-2 position of the thiazole ring. Compounds containing a benzothiazolyl group have found numerous applications in medicine and in nonlinear optics (Sigmundová et al., 2007). Benzothiazole derivatives can also display strong fluorescence and luminescence in the solid state and in solution (Wang et al., 2010). Benzothiazole compounds incorporated in organic light-emitting diodes have attracted substantial attention because of their notable photovoltaic properties (Ghanavatkar et al., 2020). Recently, we have synthesized novel heterocyclic derivatives involving the benzothiazole moiety, many of which showed significant biological and fluorescence activities (Khedr et al., 2022). Coumarin is a natural product with the systematic name 2H-chromen-2-one. Its derivatives represent an important class of organic heterocycles. Thus, they are constituents of many intensively investigated and commercially important organic fluorescent materials; they also display important biological activities and are found in synthetic drugs (Curini et al., 2006). Furthermore, many coumarin compounds are current photosensitizers with valuable applications in medicinal chemistry (Bansal et al., 2013). Because of their photochemical properties, coumarin compounds have found applications in nonlinear optical materials, solar energy collectors and charge-transfer agents (Kim et al., 2011), and also as daylight fluorescent pigments, tunable dye lasers, fluorescent probes and components of organic light-emitting diodes (Christie & Lui, 2000). The emission intensities of coumarin chromophores depend on the nature of their substituents at various sites (Żamojć et al., 2014). In two recent papers (Abdallah et al., 2022, 2023), we have given a more extensive list of references concerning the properties of benzothiazoles and coumarins, including many of our own publications in these fields. Some decades ago, we reported the syntheses and characterizations of novel coumarin derivatives that have found applications as laser dyes (Elgemeie, 1989); these included 3-(benzo[d]thiazol-2-yl)-2H-chromen-2-one, the desmethyl analogue of title compound 4, a benzothiazole-based coumarin derivative which was synthesized by the reaction of 2-(cyanomethyl)benzothiazole with salicylaldehyde. Afterwards, other research groups utilized essentially the same reaction to synthesize different derivatives of the same heterocyclic framework, including compound 4 (Chao et al., 2010; Makowska et al., 2019).

Structural commentary

The structure of compound 4 is shown in Fig. 2. Its bond lengths and angles may be regarded as normal; a selection is presented in Table 1. The chromene and benzothiazole ring systems are planar as expected, with respective r.m.s. deviations of 0.020 and 0.015 Å; the angle between these planes is only 3.01 (3)°, so that the whole molecule is almost planar. A short intramolecular S11⋯O2 contact of 2.792 (1) Å is observed. The desmethyl analogue (Abdallah et al., 2022) has, as expected, a closely similar molecular structure, with an S⋯O=C contact of 2.727 (2) Å and an interplanar angle of 6.47 (6)°, but is not isotypic to 4.
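The interplanar angle quoted above is the angle between the least-squares planes fitted through the two ring systems. A minimal sketch of that computation is given below; the coordinates are invented placeholders, not the published atomic positions.

```python
# Angle between two least-squares planes fitted through atom positions,
# the quantity reported as 3.01 (3) degrees for the chromene and
# benzothiazole ring systems. Coordinates here are invented.
import numpy as np

def plane_normal(xyz):
    """Unit normal of the least-squares plane through points xyz (N x 3)."""
    centered = xyz - xyz.mean(axis=0)
    # The right singular vector with the smallest singular value is
    # the normal of the best-fit plane.
    _, _, vt = np.linalg.svd(centered)
    return vt[-1]

def interplanar_angle(xyz_a, xyz_b):
    """Angle in degrees between the least-squares planes of two atom sets."""
    cos_t = abs(plane_normal(xyz_a) @ plane_normal(xyz_b))
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

# A roughly hexagonal, nearly flat ring with small out-of-plane jitter.
ring_a = np.array([[0.0, 0.0, 0.00], [1.4, 0.0, 0.01],
                   [2.1, 1.2, 0.02], [1.4, 2.4, 0.00],
                   [0.0, 2.4, -0.01], [-0.7, 1.2, 0.00]])
ring_b = ring_a.copy()
ring_b[:, 2] += 0.05 * ring_b[:, 0]   # tilt the second ring by ~2.9 deg

print(f"{interplanar_angle(ring_a, ring_b):.2f} deg")
```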
Database survey

The searches employed the routine ConQuest (Bruno et al., 2002), part of Version 2022.3.0 of the Cambridge Structural Database (Groom et al., 2016). A search for structures containing both a coumarin and a benzothiazole ring system in the same residue gave 16 hits. After excluding ring systems with more extended annelation and molecules where the ring systems were not directly bonded to each other, 7 hits remained. In all of these, the benzothiazole was bonded via its 2-position. The structure with refcode AKUCUG (Bakthadoss & Selvakumar, 2016)

Refinement

The title crystal was a non-merohedral two-component twin. The orientations are related by a 180° rotation around the reciprocal axis c*. The structure was refined using the HKLF5 method, whereby the relative volume of the smaller twin component refined to 0.387 (1). For non-merohedral twins refined in this way, Rint is not a valid concept, and the number of reflections should be interpreted with caution, because the equivalent reflections in the intensity file have already been merged. Crystal data, data collection and structure refinement details are summarized in Table 3. The methyl group was included as an idealized rigid group allowed to rotate but not tip (C-H = 0.98 Å and H-C-H = 109.5°). Other H atoms were included using a riding model starting from calculated positions (aromatic C-H = 0.95 Å). The Uiso(H) values were fixed at 1.5 times Ueq of the parent C atoms for methyl groups and at 1.2 times Ueq for the other H atoms. Computer programs: CrysAlis PRO (Rigaku OD, 2022), SHELXT (Sheldrick, 2015a), SHELXL2018 (Sheldrick, 2015b) and XP (Siemens, 1994).

Figure 3. Packing diagram of compound 4, viewed perpendicular to (211). Dashed lines indicate 'weak' hydrogen bonds. Atom labels indicate the asymmetric unit. H atoms other than H17 have been omitted.

3-(Benzo[d]thiazol-2-yl)-6-methyl-2H-chromen-2-one: crystal data. Special details, geometry: all e.s.d.s (except the e.s.d. in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell e.s.d.s are taken into account individually in the estimation of e.s.d.s in distances, angles and torsion angles; correlations between e.s.d.s in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell e.s.d.s is used for estimating e.s.d.s involving l.s. planes. Fractional atomic coordinates and isotropic or equivalent isotropic displacement parameters (Å2)
Sp1 Plays a Key Role in Vasculogenic Mimicry of Human Prostate Cancer Cells

The Sp1 transcription factor regulates genes involved in various phenomena of tumor progression. Vasculogenic mimicry (VM) is an alternative form of neovascularization carried out by aggressive tumor cells. However, there is no evidence of a relationship between Sp1 and VM. This study investigated whether and how Sp1 plays a crucial role in the process of VM in the human prostate cancer (PCa) cell lines PC-3 and DU145. A cell viability assay and a three-dimensional culture VM tube formation assay were performed. Protein and mRNA expression levels were detected by Western blot and reverse transcriptase-polymerase chain reaction, respectively. Nuclear twist expression was observed by immunofluorescence assay, and a co-immunoprecipitation assay was performed. Mithramycin A (MiA) and Sp1 siRNA significantly decreased serum-induced VM, whereas Sp1 overexpression caused a significant induction of VM. Serum-upregulated vascular endothelial cadherin (VE-cadherin) protein and mRNA expression levels were decreased after MiA treatment or Sp1 silencing. The protein expression and nuclear localization of twist were increased by serum, effects that were effectively inhibited after MiA treatment or Sp1 silencing. The interaction between Sp1 and twist was reduced by MiA. On the contrary, Sp1 overexpression enhanced VE-cadherin and twist expression. Serum induced the phosphorylation of AKT and raised matrix metalloproteinase-2 (MMP-2) and laminin subunit 5 gamma-2 (LAMC2) expression. MiA or Sp1 silencing impaired these effects, whereas Sp1 overexpression upregulated phospho-AKT, MMP-2 and LAMC2 expression. Serum-upregulated Sp1 was significantly reduced by an AKT inhibitor, wortmannin. These results demonstrate that Sp1 mediates VM formation through interaction with the twist/VE-cadherin/AKT pathway in human PCa cells.

Introduction

Prostate cancer (PCa) is a common cancer among men around the world [1]. PCa spreads to nearby organs and tissues and to other parts of the body, including lymph nodes and bones [2]. After spreading, cancer cells attach to other tissues and grow to form new tumors that can cause damage where they land [3]. Reportedly, a quarter of men with PCa worldwide have metastatic disease, and the 5-year survival rate of patients with metastasis to distant sites is 29% [4]. PCa cells are known to be highly aggressive [5]. Since these tumors need a blood supply to grow and to spread through the circulation [6], shutting off the blood supply is important for preventing tumor growth and metastasis in PCa. Vasculogenic mimicry (VM), discovered in 1999 [7], is an alternative form of neovascularization by aggressive tumor cells that functions like the blood vessels formed by endothelial cells (ECs), but without the presence of ECs [8-10]. A blood supply is indispensable for cancer cells to grow and metastasize because it provides oxygen and nutrients [11]. However, the therapeutic efficacy of drugs targeting only EC-lined blood vessels is limited, because tumors can secure an adequate blood supply through alternative patterns such as VM [12-14]. VM has been observed in various types of cancer, including PCa, and meta-analyses relate it to poor prognosis [15,16]. Overall survival (OS) and disease-free survival (DFS) were significantly lower in VM-positive PCa patients [17]. Since VM has essential effects on tumor progression, it is a new therapeutic target for improving outcomes in cancer patients, including those with PCa.
The Sp1 transcription factor is overexpressed in many types of cancer cells, including PCa, and controls several genes involved in many cellular processes, including cell differentiation, cell growth, apoptosis, angiogenesis, and the response to DNA damage [18-21]. Additionally, it contributes to the progression and metastasis of PCa [21]. Therefore, Sp1 is an attractive target for cancer treatment in PCa patients. Although there are many studies on the functions of Sp1, there is no evidence of a relationship between Sp1 and VM formation. Among human PCa cell lines, PC-3 and DU145 cells have a strong capacity for VM formation compared with LNCaP cells [22]. Thus, this study investigated whether and how Sp1 affects VM formation in human PCa PC-3 and DU145 cells.

Sp1 Mediates VM Formation in PCa Cells

PC-3 cells were treated with increasing concentrations of serum for 24 h, and the expression level of Sp1 was then checked by Western blot. Sp1 was dramatically upregulated by serum in a dose-dependent manner (Figure 1A). Since serum promotes VM formation in PC-3 cells [23], a loss-of-function approach was introduced to determine the role of Sp1 in VM formation. PC-3 cells were treated with a selective Sp1 inhibitor, mithramycin A (MiA), or were transfected with siRNA targeting the Sp1 gene. First, a cell viability assay was performed to determine non-cytotoxic concentrations of MiA and Sp1 siRNA. There was no cytotoxic effect of MiA up to 200 nM or of siRNA up to 15 nM (Figure 1B,C). This study used 100 and 200 nM MiA or 15 nM siRNA for subsequent experiments. Serum-upregulated Sp1 expression was effectively inhibited by MiA or Sp1 silencing (Figure 1D,E). To determine whether Sp1 is associated with VM formation, a 3D culture VM formation assay was performed in PC-3 cells after MiA treatment or transfection with Sp1 siRNA. Serum stimulation led to the induction of tubular channels by PC-3 cells, which was effectively reduced by MiA in a dose-dependent manner (Figure 1F). Similarly, Sp1 silencing had an obvious inhibitory effect on the serum-induced formation of tubular channels (Figure 1G). To verify the role of Sp1 in VM formation, a Western blot for Sp1 after serum treatment and transfection with Sp1 siRNA, and a 3D culture VM formation assay after transfection with Sp1 siRNA, were performed in another PCa cell line, DU145. Consistent with the results from PC-3 cells, Sp1 was upregulated by serum in DU145 cells (Figure 2A). Additionally, serum-induced VM formation was significantly reduced after Sp1 silencing in DU145 cells (Figure 2B,C). To confirm a novel functional role of Sp1 in VM formation, a gain-of-function approach was introduced using an Sp1 CRISPR activation plasmid in both PC-3 and DU145 cells. Sp1 overexpression caused an effective increase in VM tubular formation compared with the control plasmid without serum in both PC-3 (Figure 3A) and DU145 cells (Figure 3B) in a 3D culture VM formation assay. Taken together, Sp1 silencing inhibited serum-stimulated VM formation, whereas Sp1 overexpression triggered VM formation in PCa cells, suggesting that Sp1 is required to induce VM formation in PCa cells.

Figure 1. Western blot was performed in MiA-treated cells with serum (D) and in siRNA-transfected cells with serum (E) for 24 h. Cell viability was measured by MTT assay in MiA-treated cells with serum (B) and in siRNA-transfected cells with serum (C) for 24 h. VM tube formation assay was carried out in MiA-treated cells with serum (F) and in siRNA-transfected cells with serum (G).
After 16 h incubation, images were obtained under an inverted light microscope at 40× magnification. Scale bar = 250 µm. The number of formed VM structures was counted. Data are shown as mean ± SD and were statistically calculated by one-way ANOVA followed by Tukey's studentized range test. * Means with different letters are significantly different between groups.

Figure 3. Sp1 mediates VM formation in PCa cells. Western blot was performed in PC-3 cells (A) and DU145 cells (B) after transfection with CRISPR activation plasmid. VM tube formation assay was carried out in PC-3 cells (C) and DU145 cells (D) after transfection with CRISPR activation plasmid. After 16 h incubation, images were obtained under an inverted light microscope at 40× magnification. Scale bar = 250 µm. The number of formed VM structures was counted. Data are shown as mean ± SD and were statistically calculated by one-way ANOVA followed by Tukey's studentized range test. * Means with different letters are significantly different between groups.

Sp1 Upregulates VE-Cadherin Expression through the Nuclear Twist in PC-3 Cells

To reveal whether Sp1 affects the expression of VE-cadherin to induce VM formation, a Western blot was conducted in PC-3 cells. Serum upregulated VE-cadherin protein expression, which was attenuated by MiA in a dose-dependent manner (Figure 4A). Additionally, VE-cadherin protein expression by serum was markedly inhibited in Sp1 siRNA-treated cells (Figure 4B).
However, Sp1 overexpression slightly upregulated VE-cadherin protein expression without serum (Figure 4C). To assess whether the VE-cadherin protein level was affected at the transcriptional level, the mRNA expression level of VE-cadherin was detected by RT-PCR. Consistent with the protein expression of VE-cadherin, the serum-upregulated mRNA level of VE-cadherin was decreased after treatment with MiA (Figure 4D) or Sp1 siRNA (Figure 4E). These results indicated that Sp1 regulates VE-cadherin expression at the transcription level. To identify the transcriptional regulation of VE-cadherin, a Western blot and immunofluorescence analysis were performed in PC-3 cells. Twist was elevated by serum, and this elevation was decreased by MiA treatment (Figure 5A) or Sp1 silencing (Figure 5B). However, the overexpression of Sp1 increased the expression level of twist without serum compared with the control plasmid (Figure 5C). Immunofluorescence staining showed that the serum-enhanced twist expression in the nucleus was attenuated after MiA treatment (Figure 5D) or Sp1 silencing (Figure 5E). As shown in Figure 5F, the interaction between Sp1 and twist was induced by serum and was significantly reduced by MiA treatment. Taken together, these results demonstrated that the nuclear twist upregulates VE-cadherin expression, a process mediated by Sp1.

Figure 5. After incubating with twist antibody (green) followed by FITC-conjugated secondary antibody, the nuclei were counterstained with propidium iodide (red). Images were obtained by a fluorescence microscope at 400× magnification. Scale bar = 40 µm. (F) Co-IP was performed in MiA-treated cells with serum. IgG: negative control. Data were statistically calculated by one-way ANOVA followed by Tukey's studentized range test. * Means with different letters are significantly different between groups.

Sp1 Promotes the Activation of AKT Pathway in PC-3 Cells

To investigate whether Sp1 is involved in the AKT pathway to induce VM formation, Western blot analysis was performed in PC-3 cells. The phosphorylation of AKT and the expression levels of MMP-2 and LAMC2 were augmented by serum.
MiA treatment (Figure 6A) or Sp1 silencing (Figure 6B) decreased the effects of serum. In contrast, Sp1 overexpression elevated the phosphorylation of AKT and the expression levels of MMP-2 and LAMC2 without serum compared to the control plasmid (Figure 6C). Serum-upregulated Sp1, but not twist, was significantly reduced by the AKT inhibitor, wortmannin (Figure 6D). These results indicate that Sp1 contributes to the activation of VM-related AKT signaling. Additionally, Sp1 expression was itself controlled by AKT signaling.

Discussion

VM is the formation of a vessel-like network lined by cancer cells. The function of VM is similar to that of blood vessels formed by ECs [8-10]. VM strongly participates in tumor invasion, metastasis, and growth through a blood supply and is closely related to poor prognosis in cancer patients [8,15,24]. VM-positive PCa patients showed high Gleason scores and distant metastasis as well as short OS and DFS [17]. The Sp1 transcription factor plays a crucial role in the progression and metastasis of PCa [21]. However, the involvement of Sp1 in VM formation has not been determined yet. Therefore, this study investigated a novel functional role of Sp1 in the process of VM in human PCa cells. Since a previous study demonstrated that serum promotes VM formation in human PCa PC-3 cells [23], this study focused on Sp1 to explore the underlying molecular mechanism of VM. As expected, serum dramatically upregulated the expression of Sp1 at the protein level in both PCa PC-3 and DU145 cells (Figures 1A and 2A). To elucidate a novel functional role of Sp1 in VM formation, MiA and Sp1 siRNA were used for a loss-of-function approach and an Sp1 CRISPR activation plasmid was used for a gain-of-function approach. The inhibition of Sp1 by MiA and Sp1 siRNA caused a complete blockage of VM formation induced by serum (Figures 1 and 2). On the contrary, despite the absence of serum, the overexpression of Sp1 by the CRISPR activation plasmid was sufficient to form VM (Figure 3). Therefore, these results clearly demonstrate that Sp1 may be an important factor in the process of VM formation in PCa cells. Highly aggressive tumor cells, but not non-aggressive tumor cells, overexpress VE-cadherin [25]. VE-cadherin, an endothelial-specific junction molecule, is a biomarker of VM and plays a crucial role in VM formation [26-28]. The endothelial-specific transcriptionally active region of VE-cadherin contains the Sp1 binding site [29,30], highlighting the relationship between Sp1 and VE-cadherin. In this study, the serum-upregulated expression of VE-cadherin at the protein and mRNA levels was decreased after treatment with MiA or Sp1 siRNA (Figure 4), highlighting the transcriptional regulation of VE-cadherin expression. However, the overexpression of Sp1 upregulated the protein expression of VE-cadherin (Figure 4C). Twist is a transcription factor that regulates the expression of VE-cadherin [31,32]. Twist has been reported to be associated with tumor metastasis and angiogenesis [33] and also regulates VM formation [32].
In this study, serum-treated PC-3 cells were found to increase the expression of twist in the nucleus, and this increase was reduced by the inhibition of Sp1 by MiA or siRNA (Figure 5A,B). However, the overexpression of Sp1 elevated the protein expression of twist (Figure 5C). Sp1 interacted with twist, and this interaction was significantly reduced by MiA treatment (Figure 5F). Taken together, these results revealed that Sp1 regulates the expression of VE-cadherin by interacting with twist in the nucleus. Multiple signaling pathways such as AKT, FAK, hypoxia, and nodal/notch contribute to VM formation [8,24]. Among them, AKT acts downstream of VE-cadherin and is activated by it [8,34]. Activated AKT then elevates the expression of matrix metalloproteinases (MMPs) such as MMP-2 and MMP-14, thereby leading to VM formation through the remodeling of the extracellular matrix, including LAMC2 [8,24]. Additionally, AKT promotes cancer cell growth, proliferation, and malignant behavior [35]. A previous study demonstrated that the AKT/MMP-2/LAMC2 signal transduction pathway participates in VM formation in response to serum [23]. Sp1 knockdown suppressed tumor progression by inhibiting AKT and ERK signaling [36]. AKT-mediated VEGF mRNA expression required Sp1 [37]. These reports indicated that Sp1 may be involved in the AKT signaling pathway. In this study, the serum-induced phosphorylation of AKT in PC-3 cells decreased when Sp1 was suppressed by MiA or siRNA (Figure 6A,B). However, the overexpression of Sp1 enhanced the phosphorylation of AKT (Figure 6C). Meanwhile, AKT signaling also regulated Sp1 expression. Both serum-upregulated MMP-2 and LAMC2 expressions were decreased when Sp1 was inhibited by MiA or siRNA (Figure 6A,B). On the contrary, the overexpression of Sp1 enhanced the expression levels of MMP-2 and LAMC2 (Figure 6C). These results verified that Sp1 is involved in the AKT pathway to induce VM in PC-3 cells. In conclusion, this study demonstrated a novel functional role of Sp1 in VM formation through loss- and gain-of-function approaches; these results are summarized in Figure 7. Sp1 regulated the expression of VE-cadherin through controlling the nuclear expression of the transcription factor twist. Sp1-induced upregulation of twist/VE-cadherin in turn activated the AKT pathway, including MMP-2 and LAMC2, thereby causing an induction of VM. Taken together, Sp1 plays a key role in VM formation through the twist/VE-cadherin/AKT pathway in human PCa cells. These results may provide a new therapeutic strategy for the treatment of PCa patients associated with VM through targeting Sp1.
Three-Dimensional (3D) Culture VM Tube Formation Assay VM tube formation was assessed as described previously [23,41]. Cells (3.6 × 10⁵) were seeded on a matrigel-polymerized 24-well plate and then treated with serum with or without MiA for 16 h at 37 °C. For siRNA-transfected cells, cells (3.6 × 10⁵) were seeded 48 h after transfection and then treated with serum. For CRISPR activation plasmid-transfected cells, cells (3.6 × 10⁵) were seeded 48 h after transfection without serum. Tubular shapes were counted after imaging using an inverted light microscope Ts2_PH (Nikon, Tokyo, Japan) at 40× magnification. Western Blot Analysis Western blot was performed in MiA-treated cells with serum and in siRNA-transfected cells with serum for 24 h, in CRISPR activation plasmid-treated cells, and in serum-treated cells with or without wortmannin (WM; Merck, Darmstadt, Germany) for 24 h. Total proteins were isolated using RIPA buffer (Thermo Scientific, Rockford, IL, USA) supplemented with phosphatase inhibitor cocktail (Thermo Scientific) and protease inhibitor cocktail (Thermo Scientific). The protein samples (30-35 µg) were separated by SDS-polyacrylamide gel (8-12%) electrophoresis and then transferred onto a membrane (Pall Corporation, Port Washington, NY, USA). The membrane was incubated with the indicated primary antibodies (Table 1) overnight at 4 °C, followed by incubation with specific secondary antibodies for 2 h at room temperature (RT). Protein bands were visualized using an enhanced chemiluminescence reagent (GE Healthcare, Chicago, IL, USA), and ImageJ 1.40g software (National Institutes of Health, Bethesda, MD, USA) was used to quantify each protein band. Isolation of RNA and Reverse Transcriptase Polymerase Chain Reaction (RT-PCR) Total RNA extraction was carried out in MiA-treated or Sp1 siRNA-transfected cells using TRIzol reagent (Invitrogen, Carlsbad, CA, USA). cDNA synthesis and PCR were performed as described previously [23]. ImageJ 1.40g software was used to quantify each PCR product band. Immunofluorescence Assay Cells were seeded on an 8-well chamber slide with serum, with or without MiA. For siRNA-transfected cells, cells (7 × 10⁴) were seeded on an 8-well chamber slide, transfected with siRNA for 48 h, and treated with serum. The immunofluorescence assay was performed as described previously [23]. Images were captured using an ECLIPSE Ts2-FL (Nikon, Tokyo, Japan) at 400× magnification. Co-Immunoprecipitation (Co-IP) Total cell lysate (300 µg) was mixed with 0.5 µg of twist antibody (Abcam plc., Cambridge, UK) for 1 h at 4 °C, and protein A/G agarose (Santa Cruz Biotechnology, Inc., Danvers, MA, USA) was then added for 1 h at 4 °C. The beads were collected by centrifugation and washed 3 times with lysis buffer. The immunoprecipitated protein complexes were analyzed by Western blot. Statistical Analysis All experiments were performed at least three times. Data are shown as mean ± standard deviation (SD).
All data were analyzed by one-way ANOVA followed by Tukey's studentized range test using GraphPad Prism software (GraphPad Software Inc., San Diego, CA, USA). Means with different letters are significantly different between groups.
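For readers who want to reproduce this kind of analysis outside GraphPad Prism, the sketch below shows an equivalent one-way ANOVA followed by Tukey's test in Python. It is a minimal illustration, not the authors' code; the group names and band intensities are hypothetical placeholders.

```python
# Illustrative re-creation (not the authors' code) of the reported analysis:
# one-way ANOVA followed by Tukey's test on triplicate band intensities.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical normalized band intensities (n = 3 per group)
groups = {
    "control":        [1.00, 0.95, 1.05],
    "serum":          [2.10, 2.30, 2.00],
    "serum_plus_MiA": [1.10, 1.20, 0.95],
}

f_stat, p_val = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

if p_val < 0.05:  # post-hoc pairwise comparisons only if the omnibus test is significant
    values = np.concatenate(list(groups.values()))
    labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
    print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```

Tukey's HSD as implemented in statsmodels is the standard equivalent of Prism's studentized range comparison of all group means.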
2022-01-28T16:08:31.631Z
2022-01-25T00:00:00.000
{ "year": 2022, "sha1": "7541048ee1c761fbfd810a8cbc1626c942007b4b", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/23/3/1321/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3d0d74b9df32b7510be3089d3c61bca04b9d8c6a", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
244628092
pes2o/s2orc
v3-fos-license
Factors Associated With Low Relapse Rates of Schizophrenia In Southern Thailand: A University Hospital-Based Study Background: Schizophrenia is a chronic disease that has residual symptoms and relapse. Relapse prevention research will provide useful knowledge for the employment of an effective caring process. This study aims to explore factors associated with relapse rates in a hospital with comparatively low relapse rates for schizophrenia. Method: Medical records of patients who had their first schizophrenia diagnosis in the Songklanagarind hospital's inpatient psychiatric unit were retrospectively reviewed for the period from January 2007 to December 2019. This yielded data outlining demographic information, profiles of schizophrenia, and treatment. Descriptive statistical analysis was utilized to process all data, and factors associated with relapse were investigated using bivariate and multivariate analyses. Results: Reviewed medical records identified a sample of 156 patients with schizophrenia. The majority were male (50.6%), Buddhist (85.9%), unmarried (80.1%), unemployed (50.6%), and living with their families (90.4%). Their mean age was 39.2 years. Relapse was defined as readmission to a psychiatric unit within 5 years after the first psychotic episode. Of the 156 patients, 53.8% featured relapse whereas 46.2% were in remission. Cumulatively, the first- to fifth-year relapse rates were 22.4%, 35.3%, 44.9%, 50.0%, and 53.8%, respectively. Multivariate analysis indicated that stressful life events, non-adherence to medication, prescription changes, and lack of insight were all factors with a statistically significant association with relapse rates. Conclusions: Stressful life events, adverse events, medication non-adherence, prescription changes, and lack of insight were related to relapse. Emphasizing multimodality of treatment could be key to successful relapse prevention for schizophrenia. Background Schizophrenia is a mental illness affecting about 7 per 1,000 adults globally [1] or approximately 1% of the world's population [2]. In addition, schizophrenia is a long-term debilitating chronic disease that has residual symptoms, including functional impairment, and requires lifelong medical care and supervision [1][2][3]. Therefore, biological treatment and psychiatric rehabilitation are necessary to improve patients' quality of life [4] and lessen the burden for their families [5,6]. Historically, the core strategy in the management of schizophrenia has been a combination of medication, ensuring patients gain insight into their condition, and the teaching of essential community-living skills, as integrating patients back into society and engaging in employment and other meaningful activities can reduce social stigma [7]. Although some schizophrenic patients can gain insight well, the reality of their condition causes them emotional pain due to social stigmatization. This often leads to them refusing to take their medication, leading to relapse. Relapse in schizophrenia is broadly defined as the reemergence or the worsening of psychotic symptoms [2]. Furthermore, this definition can include the aggravation of positive or negative symptoms, hospital admission, more intensive management, and/or changes in medication [8].
Systematic review and meta-analytic studies found that at least 80% of first-diagnosed schizophrenic patients had a relapse of symptoms within a minimum 12-month follow-up period [6]. Nevertheless, some studies identified that relapse rates vary from 50-92% globally [9][10]. Furthermore, most schizophrenic patients in treatment relapse within 5 years, and suicide might occur in up to 10% of them [3]. Several studies suggest that there is an association between suicidal ideation and the likelihood of relapse, with the probability of relapse being 9.1 times greater than in schizophrenic patients without suicidal ideation [11,12]. Additionally, the factors commonly associated with relapse include poor adherence or non-adherence to medication, ranging from 26-44%; as many as two-thirds of patients were at least partially poorly adherent, resulting in an increased risk of hospitalization [1,7]. However, persistent substance use disorder [13,14], poorer premorbid adjustment, co-morbid psychiatric illness [15], co-morbid medical or health-related problems [16,17], poverty, unemployment [13], stressful life events, family criticism or low family support [9,13], patient-provider relationships or the treatment setting [18,19], and aging also significantly predispose patients to the risk of relapse in schizophrenia [20]. In other words, relapse among schizophrenic patients from non-adherence to medication may be due to factors related to: patients (forgetfulness, anxiety about adverse effects, inadequate knowledge, lack of insight and motivation, and fear of stigma) [20][21][22]; healthcare (poor patient-healthcare provider relationship, poor service and access to services, poor staff training) [23]; socioeconomic aspects (low level of education, unemployment, cultural and social attitudes, belief systems) [6]; and treatment strategy (poly-pharmacology, complex treatment regimens) [24]. In addition, there is some evidence suggesting that drug treatment regimen history, drug administration route, and antipsychotic drug adverse effects can also contribute to schizophrenia relapse [25,26]. Nowadays, antipsychotic medication plays an effective role in symptom control in the management of schizophrenia, but continuous long-term treatment is required to ensure medication adherence, control symptoms, and prevent relapse and its consequences [27]. However, schizophrenic patients often relapse, even during treatment [2]. This makes a compelling case for the importance of promptly identifying relapse-associated factors, their contribution to relapse rates, and the significance of relapse prevention strategies in schizophrenia management [3,7]. Successive relapses can reduce the degree and duration of the next remission, worsen disability, and increase refractoriness to future treatment [28]. Relapse control has been a concern and an area of focus for many years in our healthcare setting, Songklanagarind Hospital. Our prior studies of relapse prevention were done aiming to assist in the maintenance of quality of life and to lessen the burden and stigma for the families of schizophrenic patients [4,5]. Additionally, our goals of social de-stigmatization were to boost social or community recognition and positive attitudes towards schizophrenic patients [29].
As a result, our schizophrenic patients have good medication adherence, and the relapse rate has been low compared to other countries [30]. Thus, this study aimed to evaluate the relapse rate among patients with schizophrenia within a five-year period and to identify factors associated with relapse in comparison with a group in remission. Methods The study protocol was performed in accordance with the relevant guidelines. After approval by the Ethics Committee of the Faculty of Medicine, Prince of Songkla University (REC: 63-523-3-4), a retrospective study was conducted at Songklanagarind Hospital, Thailand, an 800-bed university hospital serving as a tertiary referral center in Southern Thailand. A review was conducted of all the information in the medical records of schizophrenic patients from the hospital computer system. The inclusion criteria were: a first episode of schizophrenia as per ICD-10 codes F20.0 to F20.9, diagnosis by a psychiatrist, medical registration at the psychiatric inpatient unit from January 2007 to December 2019, and age ranging from 18-60 years. Patients with schizophrenia co-morbid with other mental illnesses (mood disorder, anxiety disorder) and those whose diagnosis changed were excluded. The patients were divided into 2 groups, the relapse group and the remission group. The relapse group contained the cases of patients with schizophrenia who had a symptom relapse within 5 years after the first episode of psychosis; we viewed information from their first episode of schizophrenia until their next relapse episode. The remission group contained the schizophrenic patients who did not have a history of symptom relapse within 5 years after the first episode of psychosis; we viewed their information from the first episode of schizophrenia through the 5-year follow-up period. The factors associated with relapse, as well as the protective factors found between the 2 groups, were compared and analyzed. Relapse was identified in cases of patients with schizophrenia who had documented evidence of reemergence or aggravation of psychotic symptoms and hospitalization to a psychiatric unit within 5 years after the first episode of psychosis. Planned hospital admission for a non-related illness or special investigation was not deemed to be a relapse [8]. Remission applied to patients with schizophrenia who did not have a history of relapse within 5 years after the first episode of psychosis. Measures Data records were reviewed by 5 psychiatrists and a content validity analysis was performed; the content validity index (CVI) score was 0.8. The records were composed of 2 parts: 1. Personal and demographic information, consisting of questions related to age, gender, religion, marital status, education level, occupation, patient income, support system, health coverage, physical illness, and substance usage. 2. Profile of schizophrenic disorder and treatment information, including residual symptoms, insight, stressful life events, suicidal ideation, psychiatric family history, cause of discontinuing antipsychotic drugs (patient-related, healthcare-related, or socio-economically related factors), number and duration of readmissions, follow-up interval, treatment regimen, type and route of drug administration, antipsychotic adverse events, and history of changing the type or dose of antipsychotic drugs.
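As an illustration of how the relapse definition above can be operationalized against the reviewed records, the following is a minimal sketch (not the authors' code) that classifies patients and computes cumulative yearly relapse rates; the record fields and dates are hypothetical.

```python
# Minimal sketch (not the authors' code): classify relapse vs. remission and
# compute cumulative yearly relapse rates under the paper's definition
# (readmission to a psychiatric unit within 5 years of the first episode).
# Record structure and dates are hypothetical placeholders.
from datetime import date

patients = [
    {"first_episode": date(2010, 3, 1), "readmissions": [date(2011, 6, 5)]},
    {"first_episode": date(2012, 7, 9), "readmissions": []},
    {"first_episode": date(2008, 1, 15), "readmissions": [date(2014, 2, 2)]},  # >5 y: remission
]

def years_to_first_relapse(p):
    """Return years from first episode to first readmission, or None if remission."""
    if not p["readmissions"]:
        return None
    delta_days = (min(p["readmissions"]) - p["first_episode"]).days
    years = delta_days / 365.25
    return years if years <= 5 else None  # readmission beyond 5 years counts as remission

n = len(patients)
for year in range(1, 6):
    relapsed = sum(
        1 for p in patients
        if (t := years_to_first_relapse(p)) is not None and t <= year
    )
    print(f"cumulative relapse rate, year {year}: {100 * relapsed / n:.1f}%")
```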
Statistical methods Descriptive statistics such as percentage, frequency, proportion, mean, and standard deviation (SD) were calculated. Bivariate and multivariate analyses were utilized to identify factors associated with symptom relapse. Among patients in the relapse group, half had residual symptoms (48.8%), more than half reported poor insight (60.7%) and stressful life events (56%), and 19% reported suicidal ideation. There was a statistically significant difference in residual symptoms, insight, stressful life events, and suicidal ideation between the relapse and remission groups (p<0.05) (Table 2). Treatment information For all schizophrenic patients, the median (IQR) length of hospital stay was 17 (12.8, 25) days. The median (IQR) numbers of prescribed oral medication types and tablets were 3 (2, 5) and 5 (3, 8), respectively. Most of the patients self-administered tablets 1-2 times per day (65.4%) via the oral route (88.5%). Two-thirds (64.1%) had no adverse effects and demonstrated good medication adherence (62.2%) (Table 2). About half of the patients (51.3%) had follow-ups with a physician at an interval of every 1-3 months. Among relapse patients, a majority (59.5%) had a history of medication non-adherence. The most common causes of medication non-adherence were patient-related factors (53.6%), such as lack of insight (62.2%) and anxiety about the adverse events of antipsychotic medication (24.4%), and factors associated with the patient's socio-economic status (14.3%). However, a few patients reported non-adherence due to insufficient knowledge, forgetfulness, and feeling stigmatized by having a prescription. There was a statistically significant difference in adverse events, medication non-adherence, and history of prescription changes between the relapse and the remission groups (p<0.001) (Table 2). Additionally, no statistically significant difference was found regarding the length of hospital stay between the relapse and remission groups.
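Adjusted odds ratios of the kind reported below are typically obtained from a multivariable logistic regression. The sketch that follows shows one way such AORs and 95% CIs can be computed; it is an illustration under stated assumptions, not the authors' actual analysis, and the variable names and randomly generated data are hypothetical.

```python
# Minimal sketch (not the authors' code): multivariable logistic regression
# producing adjusted odds ratios (AOR) with 95% CIs for relapse.
# Column names and data are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 156  # sample size reported in the study
df = pd.DataFrame({
    "relapse":          rng.integers(0, 2, n),
    "stressful_events": rng.integers(0, 2, n),
    "non_adherence":    rng.integers(0, 2, n),
    "rx_change":        rng.integers(0, 2, n),
    "poor_insight":     rng.integers(0, 2, n),
})

X = sm.add_constant(df[["stressful_events", "non_adherence", "rx_change", "poor_insight"]])
model = sm.Logit(df["relapse"], X).fit(disp=False)

aor = np.exp(model.params)      # exponentiated coefficients = adjusted odds ratios
ci = np.exp(model.conf_int())   # exponentiated 95% confidence limits
print(pd.concat([aor.rename("AOR"), ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```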
Focusing on the adverse events of antipsychotic drugs, schizophrenic patients mainly reported no adverse effects (64.1%). Furthermore, more than half of all patients received second-generation antipsychotic drugs: pure second-generation antipsychotic drugs (37.2%) and/or a combination of first- and second-generation antipsychotic drugs (18.6%) (Table 2). Extrapyramidal symptoms (EPS) (56.7%) were the most common adverse event among the relapse group. Schizophrenic patients who had received first-generation antipsychotics developed EPS more often than patients taking second-generation antipsychotic drugs (Figure 1 and Figure 2). However, no statistically significant difference in EPS between the relapse and remission groups was detected. Viewing this from a different perspective, adverse events could be related to medication adherence. A statistically significant association between the group of patients who developed adverse events during treatment and a history of poor drug compliance was detected (p<0.001) (Figure 3). The association between demographic and schizophrenic characteristics, treatment, and symptom relapse Multivariate analysis indicated that having stressful life events, medication non-adherence, a history of prescription changes, and lack of insight were all factors statistically significantly associated with symptom relapse. The schizophrenic patients who had stressful life events had a higher rate of relapse than the remission group; the adjusted odds ratio (AOR) was 23.5 and the 95% confidence interval (CI) was 5.2 to 107.1. The same was true for those with medication non-adherence, prescription changes, and poor insight; the AOR (95% CI) was 5 (1.3, 19.7), 10.9 (1.2, 100.9), and 22.6 (4.1, 123.5), respectively (Table 3). Discussion This survey indicated that the first-year relapse rate and the cumulative relapse rate over five years among patients with schizophrenia were 22.4% and 53.8%, respectively. Patients from the relapse group reported lack of insight (60.7%), stressful life events (56%), residual symptoms (48.8%), and suicidal ideation (19%). Moreover, a majority (59.5%) demonstrated medication non-adherence due to patient-related factors (53.6%), such as lack of insight and anxiety about the adverse events of antipsychotic drugs, as well as socio-economic factors (14.3%). The factors with a statistically significant association with relapse risk were stressful life events, medication non-adherence, prescription changes, and a lack of insight. Regarding the rate of relapse, some studies identified that relapse rates vary from 50-92% globally [9,10]. Our study features a lower relapse rate than general international levels: much lower than studies in Birmingham [31] and New York [18], which found first-year relapse rates of 67% and 81.9%, respectively. These findings corroborated previous studies in which patients of Asian ethnicity tended to have lower relapse rates compared to white and Afro-Caribbean patients [31]. This may be because Thai and Asian culture in general is more collectivist, whereas European and American culture tends to be more individualistic [32,33].
In our study, 90.4% of patients stayed with family, and it appeared that they had good family support, without the patients feeling like a burden on the family or experiencing stigma [4,5]. In a previous study in our healthcare setting, 73.2% of the caregivers reported non-severe or nil burden as a result of being with the patient [5]. This information highlighted the collectivist system in Southern Thailand, where there is a whole-family sense of duty to care for the schizophrenic patient as a family member. Furthermore, access to our healthcare system seems easier when compared to other countries [34]. Comprehensive psychoeducation appears to be important in aiding relatives and patients to better understand treatment guidelines, potentially reducing relapse rates and any burden on the family. Regarding factors associated with relapse, our findings support those from previous reports from Australia, as non-adherence to medication and psychosocial stressors were commonly noted as clinical precipitants of relapse [35]. Furthermore, discontinuing antipsychotic drug therapy increased the risk of relapse by almost 5 times [18], and relapse was found to be common after the discontinuation of antipsychotic medication following recovery from the first episode of psychosis. Patients who wished to discontinue their medication needed to be informed of the high relapse rates and the associated risks. Furthermore, male patients who had previous hospital admissions potentially require closer monitoring [36]. However, this study identified no statistically significant difference in gender between the relapse and remission groups. The reason for this might be that most schizophrenic patients stayed with their family and received equal care from their caretakers [4,5]. In addition, this study revealed that psychosocial stressors were a factor noted as a clinical precipitant of relapse. According to guidelines from the National Institute for Health and Care Excellence in the United Kingdom and the Schizophrenia Patient Outcomes Research Team in the United States, psychotherapeutic treatments have the potential to promote therapeutic change in schizophrenia spectrum disorder [37]. Psychotherapeutic treatment is potentially an essential component of the treatment of schizophrenic patients, aiming to reduce suicidal ideation and the impact of stressful life events, which are both precipitating factors.
Regarding the adverse effects of antipsychotic drugs associated with medication non-adherence, optimum dosing with the lowest level of adverse effects would be our paramount concern. Even though the type and route of administration of antipsychotic drugs was not a significant factor in our research, the new generation of antipsychotic drugs has been shown to cause fewer adverse events, especially EPS, as compared with the first generation of drugs. Furthermore, real-world data suggest that long-acting injections tend to reduce the occurrence of EPS and/or neuroleptic malignant syndrome, although other adverse effects carry risks similar to those of oral antipsychotics [38]. According to this information, atypical antipsychotic long-acting injections could be an interesting choice for reducing the risk of adverse events, medication non-adherence, and fluctuations in the concentration of the drug in the system of schizophrenic patients. If a conventional antipsychotic is necessary, balancing optimum dosing to control symptoms with the least amount of side effects should be considered. A shared decision-making process between physician and patient may be one of the main ways to reduce patient distress. In keeping with the low relapse rate shown in this study, Songklanagarind Hospital has continued creating relapse prevention programs and working through various aspects accentuating the quality of life of schizophrenic patients. A previous study identified that most schizophrenia outpatients at Songklanagarind Hospital had good medication adherence and high scores for meaning in their lives. Furthermore, results indicated that the presence of a sense of meaning in life and engaging in activities that made patients' lives feel meaningful and purposeful could reduce social stigma and promote insight. Therefore, most schizophrenic outpatients had good medication adherence, which was associated with their sense of meaning in life [30]. Additionally, a destigmatization program was created through our homestay program to rehabilitate and engage patients by boosting their social skills. This program enables the community and the patient's family to understand the patient's disease, decreasing stigma and making the patient's relationship with society more harmonious [29]. One study found that 62% of schizophrenic patients perceived a low level of stigma and that only 1.8% of patients experienced a high level of stigma in our healthcare setting [4]. All our inpatients with a first episode of schizophrenia would engage in a discussion with our staff, aiming to discover positive meaning in their lives, and would participate in our rehabilitation program. Potentially, the reduction of patient-perceived stigma and the boosting of insight could improve their adherence to the pharmacological component of their treatment plans and reduce levels of distress. Finally, this study was one of the first studies showing that substance use or inadequate knowledge resulted in no increase in relapse risk, not corroborating prior reports from Australia [35]. This may be because our study had a small number of patients with substance use and a relatively small number of patients with insufficient knowledge. Therefore, psychoeducation and co-morbidity prevention for patients with a first episode of schizophrenia should be highlighted accordingly.
This study had a number of noteworthy strengths and limitations. To our knowledge, this is the first study to explore the rate of schizophrenia relapse and its associated factors in southern Thailand. Furthermore, it involved an adequately large participant sample size. Nevertheless, it was a retrospective study reviewing computer medical records, which has limitations and the potential for bias. Furthermore, the study used only quantitative data, and the sample was restricted to schizophrenic inpatients from our hospital rather than including other areas of Thailand. Hence, this dataset may not be representative of schizophrenic patients across the whole of Thailand. Conclusions Our results suggest that half of patients relapse within 5 years of their first diagnosis of schizophrenia and that the factors heightening the risk of relapse were stressful life events, medication non-adherence, prescription changes, and lack of insight by the patient. Although our study has some methodological limitations, the results provide interesting information that can inform additional therapeutic intervention strategies. Furthermore, they emphasize the importance of implementing a multimodality of treatment, such as managing adverse events and reducing the likelihood of relapse by combining optimum dosing, type, and route of antipsychotic drug administration. At the same time, pharmacological strategies should be integrated with effective psychotherapeutic methods aiming to reduce stress, build insight, increase patients' perceived meaning in life, and boost social integration and community acceptance. [Table: Demographic characteristics.] [Figure: Comparison of the adverse events of antipsychotic drugs (EPS vs. other adverse effects) among the relapse and remission groups (N = 44).] [Figure: Comparison of the types of antipsychotic drugs among schizophrenic patients who developed EPS (N = 23).] [Table 3: Factors related to relapse: multivariate analysis.]
2021-10-19T15:40:21.007Z
2021-09-24T00:00:00.000
{ "year": 2021, "sha1": "249d50567077635bd60ee05ca653d862ac1288b7", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-922573/latest.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "c88d7a3b2d4b9ee26546aabcb31eff3dcef6ce2b", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
254446331
pes2o/s2orc
v3-fos-license
PROTOCOL: Exploring education to support vaccine confidence amongst healthcare and long‐term care staff amidst the COVID‐19 pandemic: A protocol for a living scoping review Abstract Despite the demonstrated effectiveness of vaccines, varying levels of hesitancy were observed among healthcare and long‐term care workers, who were prioritized in the rollout of COVID‐19 vaccines due to their high risk of exposure to SARS‐CoV‐2 transmission. However, the evidence around the measurable impact of various educational interventions to improve vaccine confidence is limited. The proposed scoping review is intended to explore any emerging research and experiences of delivering educational interventions to improve COVID‐19 vaccine confidence among health and long‐term care workforces. We aim to identify characteristics of both informal and formal educational interventions delivered during the pandemic to address COVID‐19 vaccine hesitancy. Using the guidance outlined by the Joanna Briggs Institute, we intend to search five databases, including Ovid MEDLINE and Web of Science, as well as grey literature. We will consider all study designs and reports in an effort to include a breadth of sources, ensuring our review will capture preliminary evidence as well as more exploratory experiences of COVID‐19 vaccine education delivery. Articles will be screened by three reviewers independently, and the data will be charted and results described narratively. | INTRODUCTION The vaccination of healthcare workers, especially those who work in long-term care settings, against COVID-19 has been a top priority, given their high risk of COVID-19 exposure and close contact with vulnerable populations (Government of Canada, 2020). Despite the demonstrated effectiveness of currently available vaccines, varying levels of vaccine hesitancy among both healthcare and long-term care workers were observed during the rollout of COVID-19 vaccines (Biswas et al., 2021; Desveaux et al., 2021). | Participants This review will consider only COVID-19 vaccine educational interventions delivered to adult populations and will exclude any education developed and delivered to children or adolescent populations. | Concept This review will focus only on interventions that educated participants about any of the COVID-19 vaccines and the process of receiving any COVID-19 vaccination. We will include studies and reports that describe both informally and formally delivered educational interventions pertaining to any of the COVID-19 vaccines. With respect to both informal and formal delivery of education, we will define education as an informative action, interaction, or intervention. More specifically, we will characterize formal education as education that is guided or systematic (Feng et al., 2017). For example, formal education is described as being introduced through a rigid curriculum and delivered in 'formal institutions' (Feng et al., 2017). Informal education, as defined by Spaan and colleagues, is any unstructured or opportunistic interaction that takes place outside of formal training settings and is 'in the control of the learner' (Spaan et al., 2016). Therefore, we anticipate identifying studies that discuss workshops or presentations, as well as peer-to-peer educative interactions. Studies that describe exclusively educational resources or tools with no clearly described delivery interaction between facilitator and audience or participant will not be included in this review.
These include generic emails, information handouts, or online e-learning requirements, for example. Finally, reports or studies focused on non-COVID-19-related vaccines will also be excluded. | Context As COVID-19 is a global pandemic, this review will include studies and reports outlining COVID-19 vaccine educational interventions from anywhere in the world. However, we will exclude any national or mass vaccination campaigns, as well as any prompting or messaging interventions, as our focus is to better understand the delivery of the education itself and the potential for any interactions between the facilitator and the participants. In addition, we want to ensure that any of the educational characteristics we capture in this review are reproducible at an institutional, hospital, or long-term care level and would allow opportunities for engaged learning at the time of delivery. | Types of sources Studies and reports included in this review will be identified through a comprehensive search of various electronic databases, grey literature sources, and reference scanning of relevant studies. All study designs and papers will be considered, including primary research studies, systematic reviews, case studies, short research reports, editorial comments, rapid communications, letters to the editor, and opinion papers. We chose to include a breadth of sources to ensure our review could capture preliminary evidence as well as any documented experiential or narrative reports of COVID-19 vaccine education delivery. | Search strategy In consultation with an experienced medical information scientist, we will develop and test the search strategies through an iterative process in consultation with the review team. The MEDLINE strategy will be peer reviewed by another senior information scientist; the proposed MEDLINE strategy appears in Supporting Information: Appendix 1. As this is a living review, we are planning to update the search at 6 months (midway mark) and 1 year following the initial search. Any updates to the search will be done in consultation with experts and our information scientist to ensure it is appropriate and has the potential to meaningfully contribute to the literature. | Study selection Studies will be selected according to pre-defined criteria, as stated above. Three researchers (ACR, MM, AR) will review all results from the relevant searches as per PRISMA Extension for Scoping Reviews (PRISMA-ScR) protocols (Tricco et al., 2018). Bibliographic screening will also be used to identify any additional relevant publications. Titles and abstracts will be screened independently by three researchers. Full texts of potentially eligible papers will be retrieved and screened, and bibliographies of selected studies will be screened for relevant studies missed during the search process. Any screening conflicts that arise will be discussed between researchers ACR, MM, and AR; should they be unable to reach a consensus, another member of the research team (AH) will be brought in to make the final decision. Data will then be extracted from the studies selected for inclusion. | Data extraction Our approach to data extraction and organization was informed by … | Data analysis and presentation Following data extraction, the reviewers will discuss the findings of each study, highlighting any commonalities as well as differences between the included studies.
Data will be displayed in a tabular format, accompanied by a narrative summary that describes how the data relate to the scoping review objectives and research questions. Emphasis will be placed on any findings that relate directly to the health and long-term care workforces. In addition, we will highlight any lessons learned from other populations and contexts that could be applied to COVID-19 vaccine education interventions intended for health and long-term care staff. Furthermore, we will identify study characteristics and any gaps in the literature that could merit future exploration. As is common practice in scoping reviews, we plan to include stakeholder consultation throughout our review and will share our findings with our long-term care partners in Ontario. Finally, a PRISMA flow diagram will be used to outline the screening process of the academic literature. | POTENTIAL IMPACT Presently, research and evidence around the measurable impact of various education strategies to improve vaccine confidence are limited. We anticipate this scoping review will contribute to this gap by identifying important characteristics and concepts that could merit future exploration and study. More specifically, while the studies and reports included in this review use a variety of methods, both with and without primary data sources, we anticipate being able to map out how research is being conducted in this area, understand the challenges faced, and identify any existing knowledge gaps. Furthermore, while COVID-19 vaccines have been mandated for health and long-term care workforces in some jurisdictions (Ontario Government, 2021; Paterlini, 2021), the COVID-19 context continues to evolve and vaccine requirements continue to be modified. Thus, a better understanding of how to support COVID-19 vaccine confidence and uptake among these workforces continues to be vital. Finally, these findings could also contribute to education developed for any future booster vaccinations, new COVID-19 vaccines, or non-COVID-19 vaccines. DECLARATION OF INTEREST Vivian Welch is editor in chief and interim CEO of the Campbell Collaboration. She was not involved in the editorial process or decision to publish for this manuscript. No other conflicts of interest to declare. Internal Sources: None
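As a rough illustration of one routine step in the screening workflow described above, the sketch below de-duplicates citation records pulled from multiple databases before title and abstract screening. It is not the review team's code, and the record fields are hypothetical.

```python
# Minimal sketch (not the review team's code): de-duplicating citation records
# exported from multiple databases before title/abstract screening.
# Record fields are hypothetical placeholders.
import re

records = [
    {"title": "Vaccine confidence education for LTC staff", "doi": "10.1000/x1", "source": "MEDLINE"},
    {"title": "Vaccine Confidence Education for LTC Staff.", "doi": "10.1000/x1", "source": "Web of Science"},
    {"title": "A COVID-19 vaccine workshop evaluation", "doi": None, "source": "MEDLINE"},
]

def norm_title(title):
    """Lowercase and strip punctuation/whitespace so near-identical titles match."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

seen, unique = set(), []
for rec in records:
    key = rec["doi"] or norm_title(rec["title"])  # prefer DOI; fall back to normalized title
    if key not in seen:
        seen.add(key)
        unique.append(rec)

print(f"{len(records)} retrieved, {len(unique)} after de-duplication")
```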
2022-12-09T16:24:02.275Z
2022-12-01T00:00:00.000
{ "year": 2022, "sha1": "8cafb278b807f01646c8b457867cea74fe7108ee", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "596f3032cdedef28768e8e4d9577bee661f8d18d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
220970832
pes2o/s2orc
v3-fos-license
Analysis of Tools Used in Assessing Technical Skills and Operative Competence in Trauma and Orthopaedic Surgical Training Background: Robust assessment of skills acquisition and surgical performance during training is vital to ensuring operative competence among orthopaedic surgeons. A move to competency-based surgical training requires the use of tools that can assess surgical skills objectively and systematically. The aim of this systematic review was to describe the evidence for the utility of assessment tools used in evaluating operative performance in trauma and orthopaedic surgical training. Methods: We performed a comprehensive literature search of MEDLINE, Embase, and Google Scholar databases to June 2019. From eligible studies we abstracted data on study aim, assessment format (live theater or simulated setting), skills assessed, and tools or metrics used to assess surgical performance. The strengths, limitations, and psychometric properties of the assessment tools are reported on the basis of previously defined utility criteria. Results: One hundred and five studies published between 1990 and 2019 were included. Forty-two studies involved open orthopaedic surgical procedures, and 63 involved arthroscopy. The majority (85%) were used in the simulated environment. There was wide variation in the type of assessment tools in use, the strengths and weaknesses of which are assessor- and setting-dependent. Conclusions: Current technical skills-assessment tools in trauma and orthopaedic surgery are largely procedure-specific and limited to research use in the simulated environment. An objective technical skills-assessment tool that is suitable for use in the live operative theater requires development and validation, to ensure proper competency-based assessment of surgical performance and readiness for unsupervised clinical practice. Clinical Relevance: Trainers and trainees can gain further insight into the technical skills assessment tools that they use in practice through the utility evidence provided. Within an educational paradigm shift toward competency-based measures of performance in surgical training 1 , there is a need to evaluate surgical skills objectively and systematically, and hence, there is a drive toward developing more reliable and valid measures of surgical competence [1][2][3] . Several surgical skill-assessment tools are currently in use in orthopaedic training, and studies evaluating the ability of these tools to objectively measure surgical performance have been performed. To our knowledge, this is the first systematic appraisal of the evidence for these assessment tools. It is imperative that the modernization of surgical curricula be supported by evidence-based tools for assessing technical skill and to enable summative judgments on progression through training and readiness for unsupervised operating. The aim of this systematic review was to evaluate the orthopaedic surgical-competency literature and report on the metrics and tools used for skills assessment in trauma and orthopaedic surgical training; their utility with respect to validity, reliability, and impact on learning; and evidence for strengths and weaknesses of the various tools.
Materials and Methods This review was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines 4 and registered with PROSPERO (International Prospective Register of Systematic Reviews) 5 . Data Sources We performed a comprehensive literature search of the MEDLINE, Embase, and Google Scholar electronic databases. The search strategy was developed by collating keywords from an initial scoping search (Table I). Categories 1, 2, and 3 were combined using Boolean "AND/OR" operators, and results were limited to human subjects. No date or language limits were applied. The last search was performed in June 2019. Duplicates were removed, and retrieved titles were screened for initial eligibility. Study Selection Eligible for inclusion were primary empirical research studies assessing postgraduate surgical resident performance in open or arthroscopic orthopaedic surgical skills in a simulated or live operative theater environment. Nonempirical studies and those that focused solely on patient or procedural outcome, or only described a training intervention, were excluded. A deliberately broad search strategy was employed to capture all studies in which an orthopaedic surgical skill was assessed. Title and Abstract Review The search identified 2,305 citations. Initial title screening was undertaken by 1 author (H.K.J., a doctoral researcher), with studies that were obviously irrelevant excluded. One hundred and eighty-seven abstracts subsequently underwent screening by 2 authors (H.K.J. and A.W.C., an attending surgeon), and 106 were retrieved in full text. Of these, 105 were included in the final review (1 study was excluded at full-text review as the participants were not surgical residents). Studies were rejected at screening if they were not empirical research, if the study participants were undergraduates, or if nontechnical skills were being assessed; studies reporting simulator protocol development or validation were also excluded at this stage. The reference lists of full-text articles were examined for relevant studies, and those found by hand searching were subject to the same eligibility screening process. Data Extraction and Analysis Data items relevant to the review objectives were extracted into a structured form to ensure consistency. The first reviewer undertook data extraction for all studies. Extracted data included study aim, setting, assessment format, number and training stage of participants, skills assessed, assessment tool and/or metrics, assessment tool category, study results, and "take-home" message related to the assessment tool. Assessment tools were classified by the type of method; the following categories were defined: traditional written assessments, objective assessment of technical skill, procedure-specific rating scale, individual procedural metrics, movement analysis, psychomotor testing, and subjective assessments. Search Results One hundred and six articles were evaluated in detail, 1 of which was excluded at full-text review because the participants were not surgeons-in-training; 105 articles were therefore included in the review. The flow of studies is shown in Figure 1.
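As a loose illustration of how keyword categories can be combined with Boolean operators in the manner described above, the sketch below builds a query string from three hypothetical categories. It is not the authors' actual search strategy, and all keywords are placeholders.

```python
# Minimal sketch (not the authors' strategy): combining keyword categories
# with Boolean OR within a category and AND between categories.
# The keywords below are hypothetical placeholders.
categories = {
    "specialty":  ["orthopaedic*", "orthopedic*", "trauma surgery"],
    "skill":      ["technical skill*", "operative competence", "surgical performance"],
    "assessment": ["assessment tool*", "rating scale*", "OSATS"],
}

def or_block(terms):
    """Join terms in one category with OR, quoting multi-word phrases."""
    quoted = [f'"{t}"' if " " in t else t for t in terms]
    return "(" + " OR ".join(quoted) + ")"

query = " AND ".join(or_block(terms) for terms in categories.values())
print(query)
# e.g. (orthopaedic* OR orthopedic* OR "trauma surgery") AND ("technical skill*" OR ...) AND (...)
```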
Study Aims, Setting, and Participants The studies were broadly split into 3 categories: studies measuring the impact of a simulation training intervention (26 studies 6-31 ), studies assessing the construct validity of a simulator designed for training surgeons (42 studies 32-73 ), and studies validating an assessment tool (37 studies) (see Appendix Tables 1 and 2). The skills assessed included … and various open reduction and internal fixation (ORIF) procedures for fractures of the forearm (7 studies), ankle (2 studies), and tibia (1 study), and complex articular fractures (1 study). Elective hand procedures, including trigger-finger release (1 study) and carpal tunnel decompression (3 studies), and elective hip (1 study) and knee (1 study) arthroplasty were also assessed. The majority (85%) assessed skills in the simulated setting, 10 studies assessed skills in the live operative theater, and 10 studies assessed skills in both the simulated and live operative theater. Overall, 2,088 orthopaedic resident participants were involved in the studies, with experience level ranging from PGY (postgraduate year) 1 to 10. Assessment Format The assessment format varied considerably (see Appendix Tables 1 and 2, column 3). Fifty-nine studies assessed performance using live observation, and 50 used post-hoc analysis of video footage by experts. Simulator-derived metrics were used in 72 studies. Final-product analysis by expert assessors was used for 3 studies, and biomechanical testing of the final product was used in 7. Assessment Tools or Metrics A wide variety of assessment tools were used (see Appendix Table 3). Traditional assessments, such as written examinations, were used in 5 studies. Objective assessment of technical skills was widely used and took many forms: task-specific checklists (20 studies), global rating scales (19 studies), and novel objective skills-assessment tools for both arthroscopy (22 studies) and open surgery (6 studies). Procedure-specific rating scales were used for both arthroscopic (7 studies) and open procedures (6 studies). Individual procedural metrics, such as final-product analysis and procedure time, were used in 56 studies. Movement analysis using simulator-derived metrics, such as hand movements, gaze tracking, hand-position checking, and instrument speed and path length, was used in 22 studies. Psychomotor testing using commercial dexterity tests was used in 5 studies. Subjective assessment measures were used in 4 studies. Quality Assessment Van Der Vleuten described a series of utility criteria, known as the "utility index," which is a widely accepted framework for assessing an evaluation instrument 111 . The features of the utility index are described in Table II. Each assessment tool was appraised for utility; the evidence for each of the various technical skills-assessment tools in current use is summarized according to the utility index criteria (see Appendix Table 3, columns 5 to 11). There was a wide spread of utility characteristics among the different tools, and their heterogeneity precludes any formal analysis. The strengths and limitations of the respective tools are presented in Appendix Table 3, columns 3 and 4.
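To make the distinction between checklist items and Likert-anchored global-rating domains concrete, the sketch below scores a hypothetical five-domain global rating scale for two raters and computes crude agreement statistics. It does not reproduce any specific validated instrument discussed in this review; all domain names and ratings are invented, and formal utility studies would use ICC or similar reliability statistics instead.

```python
# Loose illustration (not any specific validated instrument): scoring a
# hypothetical 5-domain global rating scale (1-5 Likert anchors) for two
# raters and checking simple inter-rater agreement. All values are invented.
import numpy as np

domains = ["instrument handling", "respect for tissue", "flow of procedure",
           "knowledge of instruments", "use of assistants"]

rater_a = np.array([4, 3, 4, 5, 3])
rater_b = np.array([4, 4, 4, 5, 2])

total_a, total_b = rater_a.sum(), rater_b.sum()
print(f"rater A total: {total_a}/{5 * len(domains)}")
print(f"rater B total: {total_b}/{5 * len(domains)}")

# Crude agreement checks; formal studies would report ICC or weighted kappa.
exact_agreement = np.mean(rater_a == rater_b)
within_one = np.mean(np.abs(rater_a - rater_b) <= 1)
print(f"exact agreement: {exact_agreement:.0%}, within one point: {within_one:.0%}")
```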
Discussion Robust assessment of competency and operative skill in trauma and orthopaedic surgery is a topical issue in training. The primary goals of surgical-competency assessment are to provide a platform for learning through feedback, to make summative judgments about capability and progression through training, to maintain standards within the profession, and ultimately, to protect patients from incompetent surgeons 1 . To our knowledge, this review is the first comprehensive analysis of the tools currently available for assessing technical skill and operative competency in trauma and orthopaedic surgical training. The results show that none of the tools currently used for assessing technical skill in orthopaedic surgical training fulfill the criteria of Norcini et al. for effective assessment 112 . There is a similar deficiency of utility evidence in technical skills-assessment tools in general surgery 113 and vascular surgery 1 , which face the same challenges as trauma and orthopaedics in moving toward a competency-based approach to training 1 . Checklists and global rating scales were commonly used tools for technical skills assessment in the review studies (see Appendix Table 3). Checklists deconstruct a task into discrete steps and may have educational value for teaching procedural sequencing to novice residents. They do not capture the quality of performance, and the rigid binary scoring does not allow deviation resulting from there possibly being >1 acceptable way of undertaking a procedure. Another disadvantage of checklists is an early ceiling effect 1 . Checklists do have the advantage of being able to be administered by nonexpert assessors, and judgment on performance can be made either live or from video footage. They also can be used in both the simulated and live theater environment. They show reasonable construct validity 68,77,96,98 , concurrent validity 37,77,96,102,103 , and reliability 37,88,114 . With their limitations in mind, checklists are perhaps most appropriate for novice learners in a formative setting 1 . Global rating scales use generic domains with a Likert-type scale and descriptive anchors to capture the quality of performance 61,66,93 . They are generalizable between procedures and can be used to assess complex procedures when there is >1 accepted method. They can discriminate between competent and expert performance, and there are many studies demonstrating their content 17,96 and concurrent validity 17,77,85,96,98,103 and their reliability 17,37,66,96 . They require expert surgeon evaluators, are more time-consuming to administer, and may be susceptible to assessor bias, as domains of assessment such as instrument handling and respect for tissue are inherently quite subjective. The ability of global rating scales to distinguish between all levels of performance and the absence of a ceiling effect make them useful for high-stakes, summative assessment 1 and the assessment of advanced residents. Several novel objective assessment tools have been developed that combine task-specific checklists with a global rating scale.
The most promising frontrunners among these are the Arthroscopic Surgical Skill Evaluation Tool (ASSET) 36,37,77 , which combines a task-specific checklist with an 8-domain global rating scale with end and middle descriptive anchors, and the Objective Structured Assessment of Technical Skills (OSATS) tool 23,93 (see Appendix Table 3). While the ASSET is obviously restricted to arthroscopic procedures, both have a growing body of evidence across all domains of the utility index (Table II). The hybrid approach of combining a task-specific checklist and a global rating scale into 1 assessment tool enables the strengths of both to be brought together within a single tool, but such tools have the disadvantage of becoming long and burdensome to complete, which negatively impacts their feasibility and acceptability in a busy workplace in which training assessment conflicts with service pressures. The OSATS tool is in current use in training programs in obstetrics/gynecology 115 and ophthalmology 116 and is popular with residents 117 . It captures the quality of performance and can distinguish competence from mastery, and the stages of progression in between. Several studies in this review demonstrated the validity, reliability, feasibility, and educational value of the OSATS tool in trauma and orthopaedics in the simulated setting (see Appendix Table 3, columns 5 to 11). Further work is required to assess its utility in the live operative theater. There are a variety of procedure-specific rating scales that have been developed for both open 21,32,58,70,99,118 and arthroscopic 7,76,81,82,90,92 procedures (see Appendix Table 3). Most are in the early stages of validation and are likely to be most useful in the research setting. They are not practical for the live workplace environment given the variety of procedures that are undertaken within a typical training rotation; a generic tool that may be applied to the assessment of all procedures is more feasible. Motion analysis (see Appendix Table 3) is also promising for assessing technical skill, particularly in arthroscopy, and several studies in this review demonstrated its utility 6,13,31,34,41,50,66,74,75,86 . Its use to date has been largely restricted to the research setting, and further work on transfer validity and potential educational impact is required. Some of the obvious barriers, such as sterility concerns, have been mitigated by using elbow- instead of hand-mounted sensors in the live operative theater 31 . Hand-motion analysis can generate a sophisticated data profile that can detect subtle improvement in surgical performance and may be able to measure the attainment of mastery. Other motion parameters, such as gaze tracking 6 and triangulation time 74,55 , have demonstrated construct validity and feasibility in the simulated environment but are unlikely to be useful in the live operative theater, as most of these measurements are derived from the simulator itself. Individual procedural metrics can also be used to assess technical skill (see Appendix Table 3). Final-product analysis provides an objective assessment of final product quality, from which technical proficiency is inferred. Examples include tip-apex distance in DHS fixation 58,62 , screw position 22,30,59,71,95 , and articular congruency 73,93 . Orthopaedics has the advantage of the routine use of intraoperative and postoperative radiographs from which relevant, real-life final-product analysis metrics such as implant position can easily be measured.
Final-product analysis is objective and quite easy and efficient to perform. A nonspecialist assessor (who has been appropriately trained) can make the measurements. In the simulated setting, invasive final-product-analysis measures, such as biomechanical testing of a fracture construct, can be used to assess procedural success. Final-product analysis is appealing as it relates technical performance to real-world, clinically relevant measures of operative success. Conclusions regarding the construct validity of final-product analysis are, however, rather mixed, with almost as many studies refuting its construct validity 59 as supporting it. Procedure time was extensively used as a procedural metric to assess technical skill in the included studies. It is easy to measure in both the simulated and in vivo setting. It relies on the intuitive assumption that speed equates to proficiency. This is potentially problematic, as extrinsic patient and staff factors beyond the surgeon's immediate control could influence procedure time, and it gives no indication of quality of performance; procedure time may be measured as fast because the surgeon was a masterfully efficient operator, but alternatively they may have rushed the procedure and been careless. The evidence for construct and concurrent validity for procedure time is mixed, with many studies showing it can discriminate between experience levels 6 and others showing it cannot. Both final-product analysis and procedure time are therefore unlikely to be useful in isolation, but rather could be used as adjunctive measures of technical proficiency. Limitations This review is limited to the assessment of technical skills in trauma and orthopaedic surgery; the assessment of nontechnical skills for surgeons was not considered in our analysis. Nontechnical skills are undoubtedly an essential dimension of surgical competence and are rightly beginning to receive attention in the surgical education literature 119 . The perfect technical skills-assessment tool is therefore never going to be usable in isolation to comprehensively assess competence, but rather should form a key part of a battery of evidence-based assessment tools. Implications and Recommendations There is growing dissatisfaction with the current technical skills-assessment tools within the surgical education community 105,120 , and an increasingly urgent need to develop an evidence-based assessment tool that is generalizable to the broad range of technical and nontechnical skills in trauma and orthopaedic surgery, and that satisfies the utility criteria. The Procedure Based Assessment, which is the main tool currently used for high-stakes assessment in the U.K. training system, is lengthy to complete, comprising 40 to 50 tick boxes and 12 free-text spaces 105 . It was initially implemented prior to any formal validation beyond an initial consensus-setting (Delphi) process to define the domains 105,121 . Several years after its introduction, a large, pan-surgical-specialty validation study was undertaken 109 , with a particular focus on demonstrating the reliability of the rating scales 105 . Within this study, orthopaedics appears underrepresented, with the totality of the procedure-based assessment-validity evidence relating to 2 orthopaedic procedures involving 7 residents.
Subsequent validation work, using more traditional frameworks in general and vascular surgery, has demonstrated that the Procedure Based Assessment is a valid and reliable measure of performance 105 and responsive to change 105 , but there remains a deficiency of evidence for its utility in orthopaedics, which is surprising given that it is the current gold-standard assessment in the U.K. training system (see Appendix Table 3). Adding to the problem, engagement with the Procedure Based Assessment has been poor 105 , and it remains unpopular 120 . A national survey of trauma and orthopaedic resident attitudes toward procedure-based assessments (PBAs) in the U.K. found that more than half agreed or strongly agreed with the statement "completing PBAs is nothing but a form-filling exercise," 60% agreed or strongly agreed that there are "barriers to the successful use of PBAs by residents" 120 , and only one-third believed that they should be used for high-stakes assessment in training, such as the Annual Review of Competence Progression 120 . Further work has found that the reasons for the poor engagement are that the Procedure Based Assessment is burdensome to complete; that, with a coarse rating scale of blunt, binary descriptors, it cannot distinguish mastery or higher-order skills; and that it results in general assessment fatigue 105 . The Procedure Based Assessment was among the earliest formal tools for technical skills assessment in orthopaedic surgical training, and its creators deserve recognition for beginning the process of objectively assessing the technical skills of surgeons-in-training. We propose, however, that the Procedure Based Assessment is no longer appropriate for use in summative assessment in a modern, competency-based training environment. The OSATS tool and the ASSET show promise as replacements for the Procedure Based Assessment, and validation work on these, with a particular focus on their use in the live operative theater, should be continued.
Conclusions
The evidence for the utility of the technical skills-assessment tools currently used in trauma and orthopaedic surgical training is inadequate to support their use in summative, high-stakes assessment of competency. An assessment tool that is generalizable to the broad range of technical and nontechnical skills relevant to trauma and orthopaedics, that satisfies the utility criteria, and that is cost-effective and feasible requires development.
Appendix
Supporting material provided by the authors is posted with the online version of this article as a data supplement at jbjs.org (http://links.lww.com/JBJSREV/A611).
2020-06-25T09:05:06.118Z
2020-06-01T00:00:00.000
{ "year": 2020, "sha1": "45bb54756e702f515edb0a01fcf616ed129c848d", "oa_license": "CCBY", "oa_url": "https://journals.lww.com/jbjsreviews/Fulltext/2020/06000/Analysis_of_Tools_Used_in_Assessing_Technical.14.aspx", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "742958b1504d98ded68acf0ae456b9ba119c8a5e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
88105396
pes2o/s2orc
v3-fos-license
Vitamin D Status of School Children in and around Guwahati
Context: Peripubertal and adolescent children are vulnerable to vitamin D deficiency as this is the period of rapid skeletal growth. Aims: This study was done to assess the vitamin D status in school children between the ages of 8 and 14 years attending government schools in rural and urban areas of Assam in Northeast India. Settings and Design: This was a cross-sectional observational study. Materials and Methods: About 500 students (350 from rural and 150 from urban areas) were recruited into the study. Serum 25-hydroxy vitamin D [25(OH)D], parathyroid hormone (PTH), calcium, phosphorus and alkaline phosphatase were measured in the fasting state. Daily nutrition intake and sunlight exposure were assessed. Statistical Analysis: Student's t-test and Pearson correlation test were done to assess the association between different variables. P value <0.05 was considered significant. Results: The prevalence of vitamin D deficiency was 8.4% and vitamin D insufficiency was 14.2%. There was no significant difference in mean 25(OH)D levels or sun exposure between rural and urban children. Out of 42 children with vitamin D deficiency, 36 (85.7%) had sun exposure <20% and 41 (97.6%) had calcium intake < 1000 mg/day. The rural children had a higher calcium intake as compared to urban children (P = 0.005). There was a significant positive correlation of mean 25(OH)D levels with serum calcium, sun exposure and calcium intake. Conclusion: The prevalence of vitamin D deficiency in peripubertal and adolescent age group children in and around Guwahati city of Assam is comparatively lower than that in other parts of the country.
These students, with their parents, were asked to come to school on a Sunday, where the parents were interviewed regarding diet, sun exposure, any underlying illness that might interfere with vitamin D synthesis, and any intake of drugs or vitamin supplements. Exclusion criteria were children with comorbid conditions that affect vitamin D synthesis and metabolism, such as diseases of the skin, liver, kidney and gastrointestinal system. Children on vitamin D supplementation and on drugs including multivitamins, anticonvulsants, steroids, thyroxine, anti-tuberculosis treatment and antimetabolites were also excluded from the study. The study was approved by the Institutional Ethical Committee. Another approval was sought and granted by the Indian Council for Medical Research (ICMR), and the study was funded by the ICMR. A team from the Department of Endocrinology of Gauhati Medical College and Hospital, Assam, went to each school after a prior intimation. Written consent was obtained from the parents of all students who agreed to participate in the study. Detailed history and body measurements were taken, and almost all students were accompanied by their mothers for clarity in answering questions. Data were collected on their food habits, physical activity and sun exposure. Height and weight were measured, and body mass index (BMI) was calculated. The parents were asked to provide a rough estimate of the average daily time their child spent under the sun in the past month. Physical activity was evaluated by asking the parent to provide a rough estimate of their child's level of physical activity (active play, running, jumping, climbing, football, basketball or other activities). Sunlight exposure was assessed by documenting the average duration of exposure and the percentage of the surface area of the body exposed daily.
The dressing manner of the subjects was unrestricted and was therefore not considered. The percentage of body surface area exposed was assessed by the Wallace rule of nines. Sunshine exposure was calculated as Sunshine exposure (%) = hours of exposure/day × percentage of body surface area exposed. [4] The children were examined for clinical features of vitamin D deficiency such as widening of the wrist, double malleolus, genu varum, genu valgum, proximal muscle weakness and leg pain. Puberty was assessed using Tanner's classification. [5] Fasting blood samples were taken for serum 25-hydroxy vitamin D [25(OH)D], serum calcium (Ca), albumin, phosphorus, alkaline phosphatase and intact parathyroid hormone (PTH). The samples were separated in a refrigerated centrifuge at 1200 rotations/minute for 15 min at 4°C and were divided into five aliquots and stored at −20°C until analysed. Serum 25(OH)D was measured using radioimmunoassay (RIA) according to the manufacturer's protocol. The sensitivity of this assay is 2.5 ng/ml. The intra-assay coefficient of variation (CV) is 9.1% at 22.7 ng/ml. Vitamin D deficiency was defined as 25(OH)D levels of <20 ng/ml and insufficiency was defined as 25(OH)D levels from 21 to 29 ng/ml, as per the Endocrine Society Clinical Practice Guidelines. [6] Serum PTH was measured by immunoradiometric assay (IRMA) according to the manufacturer's protocol (N-TACT PTH IRMA kit, USA) in an automatic gamma counter (Gamma 10, version 2.0). The normal range of PTH is 13-54 pg/ml. Serum calcium was measured by the photometric test, and normal values of calcium were taken as 8.1-10.4 mg/dl. Serum phosphorus was measured using a photometric UV test, and the normal values of phosphorus were taken as 4-7 mg/dl. Serum alkaline phosphatase was measured using the orthophosphoric monoester phosphohydrolase method. Serum albumin was measured by the photometric colorimetric test method, and corrected calcium was calculated using the formula: Corrected Ca = (4 − albumin) × 0.8 + estimated calcium. Dietary nutrition intake was assessed by estimating the average composition of the daily diet in terms of calcium from a 24 h recall of the food intake, utilising published data on the nutritive value of Indian foods. [7]
Statistical analysis
Statistical software SAS 9.3 was used to analyse the data. Quantitative data are presented as mean ± standard deviation. Student's t-test was performed on different variables to compare between two groups. Association of 25(OH)D with different variables was analysed with the Chi-square test, and the pattern of correlation was determined by the Pearson correlation. A P value < 0.05 was considered statistically significant.
Results
Five hundred children in the age group of 8-14 years were included in the study, of which 350 children were from the rural area and 150 children were from the urban area. The age distribution of the study subjects is shown in Table 1. Out of 500, 246 (49.2%) were males and 254 (50.8%) were females. The height, weight and body mass index (BMI) of the subjects were within the normal range for age and sex. Children from the urban area had greater height, weight and BMI than their rural counterparts, whereas rural children had significantly higher sun exposure [Table 2]. There was no significant difference in serum calcium, phosphorus and alkaline phosphatase levels between the rural and urban groups.
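The simple arithmetic used in this protocol can be expressed directly in code. The sketch below is only an illustration (function names and example values are ours); it covers the sunshine-exposure index, the albumin-corrected calcium, and the 25(OH)D cut-offs quoted above.

```python
def sunshine_exposure(hours_per_day: float, body_surface_exposed_pct: float) -> float:
    """Sunshine exposure (%) = hours of exposure/day x % body surface area exposed
    (surface area estimated with the Wallace rule of nines)."""
    return hours_per_day * body_surface_exposed_pct

def corrected_calcium(measured_ca_mg_dl: float, albumin_g_dl: float) -> float:
    """Albumin-corrected calcium (mg/dl): (4 - albumin) x 0.8 + measured calcium."""
    return (4.0 - albumin_g_dl) * 0.8 + measured_ca_mg_dl

def vitamin_d_status(level_25ohd_ng_ml: float) -> str:
    """Cut-offs used in the study (Endocrine Society): deficiency <20 ng/ml,
    insufficiency 21-29 ng/ml, sufficiency >=30 ng/ml; values from 20 up to
    30 are treated as insufficient here."""
    if level_25ohd_ng_ml < 20:
        return "deficient"
    return "insufficient" if level_25ohd_ng_ml < 30 else "sufficient"

# Hypothetical child: 1.5 h/day of sun on about 18% of the body surface,
# serum calcium 8.9 mg/dl with albumin 3.6 g/dl, 25(OH)D 24 ng/ml.
print(sunshine_exposure(1.5, 18.0))     # 27.0
print(corrected_calcium(8.9, 3.6))      # 9.22
print(vitamin_d_status(24.0))           # insufficient
```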
The mean 25(OH)D levels in both the rural and urban children were within the normal range of ≥30 ng/ml, and there was no significant difference in 25(OH)D levels between the rural and urban areas. The mean levels of serum PTH of the children were within the normal range of 13-54 pg/ml. Significantly higher sun exposure was seen in rural children of the 12-13.99 years age group in both males and females. Rural females of the 8-9.99 years age group also had significantly higher sun exposure than urban females of the same age group [Table 3].
Discussion
Our study is the first to address the problem of vitamin D deficiency in the North-East region of the country in healthy adolescent and pre-adolescent children. The adolescent group deserves special consideration, as this is the period of accrual of peak bone mass, and because of the increased demands of accelerated skeletal growth, vitamin D deficiency is likely to occur. There is a dearth of large-scale community-based studies to assess vitamin D status in the adolescent age group. The first of the two large studies was done by Marwaha et al. in 2005 on 5137 healthy school children of urban New Delhi. [8] The study included children from lower and upper socioeconomic classes, and children from the lower socioeconomic group had significantly lower 25(OH)D levels. The second study, by Puri et al. in 2008 on 3217 healthy school girls of Delhi, demonstrated clinical vitamin D deficiency in 11.5% and biochemical hypovitaminosis D (serum 25-hydroxy vitamin D <50 nmol/l) in 90.8% of girls. [9] Our study did not consider socioeconomic status and included 500 children from government schools of rural and urban regions. The prevalence of vitamin D deficiency and vitamin D insufficiency in our study was found to be overall 8.4% and 14.2%, respectively. There was no difference in mean 25(OH)D levels between boys and girls. The prevalence of vitamin D deficiency was lower in our study compared with other parts of India, as documented in various studies. [8][9][10] Mandlik et al., in their study in Maharashtra on school children between the ages of 6 and 12 years, reported a prevalence of vitamin D insufficiency of 71% despite a sun exposure of 2 h. [10] This low prevalence of vitamin D deficiency in our subjects in Northeast India may be attributed to several factors. The first reason is the geographical location, which ensures adequate sunshine throughout the year. The average duration of cloud-free sunshine in this region is around 8-10 h per day, which translates into ample sunshine reaching the population throughout the year. [11] The most common Indian skin type has been found to be type V followed by type IV, and the estimated minimal erythemal dose (MED) for UVB for Indian skin is 61.5 ± 17.25 mJ/cm 2 . [12] However, these data are from southern India. The northeast Indian population is now considered to originate from central and eastern Asia (e.g. Korean, Chinese, Japanese or Filipino roots) according to the new six genetico-racial categories, which is predominantly Fitzpatrick skin type III. [13,14] Fitzpatrick skin type III has been shown to have a lower MED compared with types IV-VI. [12] Since winter is short here, there is little seasonal variation in the peak intensity of sunlight in northeast India. Moreover, our study was conducted in a different region, at a latitude of 26°11ʹ0ʹʹ North and longitude of 91°44ʹ0ʹʹ East, and therefore the prevalence of vitamin D deficiency found in our study reflects the status unique to this region.
Secondly, the predominantly non-vegetarian diet of the people in this region may have contributed to the low prevalence of vitamin D deficiency. There is daily intake of eggs, meat (pork, chicken and lamb) and fish, which are natural sources of vitamin D. The vitamin D content of meat is approximately 23 µg/kg for various meat products. [15] The vitamin D deficient group had lower serum calcium and phosphorus and higher alkaline phosphatase. They also had lower calcium intake and sun exposure. We found no significant difference in mean 25(OH)D levels or mean total sun exposure between children in the rural and urban areas. None of the subjects reported applying sunscreen. The rural children had a significantly higher calcium intake. This is because the rural children have better access to fresh vegetables as well as dairy products than the urban children. However, the mean calcium intake in both rural and urban children was lower than 1000 mg/day. The recommended dietary allowance (RDA) of calcium for children aged 8-14 years ranges from 600 to 800 mg/day. [6,16] In the study by Harinarayan et al. in Andhra Pradesh, dietary calcium intake was lower in rural areas than urban areas and dietary phytate content was higher in rural areas, but there was no significant difference in 25(OH)D levels between rural and urban children, which is in concordance with our study. [17] In our study, the urban children had a significantly higher BMI than rural children. This may be attributed to the lack of open space and less physical activity, resulting in greater weight in urban children. However, the mean weights of all the subjects were within normal limits for their age and sex. [18] Of the vitamin D deficient children, 54.76% had a BMI <18.5 kg/m 2 , whereas 42.85% had a normal BMI of 18.5-24.9 kg/m 2 . We also examined the relationships of 25(OH)D with BMI, serum calcium, calcium intake, sun exposure, serum alkaline phosphatase and PTH. Significant positive correlations of 25(OH)D levels with serum calcium, sun exposure and calcium intake were found. Significant negative correlations were seen between 25(OH)D levels and BMI, alkaline phosphatase and PTH. The finding is in accordance with the study of Puri et al., who found a significant correlation of vitamin D levels with sun exposure. [9]
Conclusion
In our study, the prevalence of vitamin D deficiency was 8.4% and that of vitamin D insufficiency was 14.2% in apparently healthy adolescent and preadolescent school children in and around the city of Guwahati in North-Eastern India, which is comparatively lower than the prevalence found in other parts of the country and reflects the status unique to our region. There was no difference in mean 25(OH)D levels between the rural and urban populations in the study.
Financial support and sponsorship
ICMR.
Conflicts of interest
There are no conflicts of interest.
2019-03-31T13:32:42.767Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "dbf4ecd72ce6cfb2af5956b700191a1b76a9e27f", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.4103/ijem.ijem_552_18", "oa_status": "GOLD", "pdf_src": "WoltersKluwer", "pdf_hash": "d95c3acbb9b19697c6f6d9a53a938a6467efce81", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
15156266
pes2o/s2orc
v3-fos-license
An Examination of a New Psychometric Method for Optimizing Multi-Faceted Assessment Instruments in the Context of Trait Emotional Intelligence Driven by the challenge of representing and measuring psychological attributes, this article outlines a psychometric method aimed at identifying problem facets. The method, which integrates theoretical and empirical steps, is applied in the context of the construct of trait emotional intelligence (trait EI), using data from six different samples (N = 1284) collected across Europe. Alternative representations of the trait EI variance, derived from the outcome variables used in previous validation studies of the Trait Emotional Intelligence Questionnaire, were regressed on the 15 trait EI facets using the stepwise method. The analyses revealed five facets, which did not occupy unique construct variance in any of the six samples. As expected, a composite of the remaining 10 facets consistently showed greater construct validity than the original 15-facet composite. Implications for construct and scale development are discussed, and directions for further validation of the method and for its application to other constructs are provided. © 2014 The Authors. European Journal of Personality published by John Wiley & Sons Ltd on behalf of European Association of Personality Psychology. Examining the literature of an individual-differences construct, one often finds a diversity of measures, with an overall abundance of facets. Even individual measures composed of a fairly large number of facets are quite common. In some cases, the arrays of facets used to represent the same construct diverge considerably (in quantity and/or types), and correlations between their composites are weak or moderate (e.g. Baer, Smith, Hopkins, Krietemeyer, & Toney, 2006;Brackett & Mayer, 2003). It is then difficult to accept that all measures reflect the same underlying attribute to a similar degree. This rather messy state reflects the lack of adequate criteria for defining psychological constructs, which are only indirectly inferable and measurable (Cronbach & Meehl, 1955). Thus, researchers have noted that there is considerable uncertainty in determining the set of facets and models from which the composite representative of the targeted attribute can be derived (e.g. Petrides & Furnham, 2001). The present article describes and applies a new psychometric method for developing and optimizing multi-faceted measurement instruments. Because scale development goes hand-in-hand with the development of construct representations (e.g. structural models), it also has implications for the latter. The method is intended to supplement the contemporary theoretical and empirical approaches to scale construction, by targeting 'problem' facets detrimental to construct validity. It thereby aims to minimize the plethora of facets through which constructs are often represented. The basic principle of the method is to identify problem facets based on their inability to occupy a unique part of the target construct's variance. It uses an alternative representation of the construct to assess whether a measure's facets fulfil this general criterion. Prior to describing the method in detail, it is necessary to specify its unique focus and explain how it supplements existing test construction methods. We then proceed with a brief review of the construct of trait emotional intelligence (trait EI), on which the method will be applied in the present investigation. 
Similar to definitions commonly used in the literature (Costa & McCrae, 1995;Smith, Fischer, & Fister, 2003), we use the term facet to refer to a variable representing a narrow and highly homogenous subset of affective, behavioural, or cognitive tendencies associated with a given construct. Facets are interrelated and define the hypothetical domain of a construct; their common variance is conceptualized as representing the construct of interest. We use the term factor to designate a variable that subsumes the common, construct-related variance of several facets. Factors provide a mid-level between facets and the latent construct, serving to organize the facets into subcategories and providing the basis for subscales. Rationale and focus: Redundant and extraneous facets The psychometric literatures of numerous constructs suggest that the contemporary scale-construction approaches lack efficacy in screening out a considerable number of problem facets. This is not particularly surprising, because their primary goal is to identify relevant content and build structural models, rather than to optimize and refine construct representations. In short, we argue that the contemporary psychometric approaches lack utility in identifying problem facets and thereby contribute to the inflation in the number of facets often seen in the literature. Further, we are convinced that this limitation plays a salient role in the diversification of measures. Defining problem facets We specify here three criteria a variable should meet in order to qualify as a useful facet of a higher-order construct. First, facets must tap into a homogenous set of psychological processes, situated at the same ontological level. Essentially, this means that a facet represents a set of proximate manifestations of the construct, rather than some distant outcome, indirectly associated with the construct (e.g. number of friends or romantic partners, highest level of education achieved, or age of death), or even an antecedent of the construct (e.g. parenting style). Second, a facet should share a non-negligible amount of variance with the other facets. Modest correlations between facets, or weak loadings of individual facets on the latent composite, may be due to untargeted sources, such as other constructs or response biases. However, although often taken as such, the common variance is insufficient as the sole empirical criterion for the validity of facets. A third criterion is that a facet should occupy a unique portion of the variance attributed to the construct it is theorized to represent (i.e. common variance not covered by other facets). This last criterion is the main focus of the method presented here. As regards the second and third criteria earlier, two types of problem facets can be operationally defined. We refer to them as extraneous and redundant facets (hereafter abbreviated as ET and RD facets, respectively). The best way to describe these facets is with respect to their component variance, as graphically illustrated in Figure 1. Facets can have two types of variance: reliable common variance, which is due to the target construct and shared with the other facets, and reliable specific variance, which is unrelated to the target construct (Smith et al., 2003). ET facets have no common variance at all (i.e. variance due to the target construct); their variance is due to dimensions other than the one reflecting the target construct, thus likely violating the second criterion. 
As indicated, however, ET facets may still share variance with valid facets, because of measurement bias or dimensions other than the target construct. Although RD facets have common (construct) variance, this variance is more efficiently covered by at least one other facet. Therefore, RD facets do not occupy 'unique common variance' and do not add to the comprehensive representation of the construct (Criterion 3). Both types of problem facet compromise the construct validity of a model or set of facets. RD facets lead to an unbalanced representation of the target construct's variance by over-representing some of its manifestations, while ET facets result in representations that extend beyond the target construct's boundaries, representing expressions of other, non-targeted dimensions. At the empirical level, both are prone to compromising the validity of the global composite derived from the facet scores. Neither is uniquely representative of the target construct, and neither is, hence, likely to occupy a distinctive portion of its variance vis-à-vis the other facets. When combined into a global composite, the effects of predictive facets are averaged out with those of the non-predictive facets (Smith et al., 2003). Consequently, the correlations of their composite with construct-relevant outcomes are lower than those of a composite encompassing exclusively predictive facets. Because ET facets stretch the variance of the composite thought to represent the target construct into other dimensions, they also impose construct-unrelated variance on the composite.
Limitations of contemporary psychometric approaches
The existing methods have been classified as the deductive, inductive, and external approaches (Burisch, 1984) or, alternatively, as the rational-theoretical, internal consistency, and criterion-keying approaches, respectively (Burisch, 1984; Simms & Watson, 2007). Although the rational-theoretical approach encompasses the largest number of specific methods (e.g. content analysis, focus groups, and evidence-oriented methods), coming up with an optimal representation of the construct based on theory and reasoning alone is virtually impossible. Items or facets that appear to be conceptually relevant may not represent variance attributable to the target construct. Furthermore, as discussed, even thematically and empirically related facets may not represent a unique aspect of the construct relative to the other facets within the model. The internal consistency approach subsumes the range of variations and applications of factor analysis. However, this approach cannot identify RD facets, because it does not reveal whether a facet occupies a unique part of the construct variance not already covered by one or more of the other facets. In fact, RD facets are likely to have inflated factor loadings, leading to overrepresentations of certain manifestations of the construct and their variance within the total composite. Further, although this approach may reveal many ET facets, it cannot identify them reliably. Factor loadings depend on the facets in the model being tested. If a set of facets represents the construct weakly, ET facets are more likely to load on the latent composite. Also, ET facets are particularly likely to be retained where low cut-offs are used, which is a problem given that there are no agreed-on guidelines regarding the magnitude of factor loadings and communalities at which one should retain facets (Gignac, 2009).
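A small simulation (our own illustration, with arbitrary parameters, not part of the original argument) makes this concrete: a redundant facet can look perfectly respectable in terms of its correlation with the composite, an extraneous facet contributes nothing but noise, and averaging both into the composite weakens its correlation with a construct-driven outcome.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
construct = rng.normal(size=n)                          # latent target construct
valid = [construct + rng.normal(scale=1.0, size=n) for _ in range(4)]
redundant = valid[0] + rng.normal(scale=0.3, size=n)    # duplicates facet 1's construct variance
extraneous = rng.normal(scale=1.5, size=n)              # no construct variance (bias ignored here)
outcome = construct + rng.normal(scale=1.0, size=n)     # construct-relevant criterion

comp_valid = np.mean(valid, axis=0)
comp_all = np.mean(valid + [redundant, extraneous], axis=0)

r = lambda a, b: np.corrcoef(a, b)[0, 1]
print("RD facet vs valid composite:", round(r(redundant, comp_valid), 2))   # high "loading"
print("ET facet vs valid composite:", round(r(extraneous, comp_valid), 2))  # near zero
print("valid composite vs outcome: ", round(r(comp_valid, outcome), 2))
print("full composite vs outcome:  ", round(r(comp_all, outcome), 2))       # typically lower
```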
In contrast to the internal consistency approach, in which items or facets are selected based on their interrelationships, criterion-keying selects variables based on their ability to predict relevant external criteria. A variable's predictive ability has relevance for the identification of RD and ET facets, as these should not occupy any unique variance linked to the target construct. However, a widely discussed shortcoming of this approach is the lack of a rational-theoretical component, because test items are selected from large item pools based on their predictive ability alone. Moreover, criterion-keying is restricted to attributes for which people at the low or high extreme can be identified fairly objectively (e.g. extraverts and introverts, narcissists, and people identified as having a particular disorder). For many constructs, it is difficult to classify individuals unambiguously, because there is no shared agreement about what people at the extremes are like, which relates back to the conceptual ambiguity of these constructs. Variants of these traditional approaches, or altogether different approaches focused on either construct testing or scale development, have emerged in more recent years (Chen, Hayes, Carver, Laurenceau, & Zhang, 2012; Costa & McCrae, 1995; Hull, Lehn, & Tedlie, 1991; Smith et al., 2003). However, none of these addresses the problem of identifying RD and ET facets, which is the focus of the proposed method for optimizing assessment instruments outlined in this article.
Description of new method
The psychometric method we propose here is intended to complement the existing scale-construction approaches, by helping to identify RD and ET facets. It is, thus, especially useful if one deals with 'fuzzy' constructs that lack consensual definitions. Presently divided into five broad steps, the method seeks to identify RD and ET facets based on their inability to occupy a unique part of the target construct's variance. As discussed, the common, construct-based variance of RD facets is already occupied by other facets, whereas ET facets do not overlap with the target construct. Consequently, both types of facet compromise, rather than enhance, the representation of the construct. A basic premise of the method is that a variable representing the construct variance comprehensively can be derived from a source other than the construct's measurement vehicle. If such a variable can be extracted, it could be used as a benchmark to examine whether each of the hypothetical facets occupies a unique portion of the construct variance. Of course, sufficiently broad variables needed to represent the variance of most constructs do not pre-exist (Epstein, 1984). Individual outcome variables that are theoretically influenced by the target construct and commonly used to assess its criterion validity are unlikely to reflect its entire impact comprehensively. Moreover, they cannot be expected to represent the construct variance exclusively, and therefore, using multiple individual outcomes for the purpose of representing the construct would not be a reasonable solution. Because of the specific variance that these criteria would bring into the equation, there would be an increased chance of seeing predictive effects of ET facets and, to a lesser extent, RD facets.
Step 1
While using individual or multiple validation criteria is not instrumental for identifying RD and ET facets, a single variable that is representative of the target construct's variance should be defined by the shared variance of construct-relevant outcomes. Using latent composites of these outcome variables therefore appears to be a reasonable and practical solution to capturing the variance of a given construct comprehensively (hereafter, we use the term outcome-based composite to refer to variables representing the shared variance of construct-relevant outcomes). This composite can then be used to assess whether each of the hypothetical facets occupies unique construct variance. Thus, Step 1 is to obtain a comprehensive sample of construct-relevant outcomes with common variance representative of the target construct. Naturally, Step 1 also involves administering the chosen set of outcomes along with a comprehensive and multi-faceted measure of the target construct to multiple samples. Selecting outcome variables has a strong theoretical component, involving a systematic sampling process. Various approaches to selecting comprehensive sets of outcome variables are conceivable, although in general, it seems safest to rely on proximate outcomes (i.e. variables representing affect, behaviours, cognition, and desires) that share the general theme of the construct and correlate in the expected direction with it. More indirectly related outcomes increase the chances of significant incremental effects of ET facets. While it may be impractical to administer a representative sample of measures to a single sample of participants, it would be legitimate to spread out the measures across samples to ensure that all parts of the construct variance are represented. The number of measures per sample would then depend on the total number of measures needed to represent the construct variance and on how many measures one can reasonably administer to each sample without compromising the validity of the responses. Ideally, one would randomly assign outcomes corresponding to each empirically or theoretically derived higher-order factor across samples to ascertain that their common variance is representative of the target construct.
Step 2
In Step 2, one extracts the first principal component from the chosen set of criteria, because it is, in theory, the one that is representative of the target construct's variance. Divergent outcome variables, specifically those that have low loadings on the first principal component and that mostly vary because of sources other than the target construct, can be readily identified and excluded. The method can thereby account for and, to some extent, resolve inconsistencies in researchers' conceptualizations of the target construct and in the outcomes they deem relevant.
Step 3
Step 3 of the method examines whether each of the facets occupies a significant portion of variance in the derived outcome-based composite. Facets that consistently fail to account for variance in this composite are likely to be redundant or extraneous and should be excluded from the set of facets used to represent the construct. The most straightforward statistical procedure for this purpose is to regress the outcome-based composite on the theoretical set of facets, using statistical regression (also referred to as the stepwise method) to remove facets, while starting with all hypothetical facets at the initial step.
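Step 2 lends itself to a short sketch before the regression step is developed further below. This is only an illustration of the principal-component extraction, assuming a standard scikit-learn workflow; the function and variable names are ours, not the authors'.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def outcome_based_composite(outcomes: pd.DataFrame):
    """First principal component of the standardized outcome variables,
    returned together with each outcome's loading on that component."""
    z = StandardScaler().fit_transform(outcomes)
    pca = PCA(n_components=1).fit(z)
    scores = pca.transform(z)[:, 0]
    loadings = pca.components_[0] * np.sqrt(pca.explained_variance_[0])
    if loadings.mean() < 0:            # the sign of a component is arbitrary
        scores, loadings = -scores, -loadings
    return (pd.Series(scores, index=outcomes.index, name="composite"),
            pd.Series(loadings, index=outcomes.columns, name="loading"))

# Outcomes loading weakly on this component (e.g. below .50 and more strongly
# on later components) would be dropped and the composite re-extracted.
```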
Stepwise regression is the appropriate algorithm in this instance, as it both removes and adds predictors. Facets will be removed from the analysis successively if they do not explain unique variance in the criterion. In this process, RD and ET facets may initially suppress the (significant) effects of valid facets and lead to their removal at initial steps. Yet, the stepwise method re-enters facets removed from the analysis at preceding steps if they regain a significant explanatory effect at later steps (i.e. upon removal of problem facets with suppressor effects). High intercorrelations among predictors are generally considered problematic, because they can compromise the explanatory effects of individual predictors (Pedhazur, 1997). However, in conjunction with the systematic removal of facets via stepwise regression, the method advanced here capitalizes on this principle in order to identify RD facets. Essentially, it means that highly correlated predictors are likely to explain the same variance in the criterion, rendering some as redundant. The method is sufficient for identifying facets that do not occupy a significant part of the construct variance represented by the outcome-based composite (ET facets should not occupy any construct variance, irrespective of the presence of other facets).
Step 4
Step 4 of the method involves a comparison of the composite excluding any facets that were consistently non-predictive of the outcome-based composites (i.e. ET and RD facets) against the original composite comprising all facets. These two composites are compared in their degree of convergence with the outcome-based composite derived at Step 2. Using a composite of all facets averages predictive and non-predictive facets, and the correlation of this composite with the outcome-based composite should be weaker than that of a composite encompassing predictive facets only (see Smith et al., 2003, for a more detailed discussion of this effect).
Step 5
Zero-order correlations of the identified non-predictive facets with the revised composite can be examined in a final step (Step 5) to distinguish between RD and ET facets. RD facets should show substantial zero-order correlations with this composite, whereas ET facets should not.
Trait emotional intelligence
A construct of contemporary interest that illustrates the challenge of representing constructs is EI. Much has been said about what constitutes EI, as is apparent from the diversity of EI models and operationalizations. The divergence of research into the two increasingly distinct subareas of trait EI and ability EI has brought some structure into the field. Petrides and Furnham (2001) pointed to the fundamentally distinct nature of constructs based on typical performance, the predominant measurement method in the EI literature, as compared with those that are based on maximum performance. But even when taking the split between typical-performance and maximum-performance measures into consideration, substantial discrepancies in how the construct is represented via structural models and arrays of facets remain across measures (cf. Dulewicz, Higgs, & Slaski, 2003; Jordan & Lawrence, 2009; Petrides & Furnham, 2001; Salovey, Mayer, Goldman, Turvey, & Palfai, 1995; Schutte et al., 1998; Tapia & Marsh, 2006; Tett, Fox, & Wang, 2005); the construct boundaries are far from agreed upon.
Trait EI has provided a framework for reconceptualizing self-report measures of EI initially supposed to assess cognitive emotional abilities, which they are hardly able to measure (Freudenthaler & Neubauer, 2007; Paulhus, Lysy, & Yik, 1998). However, the distinction between ability and trait EI goes beyond mere operational differences in response format. For example, self-report measures based on Mayer and Salovey's (1997) four-branch ability EI model do not seem to measure trait EI comprehensively, as evidenced by their relatively weak construct validity compared with instruments developed to measure trait EI specifically (Gardner & Qualter, 2010; Martins, Ramalho, & Morin, 2010). By definition, trait EI refers to a compound trait located at the lower levels of personality hierarchies that integrates the affective aspects of personality (Petrides, Pita, & Kokkinaki, 2007); it does not encompass emotion-related skills or abilities. Trait EI is also conceptually distinct from the construct of social intelligence, irrespective of the method of measurement and conceptualization of trait versus ability. Whereas the former concerns primarily emotional aspects of personality, the latter reflects how people interact with others (e.g. Petrides, Mason, & Sevdalis, 2011). Of course, this does not preclude overlap in their sets of facets, because many specific attributes integrate social and emotional qualities (e.g. aggression, assertiveness, and empathy) and, thus, may be linked to both constructs. The key point is that these abstract and difficult-to-define constructs are fundamentally distinct in their core. One would find considerably more emotional/affective facets within a measure of trait EI and more social/interpersonal facets in a measure of trait social intelligence.
Present study
This study will examine the utility of the psychometric method described in the introduction and illustrate its application. Specifically, the method will be applied to the construct of trait EI, as operationalized through the Trait Emotional Intelligence Questionnaire (TEIQue; Petrides, 2009). The TEIQue was designed to assess the construct of trait EI comprehensively and has hitherto produced very promising results in terms of construct validity (Freudenthaler, Neubauer, Gabler, Scherl, & Rindermann, 2008; Gardner & Qualter, 2010; Martins et al., 2010). Its theoretical set of 15 facets was determined through a content analysis of existing measures, retaining only those facets that were common across salient EI models. This unique approach captured the consensus among the existing models and measures, possibly yielding a more accurate representation of the target construct than other models. Evidence attesting that the TEIQue facets satisfy minimum standards for factor loadings has accumulated across translations of the measure (e.g. Freudenthaler et al., 2008; Martskvishvili, Arutinov, & Mestvirishvili, 2013; Mikolajczak, Luminet, Leroy, & Roy, 2007). Although the model underlying the TEIQue has withstood the test of time, it is possible that some of the numerous facets from which it derives are redundant or extraneous. In this preliminary examination of the proposed method, we used data gathered in previous psychometric studies of the TEIQue, including some of its translations (six samples in total). The data from each sample included measurements of various construct-relevant outcomes.
This approach was deemed appropriate for this initial investigation, as the criteria assessed across these samples were diverse and representative of the four TEIQue factors. The principal components from the outcomes assessed in each of the samples were extracted in order to provide alternative representations of global trait EI (Step 2 of the method). These outcome-based composites were then regressed onto the 15 trait EI facets to identify any non-predictive facets. A composite comprising facets with predictive effects in any one or more of the six samples was compared with the original 15-facet composite in terms of its associations with the six criterion-based composites. Facets that did not occupy unique variance in any of the outcome-based composites were further examined to classify them as redundant versus extraneous.
Samples and outcomes
The data came from five cross-sectional studies (six samples), in which the criterion validity of the TEIQue across different sets of outcomes was investigated. We selected the samples based on their relevance to the present investigation, as they comprised thematically related, proximate outcomes. Samples 1, 4, and 5 were Greek, Spanish, and Georgian, respectively, whereas Samples 2, 3, and 6 were British. The demographic characteristics of the six samples are summarized in Table 1. With the exception of Sample 5, additional details for the samples can be found in previously published studies (Gardner & Qualter, 2010; Petrides, Pérez-González, & Furnham, 2007; Petrides, Pita, et al., 2007). The outcome variables are presented in Table 2, together with their corresponding measures. These outcomes are either entirely emotion-laden (e.g. depression, and positive and negative affect) or integrate emotional and social aspects of functioning (e.g. aggression, coping styles, personality disorders, life satisfaction, alcohol-related problems, and loneliness). Importantly, the outcomes considered across all six samples represent each of the four TEIQue factors (Well-Being, Self-Control, Emotionality, and Sociability), as indicated in Table 2. Thus, they are suitable for deriving alternative representations of the trait EI variance, as required in Step 1 of the proposed method.
Measures
All measures in this study were based on self-report, mostly using multiple-point response scales. The TEIQue's four factors and their constituent facets are Well-Being (self-esteem, trait happiness, and trait optimism), Self-Control (emotion regulation, stress management, and low impulsiveness), Emotionality (emotion perception, trait empathy, emotion expression, and relationships), and Sociability (assertiveness, emotion management, and social awareness). Two facets (adaptability and self-motivation) have not been included in any of the four factors but contribute directly to the global score. More detailed descriptions of the facets and factors can be found in Petrides (2009). The TEIQue items are responded to on a 7-point Likert scale, ranging from 1 (disagree completely) to 7 (agree completely). Internal consistencies at the facet level were predominantly within a range of .70 to .80 across studies. Cronbach's alphas for global trait EI ranged from .81 (Sample 5) to .96 (Sample 6).
Outcome variables
A summary of the outcome measures and references can be found in Table 2. The measures administered to Sample 1 were translated by the authors who conducted the study. For Samples 4 and 5, the outcomes were assessed with available translations of the measures.
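The internal consistencies reported for these measures are Cronbach's alphas. For reference, the coefficient can be computed from a respondents-by-items score matrix as in the generic sketch below (our own illustration, not tied to the present data).

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# e.g. cronbach_alpha(responses) where responses has one row per respondent
# and one column per item.
```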
Sample 1. The Satisfaction with Life Scale (Diener, Emmons, Larsen, & Griffin, 1985) consists of five items that yield a global life satisfaction score (e.g. 'In most ways my life is close to my ideal') measured on a 7-point Likert scale. Cronbach's alpha in this sample was .84. The 14-item rehearsal subscale from the Emotion Control Questionnaire (Roger & Najarian, 1989) was used as a measure of rumination (e.g. 'I remember things that upset me or make me angry for a long time afterwards'). Items are responded to on a 7-point Likert scale. Cronbach's alpha was .84. The Center for Epidemiologic Studies Depression Scale (Radloff, 1977) is a 20-item measure of depressive symptomatology, specifically developed for use in non-clinical settings. Respondents indicate how frequently they experience a range of depressive symptoms during the past week (e.g. 'I was bothered by things that usually don't bother me'). Items are responded to on a 4-point Likert scale. Cronbach's alpha was .92. The Dysfunctional Attitudes Scale (Weissman & Beck, 1978) is a measure of depressogenic attitudes and beliefs, based on a cognitive theory perspective and consisting of two parallel 40-item forms. Using a 7-point Likert scale, respondents answer each item according to how they think most of the time. Form A was administered to Sample 2, yielding an alpha level of .87. Sample 3. The Aggression Questionnaire (Buss & Perry, 1992) assesses four distinct types of aggression. It consists of 29 items responded to on a 5-point Likert scale. The four aggression scales, and their respective internal consistencies, are physical aggression (.80), verbal aggression (.69), anger (.80), and hostility (.79). Sample 4. The Positive and Negative Affect Schedule (Sandín et al., 1999;Watson, Clark, & Tellegen, 1988) was used to assess positive and negative affect. Each affective dimension has 10 items that are responded to on a 5-point Likert scale. The alpha level was .89 for positive affect and .85 for negative affect. The second edition of the Beck Depression Inventory (Beck, Steer, & Brown, 1996;Sanz, Perdigón, & Vázquez, 2003) was administered to this sample. It measures the severity of depression and consists of 21 items that are responded to on a 4-point Likert scale. The alpha level was .87. The International Personality Disorder Examination (López-Ibor Aliño, Pérez Urdaníz, & Rubio Larrosa, 1996; Loranger, Janca, & Sartorius, 1997) has a semi-structured interview format aligned to the ICD-10 and DSM-IV criteria. Typically used as a screener, this instrument comprises 77 dichotomous true-or-false items that produce scores representative of 10 distinct personality disorders. Alpha levels were generally low to moderate, ranging from .32 for Schizoid to .67 for Avoidant. Sample 5. The first edition of the Beck Depression Inventory (Beck, Ward, Mendelson, Mock, & Erbaugh, 1961) was administered to Sample 5. Like its successor, which was administered to Sample 4, this edition measures the severity of depression and consists of 21 items that are responded to on a 4-point Likert scale. The alpha level was .81. The State-Trait Anxiety Inventory (Spielberger, Gorsuch, Lushene, Vagg, & Jacobs, 1983) comprises 40 items, which are based on a 4-point Likert scale and represent two types of anxiety: state and trait anxiety. Accordingly, scores can be derived for both state and trait anxiety, which had alpha levels of .85 and .81, respectively. Sample 6. 
The Aggression Questionnaire (Buss & Perry, 1992), as described in Sample 3, was also administered to this sample. The internal consistencies were .71 for physical aggression, .65 for verbal aggression, .66 for anger, and .69 for hostility. The Eating Disorders Diagnostic Scale (Stice, Telch, & Rizvi, 2000) consists of 22 items, 19 of which (items 1-18 and 21) are used to derive the single composite of this scale. One of the 19 items (item 21, addressing amenorrhea) was omitted in order to make the scale suitable for participants of both genders. The measure's items have a mix of Likert-type and yes-or-no response formats. In this sample, the internal consistency was .86. The Self-Administered Alcoholism Screening Test (Hurt, Morse, & Swenson, 1980) consists of 35 dichotomous yes-or-no items, indicative of alcohol-related problems. Its internal consistency in this sample was .76. The Subjective Happiness Scale (Lyubomirsky & Lepper, 1999) consists of four items that are responded to on a 7-point Likert scale. Its internal consistency in this sample was .89. The Satisfaction with Life Scale (Diener et al., 1985) previously described in Sample 1 was also administered to this sample, in which it had an alpha level of .90.
Statistical analyses
The outcome variables corresponding to each sample were submitted to a principal component analysis to derive the outcome-based composites. Outcome variables were included within the respective outcome-based composite if they had loadings either (a) in excess of .50 or (b) of .30-.49 that were greater than their loadings on ensuing components. Conversely, variables were excluded from the analyses if they loaded weakly on the first principal component (<.50) and more strongly on ensuing components. These variables were deemed to be too distinct from the target construct, with additional dimensions implicit in them increasing the chances of predictive effects for ET facets (or for the specific variance of RD facets). The derived outcome-based composites were regressed onto the 15 trait EI facets, using the stepwise method in each analysis. All facets were entered at the first step and subsequently removed successively, starting with the least significant one. Because the stepwise method was used, as required by the method, it was possible for facets already removed to be re-entered at later steps of the analyses. The original composite of all 15 trait EI facets and a composite comprising facets included in the final model in at least one of the six regression analyses were compared in terms of their associations with the outcome-based composites. Facets with significant predictive effects in any of the six samples were included in this composite to account for variations in the outcomes used to derive the outcome-based composites. Steiger's Z tests were computed to examine whether there were significant differences in the correlations of these two composites with the outcome-based composites across samples. To differentiate between RD and ET facets, zero-order correlations of any non-predictive facets with a revised composite comprising the predictive facets only were also examined. In theory, RD facets should correlate significantly with the global construct, whereas ET facets should show correlations closer to zero.
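The selection and comparison machinery just described lends itself to a compact sketch. The following is our own illustration, not the authors' code: a p-value-based stepwise routine with re-entry, and a Steiger's Z for comparing the two composites' dependent correlations with an outcome-based composite. The thresholds, names, and example values are assumptions, and the Z statistic follows one common parameterization of Steiger (1980); results should be checked against a reference implementation such as the R package cocor.

```python
import math
import pandas as pd
import statsmodels.api as sm

def stepwise_facets(composite: pd.Series, facets: pd.DataFrame,
                    p_enter: float = 0.05, p_remove: float = 0.10) -> list:
    """Start with all facets, drop the least significant one while its p-value
    exceeds p_remove, and re-admit a dropped facet if it becomes significant
    once suppressing problem facets are gone."""
    selected = list(facets.columns)
    while True:
        changed = False
        if selected:  # backward step: remove the weakest facet
            pvals = sm.OLS(composite, sm.add_constant(facets[selected])).fit().pvalues.drop("const")
            worst = pvals.idxmax()
            if pvals[worst] > p_remove:
                selected.remove(worst)
                changed = True
        best_p, best_var = 1.0, None  # forward step: possible re-entry of dropped facets
        for cand in [c for c in facets.columns if c not in selected]:
            p = sm.OLS(composite, sm.add_constant(facets[selected + [cand]])).fit().pvalues[cand]
            if p < best_p:
                best_p, best_var = p, cand
        if best_var is not None and best_p < p_enter:
            selected.append(best_var)
            changed = True
        if not changed:
            return selected  # facets never selected in any sample are RD/ET candidates

def steiger_z(r_jk: float, r_jh: float, r_kh: float, n: int) -> float:
    """Steiger's Z for H0: rho_jk = rho_jh, where j is the shared variable
    (the outcome-based composite) and k, h are the two trait EI composites."""
    fz = lambda r: 0.5 * math.log((1 + r) / (1 - r))
    rm2 = ((r_jk + r_jh) / 2) ** 2
    psi = r_kh * (1 - 2 * rm2) - 0.5 * rm2 * (1 - 2 * rm2 - r_kh ** 2)
    cov = psi / (1 - rm2) ** 2
    return (fz(r_jk) - fz(r_jh)) * math.sqrt((n - 3) / (2 - 2 * cov))

# Hypothetical comparison of the 10- and 15-facet composites against one
# outcome-based composite: steiger_z(r_jk=.72, r_jh=.65, r_kh=.97, n=300)
```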
Dimension reduction of outcome variables
Results of the principal component analyses for the outcomes used in each sample are presented in Table 3. The only variable excluded from Samples 1 and 2 was avoidance coping, because it had relatively weak loadings (.14 and −.46, respectively) on the first principal component. It also resulted in bifactorial solutions in the initial analyses, loading considerably higher on the second component. For the same reasons, three personality disorders were removed from the final analysis of the Sample 4 outcomes: schizoid, histrionic, and narcissistic. Their respective loadings on the first principal component were .38, .36, and .24, and lower than their loadings on a second or third component. Two variables, verbal aggression and eating-related problems, were excluded from the Sample 6 outcomes. Their loadings on the first principal component were .32 and .27, respectively, and both loaded much higher on additional components. These seven variables were excluded on the grounds that they were too different from the target construct. With these variables omitted, a latent composite was derived from the remaining variables in Samples 1, 2, 4, and 6. All outcome variables assessed in Samples 3 and 5 were included in their respective composites, as they all loaded highly on a single principal component.
Regression of outcome-based composites on trait emotional intelligence facets
Summaries of the stepwise regression analyses with the outcome-based composites as the dependent variables are presented in Table 4. Because of the large amount of data, we present only results for the initial and final models and beta weights for facets retained in the final model only. While all 15 facets were initially included in the analyses, facets that were not retained in the last step of any of the six regression models are omitted from Table 4. The analyses for Samples 3, 4, and 6 excluded the facet of emotion management, while that for Sample 6 additionally excluded the facets of trait empathy and emotion perception. The reason for omitting these facets is that, when initially included, the direction of their explanatory effect was opposite to those of the other facets in the equations. Full results can be requested from the authors. Of the 15 trait EI facets, five did not explain unique variance in the outcome-based composites in any sample and, thus, do not appear in the final regression models. These facets were trait empathy, emotion perception, emotion expression, emotion management, and social awareness. In addition to being manually excluded from Samples 3, 4, and 6, emotion management did not appear in the final regression models in Samples 1, 2, and 5, based on the stepwise method. Likewise, trait empathy and emotion perception, which were manually removed from the Sample 6 regression, were non-predictive in the other samples. Therefore, neither these three facets nor the two remaining non-predictive facets appear in Table 4. Of the 10 facets showing significant predictive effects, one (stress management) accounted for unique variance in five samples, one (trait happiness) accounted for unique variance in four samples, four (emotion regulation, self-esteem, impulsiveness, and relationships) accounted for unique variance in three samples, two (assertiveness and trait optimism) accounted for unique variance in two samples, and two (self-motivation and adaptability) accounted for unique variance in one sample.
In comparing the additive predictive effects of all 15 facets included in the initial prediction model (shown as Model 1) against the final set of facets remaining in the last step of each regression analysis (shown as Final model), the appropriate statistic to examine is the adjusted R 2 , which can account for the unequal degrees of freedom. As is apparent across all six samples, the shortened sets accounted for virtually the same amount of the variance as the 15-facet composite. Even the unadjusted change in R 2 from the initial to final model was negligible and non-significant in the six samples. As discussed, however, regression analysis does not reveal the impact of non-predictive facets, or of facets with atheoretical, inverted effects, on the explanatory power of higher-order composites, such as global trait EI. For example, the non-predictive facets of emotion expression and trait empathy can be expected to weaken the convergence of global trait EI with the outcome-based composites, because they are averaged along with the predictive facets into the global trait EI score. Hence, two trait EI composites comprising 15 and 10 facets, respectively, were compared in terms of their associations with the outcome-based composites.
Criterion validity of facet-based composites
Pearson correlations of the 15-facet and 10-facet trait EI composites with the outcome-based composites are presented in Table 5. Also shown are Steiger Z tests of significant differences in the convergent validity of the two composites. Except for the latent composite derived from the Sample 3 outcomes, associations of both trait EI composites with the outcome-based composites were consistently strong. Unlike the other samples, in which a latent composite of more diverse emotion-related outcomes was used, the outcome-based composite derived from the aggression variables in Sample 3 was fairly homogenous and narrow and, thus, least representative of trait EI. Correlations of the 10-facet composite with the outcome-based composites were consistently larger than those of the 15-facet composite. In fact, the Steiger Z results indicate that the 10-facet composite had significantly greater convergent validity in all six samples.
Correlations of non-predictive facets with 10-facet composite
Correlations between the five non-predictive facets and the 10-facet composite are shown in Table 6. All correlations were significant, and all except one (emotion management in Sample 3) were within a moderate range of .3 to .7, indicating that the facets are redundant, rather than extraneous.
DISCUSSION
Decades ago, Cronbach and Meehl (1955) noted that there is no adequate criterion for operationally defining personality traits and other psychological constructs, which prompted their concept of construct validity. In the present day, researchers continue to dwell on the level of arbitrariness involved in facet selection (e.g. Petrides & Furnham, 2001). The psychometric method illustrated herein is an effort towards optimizing multi-faceted assessment instruments, including the construct representations on which they are based.
As specified throughout the article, its particular aim is to identify RD and ET facets. The method thereby aims to reduce the plethora of facets through which constructs are often represented and to minimize discrepancies between assessment instruments.
Summary and interpretation of results
Application of the method to trait EI data from six European samples yielded consistent results. Five facets did not explain unique variance in alternative representations of the construct variance, derived from varying sets of validation outcomes administered across the six samples. Removal of these five facets from the global trait EI composite significantly improved its associations with the outcome-based composites in all samples. Collectively, the results indicate that the five non-predictive facets overlap entirely with the predictive facets in their reliable common variance (i.e. variance attributed to the construct of trait EI), apparently compromising the construct validity of the global trait EI composite. It seems that the revised 10-facet composite gives a better representation of trait EI than the original composite. The trait EI facets identified as non-predictive came exclusively from the TEIQue factors of Emotionality and Sociability. Notably, these two factors have shown little success in explaining incremental criterion variance vis-à-vis the other factors in previous research (Mikolajczak, Luminet, & Menil, 2006; Mikolajczak, Roy, Verstrynge, & Luminet, 2009; Mikolajczak et al., 2007; Swami, Begum, & Petrides, 2010; Uva et al., 2010; Siegling, Vesely, Petrides, & Saklofske, accepted). In only one study did one of these two subscales (Sociability) account for incremental criterion variance, predicting somatic symptoms amid stress over mental and physical status, together with the Self-Control subscale (Mikolajczak et al., 2006). However, it is important to remember that individual criteria are unlikely to represent the variance of the target construct very well, and therefore, significant predictive effects of redundant and extraneous elements are possible. Although a similar set of predictive facets is likely to emerge in independent samples and across different outcome-based composites, fluctuations in terms of which facets will have significant effects are still possible. A statistical factor to consider is that facets may emerge as significant or non-significant because of chance. Self-motivation may be such a candidate, as it had a significant incremental effect in only one of the six samples and the regression weight for its effect was very small. Although a scenario of all five presumably RD facets being unrepresented in the outcome variables is highly unlikely, it is also possible that some segments of the construct variance were not represented in the outcomes we investigated. Consequently, facets related to any under-represented construct variance would not have reached significance. While we do not expect large fluctuations in the pattern of predictive facets, repeated applications of the method to trait EI data are encouraged to increase confidence in our findings. It is also important to validate the revised composite in independent samples and sets of criteria that have not been previously used to identify non-predictive facets. Empirically, RD and ET facets fail to occupy unique construct variance, thereby compromising the construct validity of the global composite.
RD facets share the same common variance with one or more of the other facets, giving disproportionate weight to particular segments of the construct variance. ET facets lie wholly beyond the target construct's boundaries, thus lacking common variance (i.e. their variance is due to constructs other than the one targeted). Neither of these types of facet is, therefore, able to take up unique variance in the global construct, thus weakening the construct validity of the model that incorporates them and of its operational vehicles. Overall, the results provide preliminary evidence for the efficacy of the proposed method in identifying RD facets, because all of the non-predictive facets seemed to fall into this category. At least in theory, it should also screen out facets that are completely extraneous and somehow found their way into the researcher's model.
Implications of method
Subject to further validation, the method has utility in the optimization of multi-faceted assessment instruments. As discussed, a unique strength of the proposed strategy lies in its potential to identify RD or ET facets, which conventional approaches do not accomplish. Identification and eventual removal of RD and ET facets would help improve the construct validity of measures to which the method is applied. Similarly, the method has promise in enhancing the unidimensionality or homogeneity of scales intended to assess a single construct, the importance of which has been discussed in detail elsewhere (Smith, McCarthy, & Zapolski, 2009). On a larger scale, the method would contribute to minimizing the inflation of facets and diversification of measures. Beyond minimizing research costs, optimizing the scale-construction process by integrating this method can lead to more valid conclusions about constructs, especially at the earlier stages of research. Without applying the method, a model or measure may comprise ET or RD facets and, thus, have weaker construct validity. Naturally, these facets would also compromise the various specific and empirically testable aspects of construct validity (criterion, predictive, discriminant, etc.) pertaining to the measure or model being scrutinized. By applying the method first in order to gain construct validity, it would be possible to assess and understand the construct's relationships with other constructs and outcomes more accurately. If thoroughly applied, the method would entail realistic benefits for psychology's applications, particularly where quantitative assessment is involved. On a general level, it would enhance the professional and social utility of a range of standardized measures, enabling more accurate assessments of individuals and prediction of their future behaviour. Failing to represent and measure a construct adequately can have consequences, given that psychometric assessment often forms the basis of high-stakes decisions, such as clinical diagnoses, career selection, and people matching. Another benefit of identifying, and eventually removing, RD and ET facets is the reduced length of psychometric measures and shorter assessment times without trade-offs (Smith et al., 2003). In view of these benefits, the method would ideally be integrated at the early stages of scale construction. For constructs that already have an established operationalization, the method can be used either to refine these measures or, should non-predictive facets not emerge, to increase confidence in them and their underlying models.
Recommendations and projected developments Particularly when used to construct a new measure, the method should be applied in combination with the established methods, of which one (the rational-theoretical approach) is even a pre-requisite. Furthermore, as indicated throughout the article, it may be wisest to consider the method as an ongoing process, whereby repeated application across samples of participants and outcomes will increase certainty in the identification of RD and ET facets. Beyond the method per se (described here as a five-step procedure), another worthwhile step would be to cross-validate the results, by comparing the revised and original composites in samples of criteria not used during application of the method. Future developments of the method are foreseeable with regard to two of its five steps. The first concerns the process of selecting and testing outcomes for deriving alternative representations of the construct variance at Steps 1 and 2. We anticipate that with theoretical development and repeated application of the method, more specific examples and guidelines for outcome selection will emerge. Second, while the statistical procedures employed in this article (particularly at Step 3) can identify RD and ET facets, they are of limited utility in examining the relative proportions that the remaining facets occupy within the construct variance, because of intercorrelations among predictors. However, new approaches, such as relative weight analysis (Johnson, 2000), may be able to estimate the relative common variances occupied by facets at Step 3. This information would provide insight into the centrality of the different valid facets and further our understanding and conceptualization of the construct. Last, while the generic problems associated with stepwise regression algorithms are of lesser threat to the proposed method, given its multi-sample and replication requirements, additional adjustments may be reasonable (e.g. accounting for chance effects by using different p-value cut-offs or effect size estimates). Limitations and future directions Further validation of the proposed method with respect to other personality constructs is needed to provide definitive evidence for its efficacy. Once a satisfactory level of support has been established, it would be worthwhile to demonstrate that the method also has efficacy within the realm of cognitive abilities, as can be expected. Whereas this article presents the initial application of the method, based on existing data, future studies designed specifically for its evaluation can yield more conclusive results. However, this is not to undermine the utility and relevance of using existing datasets, as the method requires evidence from numerous and relatively large samples. We encourage others who have suitable data (ideally, from multiple samples) to replicate the analyses we performed here and publish the results. In designing future studies specifically for applying the proposed method, it will be important to sample systematically from the entire theoretical range of relevant outcomes to represent the variance of the target construct as comprehensively as possible. A second question to be addressed in further validation studies of the method is whether using the same measurement format for all variables introduces confounding effects in favour of the method. Measuring the outcomes in the same way as the hypothetical facets creates common-method variance (e.g. 
social desirability), which may contribute to the pattern of results. Alternative methods (i.e. other than self-report) for assessing outcomes relevant to trait EI and other personality constructs include informant ratings, behavioural observations, electronic diaries, and possibly biodata. Converging evidence from applications of the method across various outcome-based composites will eventually help us arrive at a consensus regarding the best set of facets for representing established, yet still partially elusive, individual-differences constructs.
Facial Nerve Palsy as Complication in COVID-19 Associated Mucormycosis: A Case Series
Mucormycosis is an opportunistic fungal infection with a high mortality rate. Among the six varieties of involved sites, rhino-cerebral mucormycosis (RCM) is the most common. During the COVID-19 pandemic, the incidence of mucormycosis rose along with the increase in predisposing conditions. Early diagnosis allows aggressive treatment and can reduce morbidity and mortality. Clinically, RCM presents with non-specific symptoms and signs, delaying diagnosis. It is associated with orbital cellulitis and sinusitis, one-sided headache behind the eye, diplopia, blurring of vision, nasal congestion, rhinorrhea, epistaxis, nasal hypoesthesia, facial pain and numbness, and a history of black nasal discharge. Complications involving the cranial nerves have rarely been reported. In the present case series, three presentations of facial nerve palsy in COVID-19-associated mucormycosis are added to the literature.
Introduction
During the current pandemic of COVID-19 (coronavirus disease 2019), a large number of manifestations and complications have emerged, including fungal infections, and are being reported in the literature. Fungal infections have lately become a matter of concern in patients diagnosed with COVID-19. Mucormycosis is an opportunistic fungal infection with a high mortality rate caused by saprophytic fungi (Phycomycetes, Zygomycetes, Mucoraceae). Rhizopus oryzae is the most common fungus isolated from patients with mucormycosis and is responsible for 70% of all cases [1]. It is frequently found in soil, plant residue, spoiled food, and the upper respiratory tract of the healthy host. It becomes pathogenic when predisposing factors cause an immunocompromised state with a decreased ability to phagocytize, most commonly observed in diabetes (60-80%), hematological neoplasms, chemotherapy-induced neutropenia, and the use of deferoxamine therapy. The incidence rate of mucormycosis varies from 0.005 to 1.7 per million population and the case fatality rate is 46% [2]. On the basis of site of involvement, the disease may be classified into six different forms: rhino-cerebral, pulmonary, cutaneous, gastro-intestinal, orbital, and disseminated, of which rhino-cerebral mucormycosis (RCM) is the commonest type, accounting for about 30-50% of cases [3]. Clinical presentation of RCM includes nasal stiffness, headache, retro-orbital pain, orbital swelling, and rarely facial palsy. Early diagnosis of RCM is difficult. The incidence of facial nerve palsy in uncontrolled diabetic patients with mucormycosis is 11% [4]. Also, several neurological syndromes, like anosmia/ageusia, encephalitis, encephalopathy, cerebrovascular complications, myelitis, Guillain-Barré syndrome, and facial palsy, are among the neurological complications manifested in a significant proportion of COVID-19 patients [5]. Facial palsy has been reported in the literature during the clinical course of the infection or as its first symptom in these patients [6,7]. The present case series intends to raise awareness of this unusual presentation of facial nerve palsy in RCM. Additionally, it aims to explore the pathophysiology of facial nerve palsy in mucormycosis.
Case 1
A 48-year-old man presented with the chief complaint of swelling on the left side of his face for 20 days.
On history, the patient had been hospitalized for 15 days after reverse transcription-polymerase chain reaction (RT-PCR) testing was positive for COVID-19, and he was found to have an elevated blood glucose level of 351 mg/dl during treatment. Extraoral examination revealed facial asymmetry, and facial palsy involving the left side of the face was present. Clinical features of facial palsy included drooping of the corner of the mouth, absence of wrinkles in the left half of the forehead, and incomplete closure of the left eye, with barely perceptible motion of the left side of the face. These findings were consistent with House-Brackmann grade V unilateral facial nerve palsy (Figure 1). Intraorally, necrotic bone was noted in the mandibular anterior region and left parasymphysis, in association with mobility of teeth 35, 36, and 47. The magnetic resonance imaging (MRI) report showed some poorly defined, T2 hyperintense, peripherally enhanced collections in the deep intramuscular plane of the bilateral pterygoid fossa, extending into the bilateral parapharyngeal space, associated with mild surrounding soft tissue strands suggestive of abscess formation (Figure 2). The final diagnosis of mucormycosis of the pterygoid fossa with extension into the parapharyngeal space was made on the basis of histologic examination.
Case 2
A 50-year-old man presented with swelling on the left side of his face and blackish discharge from the left nasal cavity for 10 days. Diabetes mellitus had been diagnosed during the treatment of COVID-19. Extraoral examination revealed facial asymmetry, unilateral House-Brackmann grade V facial nerve palsy on the left side, and tenderness in the left infraorbital and zygomatic regions (Figure 3). Features of facial palsy seen were drooping of the corner of the mouth, absence of wrinkles on the left half of the forehead, and incomplete closure of the left eye, clinically confirming House-Brackmann grade V unilateral facial nerve palsy. The final diagnosis of mucormycosis of the rhino-orbital region was made on the basis of histologic examination. Histopathological examination of hematoxylin and eosin (H&E)-stained sections showed sinonasal mucosa infiltrated by broad non-septate hyphae branching at right angles, along with a mixed inflammatory cell infiltrate (Figure 5). The patient underwent endoscopy-assisted bilateral nasal cavity debridement under general anesthesia along with antifungal therapy.
FIGURE 5: H&E-stained section shows mucosa infiltrated by broad non-septate hyphae branching at right angles.
Case 3
A 43-year-old male patient presented with the main concern of swelling on the right side of his face for one month. His medical history included controlled diabetes mellitus for one year and recent hospitalization for COVID-19 infection. Extraoral clinical examination revealed facial asymmetry and features of unilateral facial nerve palsy, House-Brackmann grade IV (Figure 6). Palpation of the right malar region revealed tenderness. Intraorally, the right upper first molar showed grade 1 mobility. Examination of CT of the paranasal sinuses (CT PNS) revealed increased mucosal thickening within the right maxillary sinus, representing right maxillary sinusitis with pervasive erosion of the entire right maxillary sinus wall (Figure 7).
Also, ill-defined, mildly enhancing soft tissue extending from the pre-maxillary region to the right parotid space along the parotid duct and the neurovascular bundle was noted (Figure 8). The patient underwent right maxillary sinus debridement with right-side inferior alveolectomy under conscious sedation, right eye tarsorrhaphy, and functional endoscopic sinus surgery. The patient was on liposomal amphotericin B 10mg/kg daily and posaconazole 300mg, in addition to ceftriaxone, with metformin 500mg, glimepiride 2mg, and pioglitazone 1.5mg 1/2 BD. The histopathology report was suggestive of mucormycosis. In the H&E-stained section, a moderate degree of tissue invasion by the fungus was noted. The fungal invasion was characterized by broad, short, aseptate, obtuse-branching, eosinophilic hyphae at multiple sites, mostly confined to the necrotic tissue and vessels (Figure 9).
Discussion
It is not unusual for Rhizopus oryzae to be isolated from mucormycosis patients; it is responsible for 70% of cases. An increase in the number of mucormycosis cases has recently been seen in patients with COVID-19 and uncontrolled diabetes mellitus [8]. COVID-19, an emergency global public health event caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), presents as diffuse alveolar damage and severe inflammatory exudation with an immunosuppressed state. Factors that facilitate fungal infection in patients with COVID-19 may include low oxygen (hypoxia), high glucose (diabetes, new-onset hyperglycemia, steroid-induced hyperglycemia), acidic medium (metabolic acidosis, diabetic ketoacidosis (DKA)), high iron levels (increased ferritin), decreased phagocytic activity due to immunosuppression (SARS-CoV-2 mediated, steroid-mediated, or background comorbidities), and several other shared risk factors, including prolonged hospitalization with or without mechanical ventilation [9]. Early clinical diagnosis of RCM is difficult. Frequent clinical presentation includes malaise, headache, facial pain, swelling, and mild fever. The involvement of various tissues in the rhino-cerebral area can lead to a clinical presentation mimicking other conditions like cerebrovascular accidents (CVA) [10]. Facial nerve palsy as a presentation of RCM has also been reported in a few isolated cases. The exact pathology of the involvement of the facial nerve is unknown. Involvement of the pterygopalatine fossa has been reported by some authors as a route for the spread of mucormycosis to the facial nerve. The pterygopalatine fossa is also considered to be a reservoir of mucor, from where it spreads to the retroglobal space of the orbit and the infratemporal space [11]. Recent studies have demonstrated the spread of Mucorales species along peripheral nerves [12]. Another reason for the involvement of the facial nerve can be the pathology of resistance arteries in diabetic patients, which may cause edema and localized facial nerve ischemia. This would compromise the blood supply to the nerve, leading to palsy [13]. Diabetes mellitus is itself an immunocompromised state, changing the normal immunological response to infection in several ways. A hyperglycemic state stimulates the proliferation of the fungus and also decreases chemotaxis and phagocytic efficiency, permitting the organism to grow well in this environment. The enzyme ketoreductase is produced by the fungus Rhizopus oryzae, which allows it to utilize ketone bodies in patients with diabetic ketoacidosis, resulting in proliferation of the pathogen [14].
Subclinical facial nerve involvement has also been reported in 6% of patients with diabetes [15]. In a series of 126 patients with Bell's palsy, chemical or overt diabetes mellitus was found in 39% of the cases. In the same study, disturbances of taste were found more often in non-diabetic patients (83%), compared with only 14% of diabetic patients whose taste was affected. Thus, a common site of facial nerve lesions in diabetics appears to be distal to the chorda tympani. This may only be explained by diabetes-related pathogenesis and a vascular rather than a generalized "metabolic" impairment, leading to localized facial nerve ischemia in the distal part of the fallopian canal. Thus, some cases of Bell's palsy may be due to diabetic mononeuropathy [13]. Mucormycosis and hyperglycemia have been considered as the pathophysiology for facial paralysis in the three cases that have been documented. Recent studies have also suggested a neuroinvasive capacity of COVID-19, with facial palsy in COVID-19 patients as an initial finding, during the course of treatment, or after recovery. According to studies, this virus has a high affinity for angiotensin-converting enzyme-2 (ACE-2) receptors, which are frequently found in the nervous system, and it exhibits neurotropism, directly causing nerve damage [16]. Dubé et al. reported in their study with animal models that there is axonal transport of human coronavirus (HCoV) OC43 protein into the nervous system [17]. Antifungal therapy, surgical debridement, supportive therapy, and prosthetic devices are all used in the course of treatment. We provide a case series of three mucormycosis cases that were identified based on clinical findings, radiographic findings, and laboratory testing and that presented to the outpatient clinic. In every case, the patient experienced mucormycosis following recovery from COVID-19. Squamous cell carcinoma, chronic granulomatous infections including tuberculosis, midline lethal granuloma, rhinosporidiosis, tertiary syphilis, and other deep fungal infections should all be considered in the clinical differential diagnosis of the lesion. All three individuals showed COVID-19 infection, high blood sugar, and histopathologic evidence of mucormycosis. The pterygoid fossa is thought to be the starting point for the spread of infection; the MRI findings in Case 1 imply that the pterygoid fossa was involved, which may be the cause of the facial paralysis. In Case 3, the CT PNS scan revealed right parotid gland involvement along the neurovascular bundle, which may also be a contributing factor to the facial palsy.
Conclusions
In a few rare cases, mucormycosis has been linked to facial nerve palsy. Contrast-enhanced MRI can show the extent of mucormycosis. The expansion of the pterygopalatine fossa, which is discernible on MRI, can be used to identify facial nerve involvement. This case series is a contribution to the literature, but it is still unclear how facial nerve palsy affects individuals with mucormycosis as a whole. Facial nerve palsy must be taken into account as a clinical consequence to improve the quality of life of individuals with mucormycosis.
Additional Information
Disclosures
Human subjects: Consent was obtained or waived by all participants in this study.
Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Placental Abruption Complicated by the Couvelaire Uterus: A High-Risk Obstetric Case at 30 Weeks Gestation
Placental abruption, a rare but life-threatening obstetric emergency, presents substantial risks to maternal and fetal well-being. This case report documents the clinical journey of a 35-year-old woman with multiple risk factors who presented at 30 weeks gestation with symptoms suggestive of placental abruption, including colicky lower abdominal pain and vaginal bleeding. Notably, her late initiation of prenatal care and a history of pregnancy-induced hypertension added complexity to the clinical picture. The case revealed a Couvelaire uterus, an uncommon and challenging complication of placental abruption, further emphasizing the need for early recognition and swift intervention. A multidisciplinary approach played a pivotal role in managing this high-risk obstetric case. Imaging and laboratory tests facilitated diagnosis and assessment, guiding surgical intervention and post-operative care. Despite the severity of the condition, the patient experienced a positive outcome for herself and her fetus, highlighting the critical importance of timely and comprehensive medical care. This case report contributes to medical knowledge by shedding light on the rare Couvelaire uterus. It underscores the significance of early diagnosis, coordinated healthcare teams, and patient education in mitigating risks associated with placental abruption. Ultimately, it reinforces the vital role of healthcare providers in safeguarding the lives of expectant mothers and their infants in obstetric emergencies.
Introduction
Placental abruption, an infrequent yet perilous obstetric emergency, poses substantial threats to the wellbeing of both mother and fetus [1]. This case report aims to provide a comprehensive overview of placental abruption, including its clinical manifestations, complications, and outcomes. Additionally, we will delve into the unique and rare presentation of a Couvelaire uterus, a severe manifestation of placental abruption. Placental abruption is a critical obstetric complication characterised by premature separation of the placenta from the uterine wall before the delivery of the fetus. This condition can lead to significant maternal and fetal morbidity and mortality if not promptly recognised and managed. Therefore, it is essential to understand the signs, symptoms, and potential complications associated with this condition [2]. Abruptio placentae typically presents with symptoms such as colicky lower abdominal pain and vaginal bleeding. Recognising these clinical manifestations promptly is crucial, as they often necessitate immediate medical attention. Moreover, understanding the gestational age at which placental abruption occurs can significantly impact maternal and fetal outcomes. Therefore, this case report will explore the gestational age differences and their implications [3].
Placental abruption, defined as the premature detachment of the placenta from the uterine wall after 20 weeks of gestation, represents a condition fraught with grave consequences for both the expectant mother and the developing fetus [2]. Its clinical presentation can exhibit considerable variation, and the extent of the abruption may not always be apparent. A Couvelaire uterus, characterised by extensive infiltration of blood into the uterine myometrium, represents an exceedingly uncommon and intricate complication of placental abruption. Timely recognition and swift intervention are paramount in effectively managing this condition [3]. This case report offers a comprehensive account of the patient's clinical journey, encompassing her initial presentation with alarming symptoms and deteriorating vital signs, culminating in surgical exploration that resulted in the delivery of the fetus and placenta. Additionally, we delve into the patient's post-operative recovery, emphasising the importance of a multidisciplinary approach in addressing the intricate interplay of factors in cases of severe placental abruption. The favourable outcome underscores the significance of early diagnosis, prompt surgical intervention, and vigilant post-operative care, emphasising the pivotal role of healthcare professionals in handling such high-stakes obstetric scenarios [4].
Case Presentation
A 35-year-old woman, gravida three para one, presented to the emergency department during the late night hours at 30 weeks gestation in her second pregnancy. She complained of colicky lower abdominal pain and vaginal bleeding, which had been ongoing for six hours. During the history-taking process, it was revealed that she had previously used one sanitary pad, which was only mildly soaked. Her medical history included one full-term normal vaginal delivery complicated by pregnancy-induced hypertension one year earlier. Notably, she had not received prenatal care at our hospital and was only booked for care at 21 weeks in a private facility. Until this point, she had been relatively asymptomatic, with two antenatal visits. It is noteworthy that she declined maternal serum screening for chromosomal abnormalities and aneuploidies. Additionally, a fetal anomaly screening at 21 weeks had shown no signs of fetal anomalies. Upon examination, the patient's vital signs were concerning, with maternal tachycardia of 110 beats per minute and a blood pressure reading of 90/70 mmHg. Abdominal examination revealed a contracted and tense abdomen that felt woody-hard. A speculum examination showed active bleeding through the cervical os, and a small clot was evacuated during the examination. Subsequently, the patient was admitted to the hospital for further monitoring. Initial blood tests revealed a haemoglobin level of 8.4 g/dL, a low platelet count of 90,000/cumm, and a deranged coagulation profile characterized by prolonged prothrombin time (PT) and activated partial thromboplastin time (aPTT). An abdominal ultrasound showed a heterogeneous area anterior to the placenta, suggesting blood products, indicating placental abruption.
Given the patient's deteriorating condition and the need to explore the unique aspects of this case, we opted for an explorative laparotomy. Under general anaesthesia, a lower segment hysterotomy was performed via a Pfannenstiel incision. This approach allowed us to gain crucial insights into the condition of the patient's uterus and the complications arising from the Couvelaire uterus, a relatively rare and severe manifestation of placental abruption. During the laparotomy, our observations unveiled several noteworthy findings. Notably, we encountered a Couvelaire uterus (Figure 1), a condition marked by the infiltration of blood into the myometrium due to placental abruption. Upon making an incision into the uterus (Figure 2), we observed the presence of clots, further confirming the diagnosis of a Couvelaire uterus. As we proceeded with the surgery, we encountered the fetus and an entirely separated placenta. One of the major complications associated with a Couvelaire uterus is hemoperitoneum, a condition characterised by blood accumulating within the abdominal cavity. Unfortunately, this case presented with hemoperitoneum, which was evident during the procedure. The blood loss was approximately 800 mL (Figure 3), emphasising the severity of this complication.
FIGURE 3: Estimated blood loss was 800 ml with larger blood clots
The appearance of the uterus during the laparotomy remained consistent with a Couvelaire uterus, and we undertook the necessary steps to ensure hemostasis. The uterus was closed in two layers with meticulous attention to detail, ultimately achieving successful bleeding control (Figure 4).
FIGURE 4: Fundal view of Couvelaire uterus with dark purple patches
The patient's recovery was satisfactory after the surgery, with blood product replacements administered. Her coagulation profile returned to normal, and on the third post-operative day, her haemoglobin level was measured and found to be 9.2 g/dL, indicating a positive response to treatment. She was discharged in good health three days after the surgery. Subsequent histopathological examination of the placenta confirmed the presence of retroplacental and intramembranous haemorrhage with intervillous haemorrhage, consistent with placental abruption.
Discussion
This case illuminates an exceptionally rare complication: the presence of a Couvelaire uterus, characterized by the extensive infiltration of blood into the myometrial tissue. This condition often accompanies severe placental abruption, rendering surgical intervention a formidable challenge. Identifying this condition during the surgical procedure underscores the significance of a comprehensive intraoperative assessment and unwavering vigilance [5,6]. Effectively managing this case required the collaborative efforts of various medical specialities, including obstetricians, anesthesiologists, haematologists, and neonatologists. The cornerstone of our success lies in the timely recognition of the condition, swift surgical intervention, and the seamless coordination of post-operative care, all collectively contributing to a favourable outcome [7].
Currently, no established diagnostic clinical criterion exists for placental abruption. The New Jersey-Placental Abruption study [8] reports that the most common reasons for a clinical diagnosis of abruption include retroplacental clot(s) or bleeding (77.1%), followed by vaginal bleeding with uterine hypertonicity (27.8%) and vaginal bleeding with nonreassuring fetal status (16.1%). In our patient, a significant volume of bleeding was not observed until the time of surgery, at which point 800 mL of blood loss was recorded. Highlighting the urgency of early diagnosis in cases of placental abruption is imperative. In this instance, the delay in seeking medical attention and the tardiness in hospital booking may have exacerbated the severity of the abruption. Therefore, promoting awareness among pregnant women regarding timely prenatal care and recognising warning signs is paramount in averting such adversities [9]. Diagnostic modalities, such as trans-abdominal ultrasound, played a pivotal role in confirming the diagnosis and delineating the extent of placental abruption. Likewise, laboratory assessments, including the coagulation profile, proved invaluable in gauging the patient's bleeding risk and guiding the judicious transfusion of blood products [10]. Despite the intricacies of the case and the emergence of a Couvelaire uterus, the patient's expeditious surgical intervention and comprehensive post-operative care culminated in a favourable outcome for both the mother and the fetus. This serves as a poignant reminder of the indispensable role played by a well-coordinated healthcare team in safeguarding the lives of expectant mothers and their offspring [11]. This case report enriches the body of medical literature by shedding light on the rare complication of a Couvelaire uterus within the context of placental abruption. It underscores the imperativeness of early diagnosis and swift intervention in managing high-risk obstetric scenarios, emphasizing the need for continuous medical education and heightened awareness among healthcare providers. Furthermore, this case underscores the importance of patient education about prenatal care and recognizing warning signs during pregnancy. Healthcare systems should prioritize initiatives to augment patient awareness to mitigate delays in seeking essential medical attention.
Conclusions
In conclusion, the placental abruption presenting with a Couvelaire uterus in our case report not only underscores the challenges and complexities faced by healthcare providers in managing obstetric emergencies but also brings to light the unique aspects and novelty of this clinical scenario. Our case offers a distinctive perspective by shedding light on the intricate interplay of factors contributing to the development of a Couvelaire uterus, including the delayed presentation of the patient and the ensuing hemoperitoneum, a complication not extensively documented in previous literature. This novel insight into the pathophysiology of a Couvelaire uterus and its association with hemoperitoneum adds to the body of knowledge surrounding this rare condition. Furthermore, our report serves as a testament to the critical role of early diagnosis, a multidisciplinary approach, and comprehensive care in achieving favourable outcomes in high-risk obstetric situations. By presenting this unique case, we aim to highlight the significance of prompt recognition and timely intervention in placental abruption cases with atypical presentations, ultimately emphasizing the critical importance of healthcare professionals in ensuring the well-being of both mothers and infants in such challenging clinical scenarios.
FIGURE 1: The dark purple and copper colour patches with ecchymosis and indurations diagnostic of Couvelaire uterus or uteroplacental apoplexy - posterior view
FIGURE 2: Blood and blood clots noted on incision into the uterus and after delivery of the fetus
Grassroots innovation for the pluriverse: evidence from Zapatismo and autonomous Zapatista education
The social and environmental failure of successive Western development models imposed on the global South has led local communities to pursue alternatives to development. Such alternatives seek radical societal transformations that require the production of new knowledge, practices, technologies, and institutions that are effective in achieving more just and sustainable societies. We may think of such a production as innovation driven by social movements, organizations, collectives, indigenous peoples, and local communities. Innovation that is driven by such grassroots groups has been theorized in the academic literature as “grassroots innovation”. However, research on alternatives to development has rarely examined innovation using grassroots innovation as an analytical framework. Here, we assess how grassroots innovation may contribute to building alternatives to development using Zapatismo in Chiapas (Mexico) as a case study. We focus on grassroots innovation in autonomous Zapatista education because this alternative to formal education plays a vital role in knowledge generation and the production of new social practices within Zapatista communities, which underpin the radical societal transformation being built by Zapatismo. We reviewed the academic literature on grassroots innovation as well as gray literature and audiovisual media on Zapatismo and autonomous Zapatista education. We also conducted ethnographic fieldwork in a Zapatista community and its school. We found innovative educational, pedagogical, and teaching–learning practices based on the (re)production of knowledge and learning, which are not limited to the classroom but linked to all the activities of Zapatistas. Our findings suggest that innovation self-realized by Zapatistas plays a key role in the everyday construction of Zapatismo. Therefore, we argue that developing a specific theoretical framework of grassroots innovation for the pluriverse, based on empirical work carried out in different alternatives to development, is an urgent task that will contribute to a better understanding of how grassroots groups imagine, design, and build such alternatives, particularly across the global South.
Introduction: how may grassroots innovation contribute to building alternatives to development?
Capitalist reproduction involves various forms of imperialism and colonialism that have led to dependency in the global South (Hickel 2021; Veltmeyer and Petras 2015). For instance, many negative consequences arise from extractivism for exports of primary goods to the global North, which usually entails the growth of poverty, inequality, and environmental injustices across extractive zones (Toledo et al. 2013). As a result, a diverse array of grassroots movements, organizations and communities seek to design and build alternatives to development in the global South (Gudynas 2011a, b; Lang et al. 2013; Zibechi 2007). Examples include decolonizing money through local institutions like minga or tequio 1 in Latin America, eco-villages in Mexico and elsewhere, or the Ubuntu philosophy in South Africa (Cabaña and Linares 2022, this issue; Martínez-Luna 2009; Morris 2022, this issue; Ramose 2015). These alternatives are often based on the production of new knowledge and the revitalization of traditional knowledge.
Likewise, alternatives to development seek the (re)construction of political and territorial autonomy, reclaiming the commons, the development of innovative forms of collective and economic organization, ecotechnology, sustainable architecture, educational practices and social enterprises, the design and application of critical decolonial 2 pedagogies, and relational 3 ontologies, focusing on the well-being and sustainability of socioeconomics rather than economic growth (Clarence- Smith and Monticelli 2022, this issue;Escobar 2011;Esteva 2019;Medina-Melgarejo 2015). The notion of the "pluriverse" refers to the matrix of alternatives that exist in the world-and particularly across the global South-to the Western development project (Escobar 2012). Therefore, alternatives to development can be seen as paths to the pluriverse (Kothari et al. 2019b). The pluriverse is underpinned by the huge cultural diversity that characterizes our species and can be found in any cultural domain. An early example of the pluriverse in practice can be found in the field of parenting and education. Notably in 'Our Babies, Ourselves', Small (1999) explained how biology and culture shape the way we parent. Her book introduced the new science of ethnopediatrics, which explores "why we raise our children the ways we do and suggests that we reconsider our culture's traditional views on parenting". The message is clear: there is not a single way of parenting, nor are the Western ways inherently better ones. In a more recent contribution, Dieng and O'Reilly (2020) present feminist parenting perspectives from Africa and beyond. Their anthology's main contribution is "to broadcast reflections and experiences that emanate primarily from voices that are often overlooked, even by global feminist discourses: those of African women (and men), living on the continent or in the diaspora, and from others born and raised in the global South". In doing so, these authors aim at "(re)claiming parenting as a necessarily political terrain for subversion, radical transformation and resistance to patriarchal oppression and sexism". These insights call for acknowledging, embracing and fostering the diversity of cultural perspectives that are found worldwide in relation to every single aspect of social life. The diversity of cultural perspectives naturally present in the world-including the pluriverse of non-Eurocentric perspectives-is not recognized by hegemonic institutions such as the United Nations, however. For instance, the Sustainable Development Goal 4 (SDG 4) is education, and aims to "ensure inclusive and equitable quality education and promote lifelong learning opportunities for all". Its intentions seem good, like all other SDGs. However, from a postdevelopment perspective it is very problematic to see education from a single, universal viewpoint, which is the Western mainstream understanding of what education shall be. The modern, Western ontology assumes the existence of one single world, a universe, which is socially constructed based on the Western rationality that is underpinned by modernity, colonialism, capitalism, patriarchy and anthropocentrism, and is materialized and imposed worldwide by the development agenda (Escobar 2011). This is the vision behind the United Nations' SDGs. Nevertheless, this hegemonic vision is questioned by the existence, practice and resistance of many communities and their worldviews around the world. 
They embody many distinct ways of imagining life, seeking well-being, parenting and education, and so forth. The alternative pathways being built by these communities, which represent breaking points with the dominant rationality, could be understood as ontological struggles. They walk toward the "pluriverse", a concept defined by the Zapatistas as "a world where many worlds fit". 3 As noted, the pluriverse has a direct resonance with alternatives to development. Therefore, this idea is becoming increasingly important in the post-development literature, where activists and scholars are exploring and studying concrete alternatives to development such as Zapatismo in South Mexico, Buen Vivir in Bolivia and Ecuador, and the Self-Help Groups in rural India (Chuji et al. 2019; Leyva-Solano 2019; Saha and Kasi 2022, this issue), most of which are immersed in socio-political projects of struggle and social and ecological justice in the global South (Baronnet and Stahler-Sholk 2019; Lang 2022, this issue; Zibechi 2012). We can assume that the construction of any alternative to development implies a radical rupture with the dominant capitalist rationality by organizing society in a profoundly different way. Therefore, in such situations, it is essential to generate new ideas, knowledge, practices, beliefs, technologies, norms and institutions. As these generative processes are created and promoted by grassroots groups, they can be thought of as "grassroots innovation" for the pluriverse. Although we can intuitively think of the need for grassroots innovations to create designs for the pluriverse, alternatives to development or transitions to sustainability (Escobar 2011, 2017), innovation has barely been the focus of research in these contexts, barring a few exceptions (e.g., Escobar 2016; Manzini 2015). In addition, the concept of "grassroots innovation" has seldom been applied as an analytical lens in these contexts (Maldonado-Villalpando and Paneque-Gálvez 2022). The bulk of literature on grassroots innovation has rather focused on the analysis of social transformation processes that are far less critical of the dominant capitalist rationality. This literature has been produced mostly in Europe and India, though with distinct flavors in each geographical and cultural context. In Europe, scholars have defined grassroots innovation as the generation of novel bottom-up solutions inspired by the local context to tackle social needs and environmental problems, and that are driven mostly by ideology (Seyfang and Smith 2007; Seyfang and Longhurst 2013). Grassroots movements and communities have designed many innovative ideas around such transformations and tend to organize in networks at different scales (Smith et al. 2017). While the literature on grassroots innovation is quickly growing in the global North, in the global South few scholars have paid attention to it. The exception to this observation is India, where the literature refers to the identification of innovative ideas, practices and technologies based on indigenous and local knowledge in marginalized communities, which are materialized in collaboration with academics and public institutions (Gupta et al. 2003; Gupta 2016; Kumar and Bhaduri 2014; Ustyuzhantseva 2015).
Since the analytical lens of "grassroots innovation" has not been adopted to research the potential role of innovation in the design and construction of alternatives to development (Maldonado-Villalpando and Paneque-Gálvez 2022), here we argue that it is key to begin exploring the alleged usefulness of this concept regarding the design of paths for the pluriverse. Some of the arenas of social life and culture in alternatives to development that may be key to the emergence and diffusion of grassroots innovation for the pluriverse are those concerned with popular education and collective learning, conviviality and communality, 4 political autonomy, and relational ontologies linked with indigenous worldviews (Barkin 2019; Escobar 2014; Esteva 2002; Illich 1973; Martínez-Luna 2016). In this paper, we argue that popular education, autonomous education and collective spaces for free learning may be key spheres of social life to assess how grassroots innovation unfolds and can contribute to building alternatives to development. Our premise is that such alternatives to formal education form historical-political subjects and new subjectivities that are emancipatory of the dominant rationality, especially in contexts of the global South (Barbosa 2013, 2015, 2020). In this paper, our aim is to assess the alleged importance of grassroots innovation for the pluriverse. To that end, we analyze a specific case study, Zapatismo-an alternative to development in Chiapas, Mexico-and take a closer look at the autonomous Zapatista education, which has been designed and implemented by Zapatistas according to their own worldviews.
Theoretical framework: grassroots innovation, post-development and Zapatismo
Grassroots innovation
Theoretical perspectives and studies on grassroots innovation have emerged to a greater extent in the global North, particularly in Europe. Several researchers have defined grassroots innovation as novel networks of activists and organizations that generate bottom-up innovative solutions for sustainable development-e.g., coproduction of knowledge, development of alternative technologies, social learning, changes in consumption behaviors-thus responding to local social-ecological concerns from civil society (Seyfang & Smith 2007; Smith et al. 2017). In the global South, on the contrary, the conceptualization of grassroots innovation has been mostly oriented toward the identification and public promotion of new ideas, technologies and products in rural communities to improve the well-being of the poor (Gupta 2012; Gupta et al. 2019).
Table 1 shows a synthesis of some of the main views on grassroots innovation and examples of practices, processes, and goods or services in contexts of the global North and South. None of the main theoretical strands on grassroots innovation are primarily concerned with radical, bottom-up innovations aimed at creating alternatives to development. There are several recent studies on innovation realized by grassroots groups that seek to create radical ruptures with the dominant capitalist rationality (e.g., Apostolopoulou et al. 2022; Boyer 2015). At the same time, the academic literature on post-development, alternatives to development and the pluriverse has barely focused on the analysis of innovation per se, even though innovation is central to the creation of radically new societies. Rather, this literature includes many studies on issues that are related to innovation--often using concepts like creation, design, coproduction, self-organization, autonomy, alternatives, revolutionary, and so forth--but without a fine-grain analysis of innovation and its role. All in all, we identify two major research gaps in relation to innovation in the literature of post-development, alternatives to development and the pluriverse: (1) we know relatively little about how innovations may unfold and contribute to the design and construction of the pluriverse by grassroots groups, particularly across different contexts of the global South, partly because there are few empirical studies concerned with the analysis of innovation; and (2) we lack a specific conceptual-theoretical framework for innovation in this literature, and a single appropriate term for this type of innovation, e.g., "grassroots innovation" or a similar one, has not been consistently used (Maldonado-Villalpando and Paneque-Gálvez 2022). A relevant issue that may arise is whether the existing theoretical framework of "grassroots innovation" is well suited to analyze the innovation that is realized by grassroots groups in their designs for the pluriverse, 5 considering that it has not been used for this purpose (see for instance recent reviews by Hossain 2016, and Maldonado-Villalpando and Paneque-Gálvez 2022). Some authors may argue that since this framework has been mostly developed by authors from the global North and is therefore embedded within a Western worldview, it may be unsuitable to explain the radical breaks with the capitalist development rationality that are the basis of alternatives to development in the pluriverse, which are often embedded in indigenous cosmologies. We argue that, rather than dismissing altogether this framework, it would be better to adapt it and tailor it to the case of alternatives to development. We see several advantages to this approach. First, the term "grassroots innovation" is short, clear, and marks unequivocally the agency of those in charge of the innovation, which is something usually neglected by the conventional, Western economic views on innovation (Solis-Navarrete et al. 2021). Second, although most grassroots innovation initiatives across the global North are less radical 6 than their counterparts in alternatives to development across the global South, there are many valuable lessons that can be taken from the current literature on grassroots innovation.
Third, using the same term as that used already in transformative contexts of the global North may allow for establishing more fruitful dialogues, learning spaces and alliances across sites, and facilitate comparative studies across different geographical contexts. There are arguably difficulties to employing the concept "grassroots innovation" in the analysis of innovation within the literature of post-development, alternatives to development and the pluriverse. A key problem is that this term has seldom been used when innovation is analyzed in this literature. However, we posit that this limitation can be circumvented by digging into this literature not just for direct but mostly for indirect indications of innovation realized by grassroots groups. In addition, we suggest it is crucial to produce empirical studies on the innovations carried out by grassroots groups engaged in the everyday design and construction of alternatives to development. Such studies, in turn, will allow for the design of an appropriate theoretical framework of grassroots innovation for the pluriverse. Irrespective of whether we analyze grassroots innovation in alternatives to development by conducting a literature review or undertaking a case study, as we do here, it is essential to analyze information related to new collective ideas, designs, processes and outcomes, which generate new knowledge, practices, beliefs, behaviors, products, technologies, local institutions or programs. All these items can be considered as "grassroots innovation". This type of innovation is driven by the exchange of knowledge and learning, based on the political-educational project of grassroots groups. In the global South, grassroots innovation is usually motivated by the defense of territories and life as a condition for (re)producing their livelihoods and cultural identity. In addition to novelty or newness, some characteristics of grassroots innovations in the context of alternatives to development refer to the creation of radical ruptures with capitalist and neocolonial logic, the construction of profound transformations and more just social-ecological transitions, the intercultural dialogue of knowledges, or the construction of community autonomy beyond the State and the neoliberal market.
Footnote 5: We paraphrase Escobar's work Designs for the Pluriverse. Radical Interdependence, Autonomy, and the Making of Worlds (2017), where he addresses three designs for the pluriverse in relation to: 1) transitions, 2) social innovation and 3) autonomous design. The first considers post-development, Buen Vivir, the Rights of Nature, and post-extractivism in the global South; the second is oriented toward the relationship between design and social change from the postulates of Manzini (2015); and the third focuses on autonomy as a theory and practice of interexistence and interbeing, and the realization of the communal. In our view, grassroots innovation underpins these three dimensions of design.
Footnote 6: It is important to note here that many of the experiences analyzed using the framework of grassroots innovation in the literature, both in the North and the South, seek to reform public policies and the negative outcomes of current institutions without seeking to radically transform the workings of society.
These innovations also incorporate values such as diversity, austerity, ethics and the defense of the commons, relational ontologies, social and ecological justice, horizontal links, the dignity of individual and collective work, care for life or ecological sustainability (Maldonado-Villalpando and Paneque-Gálvez 2022).

Post-development studies and grassroots innovation

Post-development studies focused initially on the deconstruction of both the dominant development discourse and the discourses of development alternatives, moving on to studying alternatives to development imagined, and sometimes enacted and materialized, by social movements, peasant organizations or indigenous peoples as forms of resistance to the extractivist, neocolonial and patriarchal project of modern capitalism (Franzen 2022, this issue; Gudynas 2012; Piccardi and Barca 2022, this issue; Svampa 2012). The current debate in Latin America and other regions of the world is focused on post-development and its articulation with the study of different alternatives to development as pluriversal paths; for example, projects such as post-extractivism, post-growth, post-patriarchy, post-colonialism, or transmodernity (Escobar 2012; Kaul et al. 2022, this issue; Naylor 2022, this issue). These alternatives are closely related to the radical critiques of many indigenous societies, as they are not embedded in the ideology of progress and transcend the Western development project, thus having the potential of relational transformations toward communal autonomy and ethics beyond market exchange (Demaria and Kothari 2017; Gudynas 2018; Loh and Shear 2022, this issue). The manifestation of a transformative alternative may occur at several levels (Villoro 2015: 19): (1) at the level of the State, it opens the dilemma of gradual, moderate change versus radical, fast-paced change or revolution; (2) at the level of society, through enabling people to achieve higher levels of participation that enhance democracy; (3) in culture, it may unfold by embracing a plurality of cultures, i.e., multi- or interculturalism; (4) at a cosmological level, it may be expressed by the idea of the relativity of space-time; (5) at the religious or sacred level, it may occur through the acceptance of multiple faiths and beliefs. Any alternative to development creates new, radically different societal designs that produce new outcomes at the levels mentioned to a lesser or greater extent. As we have argued before, these radical societal transformations depend upon grassroots innovations, which are often embedded in non-Western cosmologies. Some empirical examples, found through collective strategies or initiatives that are aimed at the transformation and improvement of grassroots communities, are the solidarity exchanges in the autonomous rebel zones of Mexico, the matristic culture in Rojava, Buen Vivir as a bottom-up transformation based on indigenous worldviews, and the itinerant schools of the Landless Workers Movement of Brazil, or La Via Campesina (Barbosa 2013; Barkin 2018; Lang 2022, this issue; Piccardi and Barca 2022, this issue). Alternatives to development are characterized by several features, e.g., the suppression of hierarchies and anti-patriarchalism, conviviality and communality, care for life at the center, a spirit of sufficiency and simplicity, reciprocity and solidarity, autonomy through self-government, direct participation, and defense of territory to live well (Barkin 2019; Esteva 2002, 2014; Kothari et al. 2019a; Martínez-Luna 2016; Schöneberg et al. 2022, this issue).
Likewise, most alternatives to development place high on their political agenda issues concerning environmental sustainability, such as decarbonization, de-capitalization, degrowth or post-growth, decoloniality, and eliminating corruption from socio-political institutions through radical democracy (Gills and Hosseini 2022, this issue).

Grassroots innovation in Zapatismo and autonomous Zapatista education

The uprising of the Zapatista Army of National Liberation (EZLN, its Spanish acronym) in 1994 was made up of indigenous Tzotzil, Tzeltal, Chol, Tojolabal and Mame communities of Mayan descent. This process has evolved and matured since then, crystalizing into what is known as Zapatismo, which is recognized as an alternative to development by academics and social activists (Escobar 2017; Leyva-Solano 2019). The Zapatistas have promoted and experimented with novel initiatives as an expression of the movement of struggle and territorial autonomy (EZLN 2015). These include, for example, self-government through the implementation of the seven principles of Mandar Obedeciendo (Governing by Obeying)⁷ (EZLN 2013: 22). The reappropriation of geographic space has led to a new autonomous territorial delimitation⁸ through political organization at three levels of coordination: (1) the Zapatista support base communities, (2) the Rebel Autonomous Zapatista Municipalities, and (3) the Caracoles⁹ (literally translated into English as "snails", a reference to the spiral course of history) and the Juntas de Buen Gobierno (Good-Government Councils) (EZLN 2005, 2013; González-Casanova 2009a). In 1994, in response to the demands that the State was unable or unwilling to address, the Zapatista indigenous people and peasants decided to implement autonomous Zapatista education as an alternative to the official educational system. This alternative was designed and implemented across Zapatista territories based on novel practices and pedagogies in multiethnic contexts (Baronnet 2015; Baschet 2018a, b). In addition to looking for grassroots innovation in Zapatismo, we examine its occurrence within autonomous Zapatista education because of its relevance to the defense of life and the construction of collective and territorial autonomy. Additionally, it is an alternative to the official educational system that goes beyond formal education and the classroom in the Zapatista support base communities. These communities create new notions, knowledge, practices, norms, pedagogies and teaching-learning methods in contexts of ethnic interculturality that are key to the (re)production of the cultural and political resistance project of Zapatismo (Barbosa 2020; Baronnet 2011, 2013; Baronnet and Stahler-Sholk 2019). As with other alternatives to development, scholars of Zapatismo have rarely evaluated innovation explicitly, either in Zapatismo or in autonomous Zapatista education. However, many authors have acknowledged many distinct, new ideas, processes and outcomes that have emerged from Zapatismo, which can be regarded as grassroots innovation following the rationale we presented above. Nonetheless, the contributions of this type of innovation toward more just and sustainable ways of life in contexts of political struggle, resistance and autonomy with respect to neoliberal development remain mostly unexplored in the literature on Zapatismo. Furthermore, grassroots innovation does not seem to have been evaluated in the design and materialization of alternatives to schooling in the global South.
Given the potential of alternatives to schooling in the design and everyday construction of alternatives to development, in this paper we evaluate the role that grassroots innovation can play in the case of autonomous Zapatista education. Specifically, we seek to answer this research question: How can grassroots innovation in autonomous Zapatista education contribute to the everyday construction of Zapatismo? After answering this question, we will reflect upon the potential role of grassroots innovation for the design and construction of other alternatives to development and pluriversal paths.

Literature review, participatory action-research and ethnography

We first analyzed innovations in the design and everyday construction of Zapatismo and autonomous Zapatista education. To do this we reviewed literature and various documentary sources. We applied the search, assessment, synthesis, and analysis framework to the literature selected for its quality and relevance (Grant and Booth 2009). We searched for scientific and gray literature in both English and Spanish over the period 1994-2020 (we selected that period because the Zapatista uprising began on January 1, 1994). To perform the search, we used Web of Science, Scopus, and Google Scholar. We reviewed theories and case studies in publications and book chapters on grassroots innovation (38) as well as post-development and alternatives to development (24). We then looked for grassroots innovation in the literature on Zapatismo and autonomous education (27). In addition to the literature review, we analyzed grassroots innovation in an indigenous Tzeltal Zapatista community. Our research approach combined participatory action-research and ethnography. We conducted fieldwork during several visits throughout 2019-2021, though it was interrupted for most of 2020 and half of 2021 due to the COVID-19 pandemic. This entailed assisting the families of the community in their daily chores (e.g., agricultural tasks, cooking, cleaning, traditional rituals), helping with teaching-learning in the Escuelita (Zapatista school) and living with a family. We also attended important cultural and political Zapatista events outside of the community. Data collection and generation consisted of participant observation, a field diary, photographs and videos, open-ended interviews with family members and community actors, and many informal conversations with men, women, teenagers, boys, and girls in the community. During fieldwork, we evaluated to what extent the everyday knowledge, practices, beliefs, technologies, norms, institutions and programs created through autonomous Zapatista education are innovative in meeting human needs, improving social relations and empowering community members to better address the environmental problems and territorial conflicts facing the community (we sought here the three dimensions of local innovation proposed by Moulaert et al. 2005).

⁷ Seven principles of the Zapatista movement: To serve, not to be served; to represent, not to supplant; to build, not to destroy; to obey, not to command; to propose, not to impose; to convince, not to defeat; and to go down, not up (EZLN 2013: 22).

⁸ The autonomous territorial delimitation is made up of support base communities and municipalities with new names because they are not officially recognized by the Mexican State.

⁹ Regional coordinating instances of self-government with their Good-Government Councils.
The action-research was manifested in the processes of mutual learning, dialogue, and exchange of knowledge in Spanish and in the Tzeltal Mayan language with all members of the Zapatista community. At the request of the community, we taught literacy, geography, and arts in the Escuelita.

Case study: Zapatismo and autonomous Zapatista education in Chiapas, Mexico

As part of the pluriverse of alternatives to schooling and decolonial pedagogies in Latin America, autonomous Zapatista education can be understood as a vital building block in the construction of alternatives to development. The Zapatista Caracoles were created in 2003 and govern the Zapatista Autonomous Rebel Municipalities to resolve the conflicts and inequalities that may occur between them. These changes correspond to a very novel and advanced form of political organization and territorial autonomy through the Caracoles and the Good-Government Councils that allow for common languages and increasingly broader consensus (Aguirre-Rojas 2007; González-Casanova 2009b; Romero 2019). In 2019, new Caracoles were created from the declaration "Y rompimos el cerco" ("And we broke the siege"). There are currently twelve Caracoles with their Good-Government Councils, autonomous municipalities, and their Zapatista support base communities.

Fig. 1 Maps of Chiapas in Mexico, the Zapatista region, and the municipality where the community we worked with is located. The exact location and name of the community are not shown to maintain their anonymity.

Our study area is in the Caracol La Garrucha, which includes five municipalities. The Tzeltal indigenous community where we conducted our study is in the municipality of Ocosingo, close to the Lacandon Rainforest (Fig. 1). In the Caracol La Garrucha, autonomous Zapatista education began in 1999 with the training of educational promoters in the municipalities of Francisco Gómez and San Manuel. Students are taught to count, read, write, and talk about issues that concern their daily life, including the EZLN's struggle. The study community is made up of five Tzeltal families from the municipality of Oxchuc, in the highlands of Chiapas, and has several wooden houses, a school, an autonomous health post, a chapel, corn plots, coffee plantations, a water spring, and a graveyard. The school is attended by 13 boys and 8 girls aged 3-14, with a temporary teacher assigned by the community. They attend school every morning from Monday to Friday and spend the afternoons with their parents or grandparents helping them with agricultural and domestic activities (e.g., fetching water and firewood, working in the family's cornfield). Their main recreational activities are swimming in the river, fishing and climbing trees to harvest fruits. The political and military contexts across the study area are complex and shape not just Zapatismo and its autonomous educational system, but also the possibilities for doing fieldwork. The entire Zapatista territory is surrounded by the Mexican army. Its presence can be seen from the hilltop of the Tzeltal indigenous community we conducted the study in. The Zapatista territory is discontinuous (Souza 1995), so Zapatistas, supporters and former Zapatista militants coexist. Paramilitary groups funded by local ranchers and possibly the Chiapas State government, as well as government social programs, are used as counterinsurgency strategies against the Zapatista movement (Aquino Moreschi 2013; López y Rivas 2013).
In addition, as elsewhere in Mexico, the territories inhabited by the Zapatistas endure the presence of narco cartels. It is unclear to what extent the organized crime groups that try to displace Zapatista and non-Zapatista indigenous communities from their territories are financed by the State.

How can grassroots innovation in autonomous Zapatista education contribute to the design and everyday construction of Zapatismo?

The construction of the autonomous educational and pedagogical processes over almost thirty years has been both gradual and radical. The transition of autonomous education has two crucial moments: the configuration of the autonomous educational system (1997) and the creation of Caracoles and municipalities (2003). We identify and analyze the following innovative practices of autonomous Zapatista education: (a) practices of educational autonomy, for example, the co-design of guides and textbooks, and the self-organization and self-management of educational projects and materials; (b) political-pedagogical practices of resistance, supported by teaching-learning inside and outside of the classroom through political-militant practices of the Zapatista movement; and (c) autonomous teaching-learning practices, for example, regarding the needs of community life and Zapatista territorial political autonomy. Below, we present the main characteristics and several examples of the grassroots innovations we have identified in the literature review, during fieldwork and through complementary audiovisual sources on autonomous Zapatista education.

Practices of educational autonomy

The practices of educational autonomy consist of both new and reimagined forms of self-organization and self-management. For example, each of the Caracoles, through the Good-Government Councils and the education commissions, decides in assembly what type of educational projects will be collectively self-managed using local and international resources, and how they will be implemented in the autonomous municipalities through new regulations that guide educational practices as alternatives to official education in Mexico (Table 2).

The political-pedagogical practices of resistance

The political-pedagogical practices of resistance to capitalism and the neoliberal State are constituted by the diversity of Mayan indigenous, traditional, and ideological knowledge of the Zapatista struggle (Table 2). These practices have new and traditional elements whose central axis is the transmission and generation of practical knowledge in the classroom and the community to address the needs of daily life and strengthen individual and collective autonomy. Zapatista resistance pedagogies barely rely on written knowledge and can be planned or arise spontaneously during the teaching-learning processes with the participation of students in the classroom, community, assembly, collective work, and cultural encounters.
Raúl Zibechi says with regard to his experience in the Escuelita Zapatista: "[…] It is a pedagogy of fraternity, a pedagogy in which we are all equal in hierarchies, and we are equal in work, in sharing work that is the most important thing […] and from there, sharing food, sharing housing, sharing the territory […] so I think that there, what is born is another pedagogy that starts from another way of doing politics, and a new political culture is a fundamental learning."¹²

Table 2 (excerpt). Examples of the grassroots innovations identified: in the Tzeltal Jungle Zone, through pedagogical autonomy, they invent content and teaching methods through the community assembly, e.g., games, artistic activities, the true history of social fighters; co-production of knowledge and learning, e.g., from age 13 they decide to be education or health promoters, or learn trades or political functions; new political pedagogies of resistance in everyday life, e.g., civil services and positions as community representatives and in the autonomous municipal councils; more equitable distribution of power relations between the EZLN and the civilian bases; reappropriation of communal lands as autonomous territory; autonomous teaching-learning practices, i.e., development of new learning and knowledge through conviviality and autonomy, e.g., narratives of struggle and autonomy, Caracoles, autonomous municipality Ricardo Flores Magón; new learning applied to territorial autonomy, e.g., ecological management of their territory as distribution of space, organic cultivation of coffee, corn, beans, and squash, food sovereignty; exercise of indigenous rights without the presence of the State; decentralization, radical democracy, and autonomous government.

The autonomous teaching-learning practices

As for the autonomous teaching-learning practices, they express the militant experience of the indigenous and peasant leaders who initiated the Zapatista political movement (Table 2). The teenagers and children learn the history and actions of the movement in other spaces beyond the Escuelita, e.g., in everyday family and community spaces. They learn about all organizational levels through direct participation in positions or political actions to sustain life and autonomy in their territories. Comrade Magdalena from Caracol II (Oventik), a member of the general coordination of the educational system of the Los Altos de Chiapas region, discusses "the other education" that has been implemented: "The other education is one of our demands, which forced us to become rebels against the 'bad government' and the 'big capitalists' [...] for that reason we began to build the new education for the people based on the humanistic thinking of our ancestors [...] the practice teaches us and what we learn will be what becomes 'awareness education' [...] we seek the transformative action of society [...] teaching is for life to better understand our world and within our Zapatista struggle an autonomous education started from the heart and in the thinking of our people."¹³

The novel practices of educational autonomy, political-pedagogical resistance, and autonomous teaching-learning in the Caracol "La Garrucha" and four autonomous municipalities, including that of the study community, are based on the objective of "sharing, learning together and from everyone". Through coordination between Zapatista communities and the NGO Enlace Civil (1995), they implemented the project called Semillita del Sol (Little Seed of the Sun), which is structured in three levels. In the first level, students learn to read, write, and draw.

¹² Transcript of video entitled: Entrevista a Raúl Zibechi, La Experiencia de La Escuelita Zapatista (PromediosMexico 2013).

¹³ Transcript of video entitled: Los Pueblos Zapatistas y La Otra Educación II (Agencia Prensa India 2011).
In the second, they learn about the Zapatista demands, while in the third, they study the public statements issued by the Zapatistas to communicate their goals, their efforts to construct autonomy, and the opposing social-political strategies of the government. In the Caracol "La Garrucha", Zapatistas are more interested in learning about trade, deprofessionalization and decision-making in the Autonomous Government, the self-management of projects demanded by the support bases (indigenous communities) in the Caracoles, and the building of autonomy and Zapatista territorial control.¹⁴

Further insights from the field

In the community where we did fieldwork, the dynamics of knowledge and social learning are generated from the construction of the discourse of autonomy and resistance, the defense of the territory and its Tzeltal culture. The autonomous educational and political-pedagogical practices of resistance and the innovative teaching-learning identified at the Zapatista movement level, in the Caracol "La Garrucha" and in the indigenous Tzeltal community where we did fieldwork are based on the daily construction of autonomy (see Table 2). Also, they are not limited to the educational promoter. Rather, they involve the participation and interaction of parents and grandparents with the children. Likewise, the adults, teenagers and children of the community create protest art and share knowledge in the Tzeltal language in the kitchen, the milpa (cornfield), the water spring, the coffee plantation, the temazcal¹⁵ or in rituals. A grandfather and his eldest son commented on the importance of listening, learning, and putting into practice the ideas that are collectively generated and shared.¹⁶ This community has a temporary educational promoter. For that reason, the representatives of the community asked us to participate in some classes of the Escuelita (which has children aged four and older). Within the classroom, teaching-learning and pedagogical practices are not imposed by teachers. Children raise their concerns and voice their opinions with confidence. The creation of knowledge and learning is not authoritarian or imposed. These communities drive change through knowledge and learning in decision-making spaces such as the assembly and in the creation of educational content according to the Zapatista principles of Mandar Obedeciendo. They always keep in mind the philosophy of the movement, the Mayan identity, and the everyday construction of territorial autonomy. For example, the importance of autonomous education is expressed in the words of a colleague from the community: "[…] Our children have to learn how we live, how we organize ourselves.
For example, in history: Why was the war raised in 1994, or how did our ancestors live? How was the bad government in 1968? […] After 1994 they have to learn: Why did people organize, and how quickly did they do so? The Zapatista organization is already at the national level and children have to know it. They have to learn our history as it is; they have to learn everything that concerns us, they have to learn to write and count, and they also have to learn their Tzeltal language."¹⁷

¹⁴ Field diary entries about conversations with a former educational advocate from the study community, the first week of January 2020.

¹⁵ An ancestral indigenous practice that is performed every day before sleeping in the Zapatista community of study; it consists of a restorative steam bath for the body. The members of the community lie down on the wooden floor and receive the steam given off by red-hot stones after the grandfather pours water on them. Shared activity in the study community during fieldwork in 2019-2021.

¹⁶ Interview with ex-health care promoter, July 2019.

The novel practices analyzed in autonomous Zapatista education are innovative to the extent that they generate profound transformations in power relations that are more horizontal than vertical, the resolution of conflicts between Zapatistas and non-Zapatistas, the improvement of life conditions, the reappropriation of land, and the enhancement of environmental management and defense of territory. In addition, Zapatista communities, municipalities, and Good-Government Councils have implemented initiatives and autonomous educational projects oriented toward the construction of self-sufficiency, self-management, and intercultural self-organization. This allows them to inhabit their autonomous territory in harmony with nature and ancestral local knowledge. Zapatistas do not expect the Mexican State to grant them quality of life, and they are independent of the national and international markets.

Reflections upon the potential of grassroots innovation in autonomous Zapatista education and Zapatismo

The findings of our literature review and fieldwork indicate that the potential of grassroots innovations in Zapatista autonomous education arises from the motivations of political struggle, its social demands and the seven principles of Mandar Obedeciendo (EZLN 2013), as well as from their pluri-ethnic sociocultural context, all of which is expressed in their novel educational practices and learning as alternatives to the official national educational system and the dominant capitalist rationality (Esteva 2002, 2014). The conception of autonomous education incorporates the socio-historical vision of political struggle and the construction of individual and collective autonomy from the Escuelita, the family and the community, through the connection between theory and the daily practice of Zapatista militants (Barbosa 2016; Baschet 2018b; EZLN 2015). The materialization of innovations in autonomous education by its promoters is not limited to teaching-learning in the community schools. This is because pedagogies and didactics have been collectively created to meet needs, address problems and continue the search for radical changes through more horizontal relations in contexts of ethnic diversity and direct democracy (Villoro 2015; Baronnet 2013, 2015, 2019). We found that, in the practices of educational autonomy, grassroots innovation is manifested in the defense, reappropriation, and management of territorial autonomy. For instance, educational promoters teach children and teenagers about Zapatista territorial political organization and autonomy. The new territorial limits produce new knowledge, learning and pedagogies from the support base communities and schools (Aguirre-Rojas 2007; González-Casanova 2009b). Teaching-learning practices are linked to traditional and local knowledge, and to the transformative learning of the Zapatista movement.
These are, for example, artistic practices such as the creation of murals with natural materials, poems of rebellion, the coordination of cultural events, and documentaries. The political-pedagogical practices of resistance are strategies created collectively as political acts of struggle and learning spaces, which aim to go beyond alternatives to schooling. These include free apprenticeships, the teaching of trades and knowledge in service of indigenous communities, and deprofessionalization (Barkin and Sánchez 2019; Esteva 2014; Pinheiro-Barbosa 2013). The innovative practices identified in autonomous education are linked to the reproduction of traditional knowledge and multiethnic learning and are strengthened by the collective art of resistance as a source of creative liberation for children and teenagers. Likewise, the proposal of autonomous design by Escobar (2017), where "every community practices the design of itself", applies to the new designs and conceptions of autonomous education, but also to all areas of the Zapatista movement that have operated in contexts of autonomy and resistance. For this reason, the innovative educational practices found in Zapatista autonomous collective design are key in the generation and management of knowledge and social learning for strengthening the relational ontological diversity of native identities and the socialization of values of coexistence with the natural environment across Zapatista territories (Baronnet 2015; Illich 1973; Martínez-Luna 2016; Escobar 2017). Learning and knowledge coproduction are essential in grassroots innovations, especially regarding sustainability and more critical understandings of nature (Gupta et al. 2003; Kumar and Bhaduri 2014). In addition, whereas for these authors the use of technological innovations and information technologies is central to grassroots innovations, in the Zapatista context this is mostly related to the use of the internet and independent media for the dissemination of Zapatismo regionally and globally. Educational and pedagogical practices are innovative because they enhance horizontal power relations together with economic activities of resistance, self-sufficiency, alternative and traditional health, the organization of autonomous government and justice, and the defense of territorial autonomy (Barkin 2019; Baronnet et al. 2011; Lang 2015; Leyva-Solano 2019). The construction of networks functions as a symbol that unites communities of interest and practice (Seyfang and Smith 2007; Smith et al. 2017). Zapatista grassroots innovations are influential in the creation of international networks such as the alterglobalization movement (Pleyers 2019). The links and alliances built through autonomous Zapatista education are a concrete expression of post-capitalism and decoloniality (Kothari et al. 2019a, b). When analyzing grassroots innovations in autonomous Zapatista education, we find that Baronnet et al. (2015, 2019) and Barbosa (2013, 2015, 2020) reflect on innovation in educational processes and practices. Baronnet recognizes that it is necessary to deepen the understanding of these issues. However, neither of them conceptualizes innovation in autonomous education, nor do they analyze Zapatismo in terms of an alternative to development, but rather in terms of the importance of critical political praxis and the need for a radical social transformation.
In addition, they focus on the decolonial aspects of autonomous Zapatista education, and on the importance of epistemic referents in educational processes as generators of creative potentiality through their language and their Mayan cosmovisions. Escobar (2017: 151-164) proposes designs for processes of transition, autonomy, and the orientation of social change toward sustainability from a social innovation approach (Manzini 2015). Although it is unlikely that professionals or academics can help in the construction of Zapatista autonomy, they could analyze the autonomous collective designs co-created from the ethnic and ecological diversity across Zapatista territories (Escobar 2017). However, it is necessary to build a specific theoretical framework of innovation beyond the existing Western conceptions of social innovation or grassroots innovation, and from the relational ontologies and cosmologies of the indigenous and peasant societies that are engaged in the creation of a pluriverse of alternatives to development, as observed in several Latin American experiences (Escobar 2011, 2014).

Grassroots innovation may play a key role in the design and everyday construction of alternatives to development and pluriversal paths

In this paper we have identified grassroots innovations and assessed how they may contribute to building Zapatismo, a specific alternative to development in Chiapas, Mexico, by analyzing the case of autonomous Zapatista education. We have analyzed how new knowledges, practices, beliefs, technologies, norms, institutions and programs are created through this autonomous educational system, which appears to be a constant source of grassroots innovation. This alternative to the national system of education enables the collective acquisition and learning of knowledge and skills that are key to achieving more just and sustainable socionatures, which is a central political outcome of Zapatismo. It is important to emphasize that the pedagogical conception of an educational process from the Zapatista perspective exerts a radical critique of the colonial character of the official Mexican educational system. Through this case study we have learned that grassroots innovations are more intangible than tangible during the construction of Zapatista political and territorial autonomy, consisting of self-organized and self-managed collective practices that seek radical transformations for better living, and that are based on indigenous Mayan cosmovisions, the dialogue of intercultural knowledge in the assemblies and the Good-Government Councils in the Zapatista Caracoles, and a more horizontal redistribution of power from the grassroots level. We have also observed that the spread of the grassroots innovations present in Zapatismo and its autonomous education fosters new and expanded networks of solidarity and anti-systemic resistance among national and international social movements and collectives (e.g., adherents to the Sixth Declaration of the Lacandon Rainforest of the EZLN and sympathizers anywhere on Earth), thus contributing to healthier, more just, ethical, and ecologically sustainable ways of life that enrich the pluriverse. In addition, we have unveiled new collective designs and educational-pedagogical conceptions in the innovative autonomous educational practices.
These practices have helped advance the Zapatistas as new historical-political subjects that are better equipped not just to resist the neoliberal development project orchestrated by the Mexican State in alliance with other governments and multilateral and financial institutions, but to actively transform and improve their reality. In imagining, designing and materializing their own world through a large and diverse array of radical epistemic, ontological and political building blocks, the Zapatistas' grassroots innovations are key to the everyday construction of Zapatismo as part of the pluriverse. Based on our work, we argue that "grassroots innovation for the pluriverse" could be understood as new ideas, processes, autonomous designs and transitions, and principles of collective ethical-political life that are transformed into new forms of political and territorial organization, knowledge and learning strategies, social practices, more horizontal relationships, multi-scale networks, and sustainable coexistence with more-than-human natures in contexts of social and environmental struggle by grassroots movements and communities across the global South. In this sense, grassroots innovation for the pluriverse can be distinguished by its active search for a rupture with the ideologies of capitalist development. It does so by creating solutions that explicitly question the central assumptions of the hegemonic development discourse, and by encompassing a set of ethics and values that are radically different from those underpinning the current capitalist system. This is partly because grassroots innovation in alternatives to development is often embedded in indigenous cosmologies and relational ontologies. Finally, we suggest that using grassroots innovation as a conceptual lens can be useful for analyzing the autonomous societal designs of grassroots groups seeking to transition toward more socially and ecologically just societies. Future research should be oriented towards deepening the theoretical conceptualization of grassroots innovation for the pluriverse and further assessing its potential in specific experiences of alternatives to development. Such efforts would, in our view, contribute to a better understanding of how such alternatives are designed and constructed, and how they can lead to large-scale societal transformations and transitions to just sustainabilities, particularly in contexts of the global South, where most such alternatives are flourishing. In addition, it would be important to create new methodological approaches for a more consistent identification and operationalization of the analysis of grassroots innovation in empirical case studies. This methodological improvement would allow for undertaking comparative analyses across different pluriversal paths which, in turn, would improve the construction of a theoretical framework of grassroots innovation for the pluriverse.
Safety and efficacy of aryl-substituted primary alcohol, aldehyde, acid, ester and acetal derivatives belonging to chemical group 22 when used as flavourings for all animal species

Abstract

Following a request from the European Commission, the EFSA Panel on Additives and Products or Substances used in Animal Feed (FEEDAP) was asked to deliver a scientific opinion on the safety and efficacy of 18 compounds belonging to chemical group (CG) 22. They are currently authorised as flavours in food. The FEEDAP Panel concludes that: cinnamaldehyde [05.014] is safe at the maximum use level of 125 mg/kg complete feed for salmonids, veal calves and dogs, and at 25 mg/kg for the remaining target species; cinnamyl alcohol [02.017], 3-phenylpropan-1-ol [02.031], 3-(p-cumenyl)-2-methylpropionaldehyde [05.045], α-methylcinnamaldehyde [05.050], 3-phenylpropanal [05.080], cinnamic acid [08.022], cinnamyl acetate [09.018], cinnamyl butyrate [09.053], 3-phenylpropyl isobutyrate [09.428], cinnamyl isovalerate [09.459], cinnamyl isobutyrate [09.470], ethyl cinnamate [09.730], methyl cinnamate [09.740] and isopentyl cinnamate [09.742] are safe at the proposed maximum use level of 5 mg/kg complete feed for all target species; 2-phenylpropanal [05.038], α-pentylcinnamaldehyde [05.040] and α-hexylcinnamaldehyde [05.041] are safe at the proposed maximum dose level of 5 mg/kg complete feed for all target species except cats, for which 1 mg/kg is safe. No safety concern would arise for the consumer from the use of these compounds up to the highest proposed level in feeds. Irritation and sensitisation hazards for skin and irritation for eye are recognised for the majority of the compounds under application. Respiratory exposure may also be hazardous. For the majority of the compounds belonging to CG 22, the maximum proposed use levels are considered safe for the environment. For α-pentylcinnamaldehyde and α-hexylcinnamaldehyde, a use level up to 0.1 mg/kg feed would not cause a risk for the terrestrial and freshwater compartments. Because all the compounds under assessment are used in food as flavourings and their function in feed is essentially the same as that in food, no further demonstration of efficacy is necessary.

Regulation (EC) No 1831/2003 establishes the rules governing the Community authorisation of additives for use in animal nutrition. In particular, Article 4(1) of that Regulation lays down that any person seeking authorisation for a feed additive or for a new use of a feed additive shall submit an application in accordance with Article 7, and in addition, Article 10(2) of that Regulation also specifies that for existing products within the meaning of Article 10(1), an application shall be submitted in accordance with Article 7, within a maximum of 7 years after the entry into force of this Regulation. According to Article 7(1) of Regulation (EC) No 1831/2003, the Commission forwarded the application to the European Food Safety Authority (EFSA) as an application under Article 4(1) (authorisation of a feed additive or new use of a feed additive) and under Article 10(2) (re-evaluation of an authorised feed additive). During the course of the assessment, the applicant withdrew the application for the use of chemically defined flavourings in water for drinking. EFSA received directly from the applicant the technical dossier in support of this application. The particulars and documents in support of the application were considered valid by EFSA as of 20 September 2010.
According to Article 8 of Regulation (EC) No 1831/2003, EFSA, after verifying the particulars and documents submitted by the applicant, shall undertake an assessment in order to determine whether the feed additive complies with the conditions laid down in Article 5. EFSA shall deliver an opinion on the safety for the target animals, consumer, user and the environment, and on the efficacy of cinnamyl alcohol [02.017] and the other compounds under application.

Additional information

All 19 substances have been assessed by the Joint Food and Agriculture Organization of the United Nations (FAO)/World Health Organization (WHO) Expert Committee on Food Additives (JECFA; WHO, 2005) and were considered safe for use in food. No acceptable daily intake values were established. Subsequently, the EFSA Panel on Food Contact Materials, Enzymes, Flavourings and Processing Aids (CEF) assessed the same compounds and concluded that 18 out of the 19 compounds under application do not give rise to safety concerns when used as flavours in food (EFSA, 2008a,b, 2009a), but raised a concern for genotoxicity for 5-methyl-2-phenylhex-2-enal [05.099] and requested additional genotoxicity data (EFSA, 2008a; EFSA CEF Panel, 2013). Consequently, the FEEDAP Panel will not proceed with an assessment of this compound until the outstanding issue has been addressed. The remaining 18 compounds are currently listed in the European Union (EU) database of flavouring substances and in the EU Register of Feed Additives, and thus authorised for use in food and feed in the EU. They have not been previously assessed by EFSA as feed additives. Regulation (EC) No 429/2008 allows substances already approved for use in human food to be assessed with a more limited procedure than for other feed additives. However, the use of this procedure is always subject to the condition that the food safety assessment is relevant to the use in feed.

2. Data and methodologies

Data

The present assessment is based on data submitted by the applicant in the form of a technical dossier in support of the authorisation request for the use of the compounds belonging to CG 22 as feed additives. The technical dossier was prepared following the provisions of Article 7 of Regulation (EC) No 1831/2003, Regulation (EC) No 429/2008 and the applicable EFSA guidance documents. The EFSA Panel on Additives and Products or Substances used in Animal Feed (FEEDAP) has sought to use the data provided by the applicant together with data from other sources, such as previous risk assessments by EFSA or other expert bodies, peer-reviewed scientific papers and experts' knowledge, to deliver the present output. EFSA has verified the European Union Reference Laboratory (EURL) report as it relates to the methods used for the control of flavourings of the 'aryl-substituted primary alcohol/aldehyde/acid/ester/acetal derivatives, including unsaturated ones' in animal feed. The Executive Summary of the EURL report can be found in Annex A.
Methodologies

The approach followed by the FEEDAP Panel to assess the safety and the efficacy of 'aryl-substituted primary alcohol/aldehyde/acid/ester/acetal derivatives, including unsaturated ones' is in line with the principles laid down in Regulation (EC) No 429/2008 and the relevant guidance documents: Guidance for the preparation of dossiers for sensory additives (EFSA FEEDAP Panel, 2012a), Technical Guidance for assessing the safety of feed additives for the environment (EFSA, 2008c), Guidance for the preparation of dossiers for additives already authorised for use in food (EFSA FEEDAP Panel, 2012b), Guidance for establishing the safety of additives for the consumer (EFSA FEEDAP Panel, 2012c) and Guidance on studies concerning the safety of use of the additive for users/workers (EFSA FEEDAP Panel, 2012d).

Characterisation of the flavouring additives

The molecular structures of the 18 additives under application are shown in Figure 1 and their physicochemical characteristics in Table 1. These substances are produced by chemical synthesis. Batch-to-batch variation data were provided for five batches of each additive except 2-phenylpropanal [05.038], for which only one batch was available due to the low use volume. The content of the active substance for all compounds exceeded the JECFA specifications (Table 2). Potential contaminants are considered as part of the product specification and are monitored as part of the Hazard Analysis and Critical Control Point procedure applied by all consortium members. The parameters considered include residual solvents, heavy metals and other undesirable substances. However, no evidence of compliance was provided for these parameters.

Stability

The shelf-life for the compounds under assessment ranges from 12 to 24 months when stored in closed containers under recommended conditions. This assessment is made on the basis of compliance with the original specification over this storage period.

Conditions of use

The applicant proposes the use of all of the 18 additives in feed for all animal species without withdrawal. For cinnamaldehyde [05.014], the applicant proposes a normal use level of 25 mg/kg feed and a high use level of 125 mg/kg. For the remaining 17 additives, the applicant proposes a normal use level of 1 mg/kg feed and a high use level of 5 mg/kg.

Safety

The assessment of safety is based on the highest use level proposed by the applicant (125 mg/kg complete feed for cinnamaldehyde and 5 mg/kg complete feed for the remaining compounds).

3.2.1. Absorption, distribution, metabolism and excretion (ADME) and residue studies

3.2.1.1. Uptake, distribution and excretion

[14C]-Radiolabelled cinnamyl alcohol, cinnamaldehyde and cinnamic acid (335, 330 and 370 mg/kg body weight (bw), respectively) were individually applied by gavage to male Fischer 344 rats. Between 77% and 83% of the radioactivity was excreted in the urine within 24 h, and 0.9-16% occurred in the faeces. More than 90% of the administered dose of any of the three substances was recovered in the urine and faeces within 72 h. Administration of the same compounds to groups of CD-1 mice by intraperitoneal (i.p.) injection resulted in a similar pattern of excretion in the urine and faeces at 24 h (75-93%) and 72 h (> 93%) (Nutley, 1990; as quoted by WHO, 2001b).
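These excretion time courses are consistent with approximately first-order elimination. As a minimal illustrative sketch (the 6 h half-life used below is an assumed, representative value and not a figure reported in these studies), the cumulative fraction of an absorbed dose eliminated by time t follows from a single rate constant:

\[
F(t) \;=\; 1 - e^{-kt} \;=\; 1 - 2^{-t/t_{1/2}}, \qquad k = \frac{\ln 2}{t_{1/2}}.
\]

For an assumed whole-body half-life of \( t_{1/2} = 6\,\mathrm{h} \), this gives \( F(24\,\mathrm{h}) = 1 - 2^{-24/6} \approx 0.94 \), i.e. roughly 94% eliminated within 24 h, of the same order as the 77-83% urinary recovery (plus faecal excretion) reported above.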
The tissue distribution and excretion of cinnamaldehyde were studied in male Fischer 344 rats pretreated with single daily oral doses of 5, 50 or 500 mg/kg bw of cinnamaldehyde by gavage for 7 days, followed by the same single oral dose of [14C]-cinnamaldehyde 24 h later. Further groups received no pretreatment but the same single doses. After 24 h, > 80% of the radiolabel was recovered in the urine and < 7% in the faeces from all rats, regardless of dose; after 72 h, the recovery in the urine and faeces was about 90-95%. The radioactivity level in blood was less than 0.15% of the dose after 24 h for all doses tested. The radiolabel was distributed primarily to the gastrointestinal tract, kidneys and liver in all groups. A small amount of the dose was distributed to fat (0.2-0.9%) and < 0.3% cumulatively to the brain, heart, lung, spleen and testes. The elimination half-life for [14C] was 5-9 h for whole blood and liver, 5-8 h for muscle, and considerably longer for fat tissue, ranging from 17.3 h at 5 mg/kg to 73 h at 500 mg/kg. Radiolabel was still detectable in the fat of animals killed 3 days after receiving 50 or 500 mg cinnamaldehyde/kg bw (Sapienza et al., 1993). In another study, the disposition and excretion of [14C]-cinnamaldehyde administered to Fischer 344 rats and CD-1 mice (2 or 250 mg/kg bw) were not sex- and dose-related (Peters and Caldwell, 1994). To investigate the effect of dose on the disposition of cinnamic acid, five doses in the range 0.08-400 mg/kg bw of [14C]-cinnamic acid were administered orally to male Fischer 344 rats or by i.p. injection to male CD-1 mice. Excretion of radiolabel was essentially the same in both species and was not influenced by dose size. After 24 h, the recovery in the urine was 73-88% for rats and 78-93% for mice; after 72 h, 85-100% of the radiolabel was recovered from rats and 89-100% from mice, mainly in the urine. Only trace amounts of radiolabel were present in the carcass (< 1%), indicating that cinnamic acid was readily and quantitatively excreted at all doses (Caldwell and Nutley, 1986). Cinnamyl alcohol, cinnamaldehyde and cinnamic acid therefore appear to undergo rapid absorption and excretion, independently of dose up to 250 mg/kg bw, species, sex and mode of administration. The pharmacokinetics of cinnamic acid was also studied in human volunteers. Eleven individuals each received a single intravenous dose of cinnamic acid equivalent to 5 mg/kg bw. Analysis of blood showed that none of the administered dose was detectable after 20 min (WHO, 2001b). Rapid absorption and excretion were also demonstrated for the saturated 3-phenylpropionic acid in humans (Pollit, 1974).

Metabolism

Generally, the enzymatic pathways involved in the metabolism of CG 22 compounds are: (i) hydrolysis of esters by carboxylesterases, producing the alcohol (cinnamyl or analogue) and the corresponding acid; (ii) oxidation of alcohols and aldehydes to acids; (iii) β-oxidation of the side chain of acids, leading to benzoic acid; (iv) conjugation of acids with glucuronic acid or conjugation of aldehydes with glutathione (minor pathways); and (v) conversion of cinnamic acid to acyl-CoA esters with subsequent conjugation with amino acids and elimination in urine (WHO, 2001b; EFSA, 2008b).

Hydrolysis of esters

An oral dose of 240 mg/kg bw methyl cinnamate was rapidly and almost completely (95%) absorbed from the gut of rats. It was partially hydrolysed to cinnamic acid in the stomach (9%) and subsequently in the intestinal wall.
The rate of absorption of cinnamic acid and methyl cinnamate from the gut was similar. No ester was detected in the peripheral blood of dosed rabbits or rats. Only traces were detected in portal and heart blood taken from the rats, indicating that almost complete hydrolysis of methyl cinnamate had occurred during intestinal absorption. The ability of intestinal esterases to hydrolyse methyl cinnamate was also demonstrated in vitro (Fahelbum and James, 1977). Ethyl cinnamate administered subcutaneously to a cat also produced only cinnamic acid metabolites in the urine (Dakin, 1909). The other esters considered in this opinion are also likely to be hydrolysed and thus transformed to the respective alcohols and acids.

Oxidative and phase II metabolism

When cinnamyl alcohol was administered orally to four rats at a dose of 335 mg/kg bw, 52% was recovered in the urine within 24 h as hippuric acid. Ten minor metabolites cumulatively accounted for about 10% of the dose. In mice, hippuric acid was the major urinary metabolite of cinnamyl alcohol when administered by i.p. injection (Nutley, 1990, as quoted by WHO, 2001b). To investigate the effects of sex, dose size and route of administration on the excretion pattern and metabolic profile, trans-[14C]-cinnamaldehyde was given at doses of 2 and 250 mg/kg bw by i.p. injection to male and female Fischer 344 rats and CD-1 mice, and at doses of 250 mg/kg bw by oral gavage to male rats and mice only. In both species, and independently of sex, dose size and administration route, the major urinary metabolites were formed from oxidation of cinnamaldehyde to cinnamic acid, which was subsequently oxidised to benzoic acid. The major urinary metabolite was hippuric acid (71-75% in mice and 73-87% in rats). Small amounts of 3-hydroxy-3-phenylpropionic acid (0.4-4%), benzoic acid (0.4-3%) and benzoyl glucuronide (0.8-7.0%) were also detected. Cinnamic acid was excreted as the glycine conjugate (4-13%) in mice only. In both species, approximately 6-9% of the dose was excreted within 24 h as glutathione conjugates of cinnamaldehyde (Peters and Caldwell, 1994). In another study, examining specifically the metabolites produced by conjugation of cinnamaldehyde with glutathione, approximately 15% of a dose of 250 mg/kg bw administered to rats by gavage was excreted in the urine as mercapturic acid derivatives. Two sulfur-containing metabolites were isolated from rat urine and identified as N-acetyl-S-(1-phenyl-3-hydroxypropyl)cysteine and N-acetyl-S-(1-phenyl-2-carboxyethyl)cysteine in a 4:1 ratio. The hydroxypropyl mercapturate was also isolated from the urine of rats administered cinnamyl alcohol (125 mg/kg bw) and accounted for 9% of the dose (Delbressine et al., 1981). [14C]-Cinnamic acid (six doses in the range 0.08-400 mg/kg bw) was administered orally to male Fischer 344 rats or by i.p. injection to male CD-1 mice. The metabolites identified at all doses included the major metabolite hippuric acid (44-77%), benzoyl glucuronide, 3-hydroxy-3-phenylpropionic acid, benzoic acid and unchanged cinnamic acid. Acetophenone and cinnamoylglycine were detected only in the urine of mice. As the dose size increased (> 80 mg/kg bw), the percentage of hippuric acid decreased in rat urine while the percentages of benzoyl glucuronide (0.5-5%) and benzoic acid (0.4-2%) increased, suggesting that some limitation of the glycine conjugation pathway occurs at these doses.
The fact that the excretion of 3-hydroxy-3-phenylpropionic acid differed little over the dose range (0.2-0.9%) supports the conclusion that the capacity of the β-oxidation pathway is not limited at doses of cinnamic acid up to 400 mg/kg bw in male rats. At all doses, mice excreted only a small proportion of benzoyl glucuronide, indicating that this conjugation reaction is of minimal importance in this species (Nutley et al., 1994). After administration of a single oral dose of ring-deuterated 3-phenylpropionic acid to one individual, the total amount of administered deuterobenzoic acid was isolated from alkaline-hydrolysed urine within 100 min (Pollit, 1974). This finding shows that the side chain of the saturated counterpart of cinnamic acid is also degraded to the corresponding benzoic acid, which is further metabolised mainly to hippuric acid. Branched side chains of cinnamyl derivatives can alter the oxidative metabolism. Compounds containing α-methyl substituents are extensively metabolised via β-oxidation, yielding mainly the corresponding hippuric acid derivative. When β-oxidation is inhibited to some extent by the presence of larger substituents located at the α- or β-position, the carboxylic acid may be directly conjugated with glucuronic acid and excreted (WHO, 2001b). A benzoic acid metabolite was isolated from the urine of dogs given either α-methylcinnamic acid or α-methylphenylpropionic acid (Kay and Raper, 1924, as quoted by WHO, 2001b). While α-methylcinnamic acid undergoes oxidation to benzoic acid, α-ethyl- and α-propylcinnamic acids are excreted as such (Carter, 1941, as quoted by WHO, 2001b). α-Ethylcinnamic alcohol and α-ethylcinnamaldehyde administered orally to rabbits resulted in urinary excretion of α-ethylcinnamic acid and of small amounts of benzoic acid (Fischer and Bielig, 1940, as quoted by WHO, 2001b). These observations suggest that α-methylcinnamaldehyde undergoes oxidation to benzoic acid, while higher homologues may be excreted primarily unchanged or as the conjugated form of the cinnamic acid derivative. Controlled ADME studies of CG 22 compounds in target species are not available. Out of the 18 compounds under assessment, eight are esters and are expected to be hydrolysed. Carboxylesterases, responsible for the hydrolysis of esters, are present in the gut, especially of ruminants, and in the liver of several animal species (cattle, pigs, broiler chicks, rabbits and horses), catalysing the hydrolysis of esters to the respective alcohols and acids (Gusson et al., 2006). Carboxylesterase activity also plays a significant role in detoxification processes in fish (Li and Fan, 1997; Di Giulio and Hinton, 2008). After hydrolysis, the metabolism of the compounds under assessment can be represented by the fate of cinnamic acid and its analogue 3-phenylpropionic acid, resulting from oxidation of the precursor alcohols and aldehydes and from hydrolysis of esters. Carboxylic acid:CoA ligases, which convert these acids to the respective CoA esters (the first step towards β-oxidation and amino acid conjugation), were shown to be expressed in the liver and kidney of ruminants (Vessey and Hu, 1995), the gut of pigs (Vessey, 2001), and the liver and kidney of fish (Schlenk et al., 2008). In ruminants, the metabolism of these compounds largely starts in the rumen. When cinnamic acid was infused in the rumen or abomasum of ruminants, 70% was recovered in the urine as benzoic acid conjugates (Martin, 1982a).
In the rumen, 3-phenylpropionic acid, originating from microbial metabolism of hydroxycinnamic acids, is absorbed, oxidised in the organism and eliminated as benzoic acid in the urine (Martin, 1982b). In sheep, Pagella et al. (1997) showed that 3-phenylpropionic acid (the oxidation product of the two CG 22 compounds 3-phenylpropanol and 3-phenylpropanal) infused into the rumen was excreted in the urine mainly as hippuric acid. Conjugation of carboxylic acids with amino acids exhibits some species specificity. After oral administration of 50 mg/kg of radiolabelled benzoate to several animal species, rabbit, pig, cat, and dog eliminated almost all the initial dose in the urine after 24 h as hippuric acid. In the dog, approximately 20% was excreted as benzoyl glucuronide (Bridges et al., 1970). Many other target species can also form glucuronides, although this is generally a minor route of excretion. Several types of birds, including chickens, excrete benzoic acid as ornithuric acid (Baldwin et al., 1960; Letizia et al., 2005). In fish, benzoic acid is conjugated mainly with taurine (Schlenk et al., 2008). Although a minor route, glucuronide derivatives can be formed, and conjugation with glucuronic acid can be carried out by all target species (Watkins and Klaassen, 1986; James, 1987; Gusson et al., 2006). Therefore, mammals, fish and birds have the ability to metabolise and excrete the flavouring substances from CG 22, and there is no evidence that they or their metabolites would accumulate in tissues. The FEEDAP Panel notes that for feline species the capacity for conjugation is limited (Shrestha et al., 2011; Court, 2013).

In a 13-week study, the potential toxicity of trans-cinnamaldehyde [05.014] was investigated in Fischer 344/N rats. Groups of 10 male and 10 female rats were given diets containing 0%, 1.25%, 2.5%, 5.0% or 10% microencapsulated trans-cinnamaldehyde, corresponding to 0, 620, 1,250, 2,500 or 5,000 mg/kg bw per day. There were no early deaths and no treatment-related clinical toxicity. Average body weights of animals at the three higher doses were decreased compared to controls. The food consumption of treated animals was depressed during the first week, possibly because of unpalatability. No effects were seen on haematological parameters. A treatment-related increase in bile salt concentration and alanine transaminase (ALAT) activity (in males and females at the highest dose) suggested mild cholestasis. Necropsies were performed on all survivors, and tissues from animals at the two highest doses and the control group were examined histologically. Microscopic examination showed no morphological alterations to the liver. Gross and microscopic examination of the stomach and forestomach indicated irritation at all doses of trans-cinnamaldehyde. From this study, a no observed effect level (NOEL) of 620 mg/kg bw per day was derived. However, this NTP study is unpublished and is available only in summary form as described in JECFA (WHO, 2001b). In a second NTP report (2004), trans-cinnamaldehyde was tested in a repeated-dose subchronic study of 14 weeks and in a 2-year carcinogenicity study in both rats and mice. In the subchronic study, 20 male and female F344/N rats and B6C3F1 mice were fed microencapsulated trans-cinnamaldehyde for 14 weeks. The daily doses were approximately 650, 1,320, 2,550, and 5,475 mg/kg bw per day for male mice and 625, 1,380, 2,680, and 5,200 mg/kg bw per day for female mice.
The corresponding doses for rats were approximately 275, 625, 1,300, and 4,000 mg/kg bw per day for males and 300, 570, 1,090 and 3,100 mg/kg bw per day for females. Another 20 rats and mice received untreated feed (untreated controls) or feed containing placebo microcapsules (vehicle controls). A no observed adverse effect level (NOAEL) of 625 mg/kg bw, the lowest dose tested in mice, was derived based on olfactory epithelial degeneration of the nasal cavity in mice given higher doses. From the rat study, a NOAEL of 275 mg/kg bw can be derived based on a treatment-related decrease in ALAT and serum albumin in females and multifocal to diffuse white nodules of the forestomach mucosa in males and females exposed to higher doses. In the 2-year feeding study, groups of 50 male and 50 female F344/N rats and B6C3F1 mice were fed diets containing microencapsulated trans-cinnamaldehyde at doses of 50, 100 or 200 mg/kg bw per day for male and female rats and 125, 270 or 540 (males) or 570 (females) mg/kg bw per day for mice. Under the conditions of the study, there was no evidence of carcinogenic activity of trans-cinnamaldehyde in male or female F344/N rats or in male or female B6C3F1 mice (NTP, 2004). In a subchronic study with Osborne-Mendel rats, groups of 10 males and 10 females were maintained on a diet containing cinnamaldehyde (isomer not specified) at a concentration of 0 (control), 1,000, 2,500 or 10,000 mg/kg, equivalent to 50, 120 and 500 mg/kg bw per day, for 16 weeks. No differences were observed in body weight, food intake, haematological parameters, organ weights or gross pathology. Histological examination of three to four male and female animals at the high dose revealed slight hepatic cellular swelling and slight hyperkeratosis of the squamous epithelium of the stomach. The NOEL was therefore 120 mg/kg bw per day (Hagan et al., 1967; not available, quoted by JECFA, FAS46). This experiment confirms the findings of the NTP study, which showed that the NOAEL is below 500 mg/kg bw. The FEEDAP Panel retains the NOAEL of 275 mg/kg bw per day derived from the 14-week study with trans-cinnamaldehyde (NTP, 2004) as a group NOAEL for cinnamaldehyde and related cinnamyl derivatives. 2-Phenylpropanal [05.038] was administered to rats (males/females, 15 animals/group) by gavage at doses of 0, 10, 50 and 500 mg/kg bw per day for 15 weeks (Pelling et al., 1976). No differences were observed in body weight gain, feed consumption or renal function. At the highest dose tested, an increase in the relative weight of several organs (liver, kidney, stomach, pituitary gland in both sexes, heart and spleen in females only) was observed. These changes were not associated with histopathological changes. From this study, a NOAEL of 50 mg/kg bw per day could be derived based on increased relative organ weights at the highest dose level. In a 14-week study in rats (males/females, 15 animals/group), α-pentylcinnamaldehyde [05.040] was administered with the diet at doses of 0, 80, 400 and 4,000 mg/kg feed (equivalent to intakes of 6.1/6.7, 29.9/34.9 and 287.3/320.3 (males/females) mg/kg bw per day). An increase in the relative liver and kidney weights was observed at the highest dose tested, but it was not associated with any histopathological changes. The NOAEL derived from this study is 30 mg/kg bw per day (Carpanini et al., 1973). Secondary references refer to a 90-day study in rats (five males/five females) fed α-methylcinnamaldehyde [05.050] at doses of 0, 58, 120 or 220 mg/kg bw for 90 days.
Growth and food intake were recorded weekly, as were the results of regular examinations for physical appearance, behaviour and efficiency of food use. At week 12, urine samples were collected and analysed for the presence of sugar and albumin, and blood samples were taken for determination of haemoglobin. No statistically significant differences were found between treated and control animals, and no differences in the liver or kidney weights were seen. Thus, the NOAEL for this study is 220 mg/kg bw, the highest dose applied (Trubeck Laboratories, 1958c, as described by WHO, 2001b). Although the study report was not available, the NOAEL of 220 mg/kg bw is supported by the NOAEL of 275 mg/kg bw per day taken for cinnamaldehyde and related compounds.

Safety for the target species

The first approach to the safety assessment for target species takes account of the applied use levels in animal feed relative to the maximum reported exposure of humans, on the basis of the metabolic body weight. The data for human exposure in the EU (EFSA, 2009a,b) range from 2.4 to 2,400 µg/person per day, corresponding to 0.11-112.3 µg/kg bw^0.75 per day. Table 3 summarises the results of the comparison with human exposure for representative target animals. The body weight of target animals is taken from the default values shown in Table 4. Table 3 shows that for all compounds the intake by the target animals exceeds that of humans resulting from use in food. As a consequence, safety for the target species at the feed concentration applied cannot be derived from the risk assessment for food use. As an alternative, the maximum feed concentration considered as safe for the target animal can be derived from the lowest NOAEL available. Applying a UF of 100 to the respective NOAELs, the maximum safe intake for the target species was derived for the 18 compounds following the EFSA Guidance for sensory additives (EFSA FEEDAP Panel, 2012a), and thus the maximum safe feed concentration was calculated (Tables 4 and 5). For 3-phenylpropanal [05.080] and 3-(p-cumenyl)-2-methylpropionaldehyde [05.045], the UF is increased by a factor of 2 because of presumed greater reactivity compared to cinnamaldehyde. The UF for cats is increased by an additional factor of 5 because of the reduced capacity of glucuronidation and glycine conjugation (Court and Greenblatt, 1997).

Safety for the consumer

The safety for the consumer of the compounds in CG 22, used as food flavours, has already been assessed by JECFA (WHO, 2001a) and EFSA (EFSA, 2009a,b). All these compounds are presently authorised as food flavourings without limitations. Given the use levels of CG 22 compounds to be applied in feed and the extensive metabolism and excretion in target animals (see Section 3.1.1), the FEEDAP Panel considers that the possible residues in food derived from animals fed with these flavourings would not appreciably increase the human intake levels of these compounds. Consequently, no safety concern would arise for the consumer from the use of these 18 compounds up to the highest proposed level in feeds.

Safety for the user

No specific data on the safety for the user were provided. In the material safety data sheets, hazards for skin and eye contact and respiratory exposure are recognised for the majority of the compounds under application. Most are classified as irritating to the respiratory system.
The available literature on cinnamaldehyde indicates that irritating and allergic reactions are commonly reported among those handling the product in the workplace (Opdyke, 1979) or otherwise exposed (Rademaker and Forsyth, 1989).

Safety for the environment

Additions of naturally occurring substances that will not result in a substantial increase in their concentration in the environment are exempt from further assessment; these compounds are therefore excluded from further assessment. For the remaining two compounds, namely α-pentylcinnamaldehyde [05.040] and α-hexylcinnamaldehyde [05.041], the predicted environmental concentration in soil (PECsoil) was calculated based on the maximum proposed use level (Table 5) and compared with the trigger values for the compartments set in phase I of the EFSA guidance (EFSA, 2008c). PECsoil values are above the threshold of 10 µg/kg (EFSA, 2008c). The PEC for pore water (PECpore water) depends on sorption, which is different for each compound. For these calculations, the substance-dependent constants (organic carbon sorption constant (Koc), molecular weight, vapour pressure and solubility) are needed. These were estimated from the Simplified Molecular Input Line Entry Specification (SMILES) notation of the chemical structure using EPIWEB 4.1 (Table 6). This program was also used to derive the SMILES notation from the CAS numbers. The Koc value derived from the first-order molecular connectivity index was used, as recommended by the EPIWEB program (Table 7). The half-life in soil (DT50) was calculated using BioWin3 (Ultimate Survey Model), which gives a rating number r. This rating number was translated into a half-life using the regression of Arnot et al. (2005), the general regression used to derive estimates of aerobic environmental biodegradation half-lives from the BioWin3 model output. The two substances in Table 6 have a PECpore water above 0.1 µg/L and a PECsoil above 10 µg/kg. Therefore, these two substances are subjected to a phase II risk assessment. In the absence of experimental data, the phase II risk assessment was performed using ECOSAR v1.11, which estimates the half-maximal effective concentration (EC50) or lethal concentration (LC50) for ecotoxicologically relevant organisms from the SMILES notation of the substance. The predicted no effect concentration (PNEC) for the aquatic compartment (PNECaquatic) was derived from the lowest toxicity value for the freshwater environment by applying a UF of 1,000. For α-pentylcinnamaldehyde and α-hexylcinnamaldehyde, the maximum and the normal proposed use levels would result in a PEC/PNEC ratio for surface water > 1 (Table 8). For both compounds, a use level of 0.1 mg/kg feed would not cause a risk to the freshwater environment, as shown in Table 9. For both compounds, a use level of 0.1 mg/kg feed would result in a PECsoil below the trigger value; thus a risk for the terrestrial compartment is not expected. If used in fish feed at the highest proposed use level of 5 mg/kg complete feed in land-based aquaculture systems, none of the additives under assessment would result in a predicted environmental concentration of the additive (parent compound) in surface water (PECswaq) above the trigger value of 0.1 µg/L when calculated according to the guidance (EFSA, 2008c).
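For illustration, the screening logic used in this section can be sketched as follows. This is a minimal sketch, not the calculation performed in the opinion: the trigger values (10 µg/kg for soil, 0.1 µg/L for pore water/surface water) and the UF of 1,000 are taken from the text, while the function names and the numeric inputs in the example are hypothetical placeholders.

```python
def phase_i_triggered(pec_soil_ug_per_kg: float,
                      pec_porewater_ug_per_l: float) -> bool:
    """True if either phase I trigger value is exceeded, in which case the
    compound proceeds to a phase II risk assessment (EFSA, 2008c)."""
    return pec_soil_ug_per_kg > 10.0 or pec_porewater_ug_per_l > 0.1

def phase_ii_risk(pec_ug_per_l: float,
                  lowest_tox_ug_per_l: float,
                  uf: float = 1000.0) -> bool:
    """True if PEC/PNEC > 1, where PNEC is the lowest EC50/LC50 for the
    freshwater environment divided by the uncertainty factor."""
    pnec = lowest_tox_ug_per_l / uf
    return pec_ug_per_l / pnec > 1.0

# Placeholder example values, not results from this opinion
if phase_i_triggered(pec_soil_ug_per_kg=25.0, pec_porewater_ug_per_l=0.4):
    print("risk for surface water:",
          phase_ii_risk(pec_ug_per_l=0.4, lowest_tox_ug_per_l=120.0))
```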
For sea cages, a dietary concentration of 0.05 mg/kg would ensure that the threshold for the predicted environmental concentration of the additive (parent compound) in sediment (PECsed) of 10 µg/kg is not exceeded when calculated according to the EFSA guidance (EFSA, 2008c).

3.2.6.1. Conclusions on safety for the environment

For all the compounds belonging to CG 22, except α-pentylcinnamaldehyde and α-hexylcinnamaldehyde, the maximum proposed use levels (see Section 3.2.3) are considered safe for the environment. For the marine environment, the safe use level is estimated to be 0.05 mg/kg feed. For α-pentylcinnamaldehyde and α-hexylcinnamaldehyde, a use level up to 0.1 mg/kg feed would not cause a risk for the terrestrial and freshwater compartments.

Efficacy

Since all 18 compounds are used in food as flavourings, and their function in feed is essentially the same as that in food, no further demonstration of efficacy is necessary.

Conclusions

No safety concern would arise for the consumer from the use of these compounds up to the highest proposed safe use levels in feeds. Irritation and sensitisation hazards for skin and irritation for eye are recognised for the majority of the compounds under application. Respiratory exposure may also be hazardous. For all the compounds belonging to CG 22, except α-pentylcinnamaldehyde and α-hexylcinnamaldehyde, the maximum proposed use levels are considered safe for the environment. For α-pentylcinnamaldehyde and α-hexylcinnamaldehyde, a use level up to 0.1 mg/kg feed would not cause a risk for the terrestrial and freshwater compartments. Because all the compounds under assessment are used in food as flavourings and their function in feed is essentially the same as that in food, no further demonstration of efficacy is necessary.

The Chemically Defined Flavourings – Group 22 (aryl-substituted primary alcohol/aldehyde/acid/ester/acetal derivatives, including unsaturated ones) in this application comprises nineteen substances, for which authorisation as feed additives is sought under the category "sensory additives", functional group 2(b) "flavouring compounds", according to the classification system of Annex I of Regulation (EC) No 1831/2003.

Documentation provided to EFSA

In the current application, submitted according to Article 4(1) and Article 10(2) of Regulation (EC) No 1831/2003, authorisation for all species and categories is requested. The flavouring compounds of interest have a purity ranging from 95% to 99% (90% for 3-(p-cumenyl)-2-methylpropionaldehyde). Mixtures of flavouring compounds are intended to be incorporated only into feedingstuffs or drinking water. The Applicant suggested no minimum or maximum levels for the different flavouring compounds in feedingstuffs. For the identification of volatile chemically defined flavouring compounds of CDG22 in the feed additive, the Applicant submitted a qualitative multi-analyte gas chromatography-mass spectrometry (GC-MS) method using Retention Time Locking (RTL), which allows a close match of retention times on GC-MS. By making an adjustment to the inlet pressure, the retention times can be closely matched to those of a reference chromatogram. It is then possible to screen samples for the presence of target compounds using a mass spectral database of RTL spectra. The Applicant maintained two FLAVOR2 databases/libraries (for retention times and for MS spectra) containing data for more than 409 flavouring compounds. These libraries were provided to the EURL.
The Applicant provided the typical chromatogram for the CDG22 compounds of interest. In order to demonstrate the transferability of the proposed analytical method (relevant for the method verification), the Applicant prepared a model mixture of flavouring compounds on a solid carrier to be identified by two independent expert laboratories. This mixture contained twenty chemically defined flavourings belonging to twenty different chemical groups, to represent the whole spectrum of compounds in use as feed flavourings with respect to their volatility and polarity. Both laboratories properly identified all the flavouring compounds in all the formulations. Since the substances of CDG22 are within the volatility and polarity range of the model mixture tested, the Applicant concluded that the proposed analytical method is suitable to determine qualitatively the presence of the substances from CDG22 in the mixture of flavouring compounds. Based on the satisfactory experimental evidence provided, the EURL recommends the GC-MS-RTL (Agilent-specific) method submitted by the Applicant for the official control, i.e. the qualitative identification of the individual (or mixtures of) flavouring compounds of interest in the feed additive. As no experimental data were provided by the Applicant for the identification of the active substance(s) in feedingstuffs and water, no methods could be evaluated. Therefore the EURL is unable to recommend a method for the official control to identify the active substance(s) of interest in feedingstuffs or water. Further testing or validation of the methods, to be performed through the consortium of National Reference Laboratories as specified by Article 10 (Commission Regulation (EC) No 378/2005), is not considered necessary.
The relationship between positive peritoneal cytology and the prognosis of patients with FIGO stage I/II uterine cervical cancer

Objective The purpose of this study was to assess whether peritoneal cytology has prognostic significance in uterine cervical cancer. Methods Peritoneal cytology was obtained in 228 patients with carcinoma of the uterine cervix (International Federation of Gynecology and Obstetrics [FIGO] stages IB1-IIB) between October 2002 and August 2010. All patients were negative for intraperitoneal disease at the time of their radical hysterectomy. The pathological features and clinical prognosis of cases of positive peritoneal cytology were examined retrospectively. Results Peritoneal cytology was positive in 9 patients (3.9%). Of these patients, 3/139 (2.2%) had squamous cell carcinoma and 6/89 (6.7%) had adenocarcinoma or adenosquamous carcinoma. One of the 3 patients with squamous cell carcinoma who had positive cytology had a recurrence at the vaginal stump 21 months after radical hysterectomy. All of the 6 patients with adenocarcinoma or adenosquamous carcinoma had disease recurrence during the follow-up period: 3 with peritoneal dissemination and 2 with lymph node metastases. There were significant differences in recurrence-free survival and overall survival between the peritoneal cytology-negative and cytology-positive groups (log-rank p<0.001). Multivariate analysis of prognosis in cervical cancer revealed that peritoneal cytology (p=0.029) and histological type (p=0.004) were independent prognostic factors. Conclusion Positive peritoneal cytology may be associated with a poor prognosis in adenocarcinoma or adenosquamous carcinoma of the uterine cervix. Therefore, the results of peritoneal cytology must be considered in postoperative treatment planning.

INTRODUCTION

Little is known about the prognostic implications of positive peritoneal cytology in cervical cancer [1]. The present retrospective study was undertaken to clarify the prognostic significance of peritoneal cytology in surgically treated patients with International Federation of Gynecology and Obstetrics (FIGO) stage IB-IIB cervical cancer.

MATERIALS AND METHODS

Between 2002 and 2010, 228 patients undergoing radical hysterectomy and pelvic lymph node dissection for FIGO stage IB-IIB cervical cancer were treated at Shizuoka Cancer Center Hospital. This study included patients who met the following criteria: proven invasive carcinoma of the uterine cervix, and FIGO stage IB, IIA, or IIB disease without para-aortic lymph node metastases. Para-aortic lymph nodes were evaluated by computed tomography (CT) and/or positron emission tomography (PET)-CT. All of the patients underwent radical abdominal hysterectomy. The patients had no macroscopic extrauterine disease disseminating over the surface of the peritoneum or organs in the abdominal cavity at the time of primary surgery. Patients with microscopic peritoneal dissemination in the abdominal cavity that was proven by pathological analysis of the adnexa were excluded. Those who had other simultaneous carcinomas or other epithelial tumors, including endometrial cancer, ovarian cancer, and tubal cancer, were also excluded. Cytopathologic diagnosis was performed according to the following procedure. Cytological specimens were obtained by laparotomy immediately upon entering the peritoneal cavity. Approximately 20 mL of sterile saline was instilled into the pelvis over the uterus and then aspirated with a syringe.
The samples were subjected to cytocentrifugation onto slide glasses at 1,500 rpm at room temperature for 60 seconds. After fixation with 95% ethanol, the following stains were applied: Papanicolaou, Alcian blue, Giemsa, and immunohistological stains for carcinoembryonic antigen and BER-EP4. Immunohistological staining was used as an ancillary diagnostic tool when the diagnosis was not clear with the Papanicolaou, Alcian blue, and Giemsa stains. Two cytologists independently examined all slides. Our standard surgical procedure for FIGO stage IB-IIB cervical cancer patients is abdominal radical hysterectomy and pelvic lymphadenectomy. Para-aortic lymph node biopsy was not performed. With respect to adjuvant therapy, patients with pelvic lymph node metastases or parametrial invasion received concurrent chemoradiotherapy (CCRT). Patients with 2 or more of 3 risk factors (lymphovascular space invasion, deep stromal invasion, and bulky tumor) received radiotherapy [8]. Patients with positive peritoneal cytology were treated under the same protocol as those with negative peritoneal cytology. The associations of positive peritoneal cytology with pathological features were evaluated by the Fisher exact test. Recurrence-free survival (RFS) and overall survival (OS) were calculated using the Kaplan-Meier method, and the survival curves were compared by the log-rank test. A p-value <0.05 was considered significant. Factors that were independently associated with survival in cervical cancer were identified by multivariate analysis using the Cox proportional hazards model.

RESULTS

Table 1 shows the characteristics of the 228 patients in this study. Of these, 139 had squamous cell carcinoma (SCC) and 89 had adenocarcinoma or adenosquamous carcinoma (ADC). The median follow-up period was 51 months (range, 4 to 115 months). Twenty-eight (23 SCC and 5 ADC) patients received platinum-based neoadjuvant chemotherapy. No patients received radiotherapy as neoadjuvant therapy. Peritoneal cytology was positive in 9/228 (3.9%) patients: 3/139 (2.2%) of SCC and 6/89 (6.7%) of ADC cases. Of the patients with positive peritoneal cytology, one received neoadjuvant chemotherapy. Among the cases with negative cytology, 27 patients received neoadjuvant chemotherapy. Of the ADC cases, one patient was lost to follow-up after 4 months. Table 2 shows the characteristics of patients with positive cytology. Of the 3 patients with SCC, 1 had FIGO stage IB1, and 2 had stage IIB cancer. Of the 6 patients with ADC, 3, 2, and 1 had stage IB1, IB2, and IIA cancer, respectively. With regard to histological type, 3 tumors were mucinous adenocarcinomas, 1 was a clear-cell adenocarcinoma, and 2 were adenosquamous carcinomas. After surgery, 5 patients received CCRT as adjuvant therapy: 4 patients received 4 cycles of cisplatin plus 5-fluorouracil therapy, and 1 patient received 6 cycles of cisplatin with whole pelvic irradiation. Two patients received chemotherapy alone as adjuvant therapy, consisting of 3 to 5 cycles of carboplatin and paclitaxel. One patient received radiotherapy to her whole pelvis. The associations between pathologic parameters and peritoneal cytology status are shown in Table 3. Positive peritoneal cytology was associated with lymph node metastases, lymphovascular space invasion, parametrial invasion, and deep stromal invasion (≥10 mm or ≥1/3). One of the 3 patients with SCC had a recurrence at the vaginal stump 21 months after radical hysterectomy and recovered completely.
All 5 patients with ADC (100%) who had positive cytology had recurrence during the follow-up period, within 10 months: 3 (60%) with peritoneal dissemination and 2 (40%) with lymph node metastases. On the other hand, 11 (13.3%) of the 83 ADC patients with negative cytology recurred, and only 1/11 (9.1%) had peritoneal dissemination. Patients with ADC with positive cytology showed a higher incidence of peritoneal dissemination (p=0.063). The 3-year RFS (cytology negative/positive) was 86.7%/37.5%, and OS was 94.4%/50.0%. When restricted to ADC cases, 3-year RFS was 88.1%/20.0%, and OS was 90.8%/20.0%. Significant differences in RFS and OS were found between the peritoneal cytology-negative and cytology-positive groups, both for total cases and when the analysis was limited to ADC cases (p<0.001 for both) (Figs. 1, 2). Table 4 shows the results of the Cox proportional hazards regression analysis. Peritoneal cytology and histological type were found to be independent prognostic factors (p=0.029 and 0.004, respectively), whereas lymph node metastases, lymphovascular space invasion, parametrial invasion, and deep stromal invasion were not.

DISCUSSION

The literature contains very few reports of cases of positive peritoneal cytology in cervical cancer. The rate of positive peritoneal cytology in cervical cancer, however, differs for SCC and ADC. Compared with a rate of 0% to 1.8% for SCC [1,3-5,9], it is more common in ADC, with a positive rate of 11% to 15% [1-3]. In the present study, the rates of positive peritoneal cytology were 2.2% for SCC and 6.7% for ADC, with no significant histological differences. However, there were previous reports demonstrating that the rate of positive peritoneal cytology was significantly higher in ADC than in SCC. Table 5 shows previous reports of the relationship between positive peritoneal cytology and prognosis. Most previous studies reported that patients with positive cytology had clearly lower survival rates than those with negative cytology. However, positive cytology overlapped with other risk factors. No consensus was reached on whether positive cytology is an independent risk factor. Kashimura et al. [7] reported that peritoneal cytology, pelvic lymph nodes, and para-aortic lymph nodes are independent prognostic factors in stages I-IV, irrespective of the histological type. Kasamatsu et al. [2] found that peritoneal cytology, lymph node metastasis, histological grade, and ovarian metastasis were independent prognostic factors in stage I and II ADC. In the present study, peritoneal cytology and histological type were found to be independent prognostic factors. However, Takeshima et al. [1] found that, although muscle layer invasion, lymph node metastases, and cardinal ligament invasion were prognostic factors for stages I and II ADC, peritoneal cytology was not. Morris et al. [10] evaluated stage IB disease and concluded that the prognostic significance of peritoneal cytology was overshadowed by other risk factors. A power analysis of the prognostic value of peritoneal cytology was performed in the present study; the power was low owing to the small sample size, which is a limitation of this study. The log-rank test is similarly limited by the sample size. In other previous reports, similarly small sample sizes were used, and different results were obtained. The site of recurrence in patients with positive peritoneal cytology was inconsistent for the SCC cases in this study, which is difficult to interpret owing to the small number of cases.
Kasamatsu et al. [2] reported that in cases of ADC, 62.5% of recurrences involved peritoneal dissemination, which is significantly higher than that observed in patients with negative peritoneal cytology. In the present study, peritoneal recurrence of ADC among patients with positive peritoneal cytology occurred in 60% of cases; this percentage tended to be higher than that of patients with negative cytology. Takeshima et al. [1] found that peritoneal recurrence occurred in only 28.6% of patients, even for ADC, with no significant difference compared to patients with negative peritoneal cytology. Although there are 2 conceivable pathways for the migration of cancer cells to the abdominal cavity, either via the fallopian tubes or by hematogenous or lymphatic spread, the detailed mechanism for this migration remains unclear. All patients with positive peritoneal cytology in the present study also had vascular invasion and deep interstitial infiltration. The frequency of lymph node metastases and parametrial invasion was also higher among patients with positive peritoneal cytology. Cervical cancer may therefore possess higher metastatic and invasive potential in patients with positive peritoneal cytology. We do not currently take peritoneal cytology into account when deciding postoperative adjuvant treatment policies. We perform postoperative CCRT or radiotherapy according to the risk factors. However, all 5 patients with ADC developed recurrence from peritoneal dissemination or para-aortic lymph node metastasis, and they died thereafter. These recurrence sites are not "local." If we conclude that cancer cells appear in the abdominal cavity by hematogenous or lymphatic spread, then positive cytology would indicate systemic disease. Rather than administering adjuvant therapy with the aim of local control, systemic chemotherapy should be the treatment of choice for patients with positive peritoneal cytology, particularly for those with ADC. In the present study, it should be noted that peritoneal cytology is of value with respect to the prognosis of uterine cervical cancer. This study did not clearly show the significance of peritoneal cytology in cases of SCC. However, patients with ADC frequently have positive peritoneal cytology, and because a positive result indicates a high recurrence rate, it may constitute an important risk factor. Therefore, we suggest that positive peritoneal cytology is also a factor that should be taken into account when making decisions concerning postoperative adjuvant therapy.
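For illustration, the survival analyses described in the methods (Kaplan-Meier estimation, log-rank comparison, and a multivariate Cox proportional hazards model) can be reproduced in outline with standard tooling. The following is a minimal sketch using the Python lifelines library; the dataframe, its column names, and the toy values are hypothetical, not the study data.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical per-patient table: follow-up in months, recurrence flag,
# peritoneal cytology status, and a candidate histology covariate.
df = pd.DataFrame({
    "months":   [51, 21, 10, 60, 34, 48, 8, 72],
    "recurred": [0,  1,  1,  0,  0,  0,  1, 1],
    "cytology": [0,  1,  1,  0,  1,  0,  1, 0],  # 1 = positive cytology
    "adc":      [0,  0,  1,  1,  1,  1,  1, 0],  # 1 = ADC/adenosquamous
})

pos, neg = df[df.cytology == 1], df[df.cytology == 0]

# Kaplan-Meier estimate of recurrence-free survival per cytology group
kmf = KaplanMeierFitter()
kmf.fit(pos["months"], event_observed=pos["recurred"], label="cytology positive")

# Log-rank comparison of the two survival curves
res = logrank_test(pos["months"], neg["months"],
                   event_observed_A=pos["recurred"],
                   event_observed_B=neg["recurred"])
print(res.p_value)

# Multivariate Cox proportional hazards model over the remaining columns
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="recurred")
cph.print_summary()
```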
THE NUMBER OF PICKERS AND STOCK-KEEPING UNIT ARRANGEMENT ON A UNI-DIRECTIONAL PICKING LINE

The order picking process is often the single largest expense in a distribution centre (DC). The DC considered in this paper uses a picking line configuration to perform order picking. The number of pickers in a picking line, and the initial arrangement of stock-keeping units (SKUs), are two important factors that affect the total completion time of the picking lines. In this paper, the picking line configuration is simulated with an agent-based approach to describe the behaviour of an individual picker. The simulation is then used to analyse the effect of the number of pickers and the SKU arrangement. Verification and validation of this model shows that the model represents the real-world picking line to a satisfactory degree. Marginal analysis (MA) was chosen to determine a 'good' number of pickers by means of the simulation model. A look-up table is presented to provide decision support for the choice of a 'good' number of pickers to improve completion times of the picking line, for the properties of a specific picking line. The initial SKU arrangement on a picking line is shown to be a factor that can affect the level of picker congestion and the total completion time. The greedy ranking and partitioning (GRP) and organ pipe arrangement (OPA) techniques from the literature, as well as the historical SKU arrangements used by the retailer under consideration, were compared with the proposed classroom discipline heuristic (CDH) for SKU arrangement. It was found that the CDH provides a more even spread of the SKUs that are picked most frequently, thus decreasing congestion and total completion time.

INTRODUCTION

Distribution centres (DCs) play a vital role in the supply chain of most retail companies, linking the manufacturers, suppliers, and consumers. The costs associated with DCs account for a significant part of the total supply chain cost of most companies, and therefore it is important to optimise the DC operations to minimise these costs. Order picking is usually the most labour-intensive and cost-inducing process in a DC: it can account for as much as 55 per cent of the total DC operating cost [4]. There has therefore been much research into different order picking policies and systems to optimise this process. DCs have different requirements due to the facility setup, product range, and order properties. These different requirements have led to a number of variations in order picking policies and systems being implemented in different DCs. These policies and systems determine the manner and sequence in which stock-keeping units (SKUs) are picked to complete an order. This paper considers the picking operations of a DC owned by PEP Stores Ltd, the biggest single-brand retailer in South Africa [13].

The three main order picking systems implemented around the world are the 'picker-to-parts' system, the 'parts-to-picker' system, and the 'put' system [4]. There are also automated picking systems, yet these are less frequently used, being reserved for more specialised types of products. The majority of order picking systems employ people. PEP uses the most commonly implemented system, the 'picker-to-parts' system. In this system the picker(s) move along aisles to bays where SKUs are situated. PEP, however, does not pick directly from the storage locations, but takes a set of SKUs to a designated picking area (called a picking line) to perform the order picking on that set of SKUs.
The picking lines in PEP's DC are designed with 56 locations situated around a conveyor belt, as shown in Figure 1. The picking line is built prior to order picking, according to a set of distributions received from head office. A distribution contains a group of SKUs and the quantities that each branch requires of each SKU in that group. A set of distributions is clustered together; all the SKUs in that cluster are put together on a picking line, and are then picked as a wave. Once the SKUs for all the branches have been picked, the leftover stock is removed from the line and a new set of distributions is assigned to the line; this results in picking waves. During a wave, each location is allocated a maximum of one type of SKU. Picking line managers make the decisions about the placement of SKUs on the line. They usually attempt to place the SKUs in such a way that the SKUs that are picked most often (i.e. go to the most branches) during the picking process are spread out over the different lines. This spreading of the most-picked SKUs is done to avoid an unbalanced workload amongst pickers. Moreover, pickers hinder each other if the most-picked SKUs are placed close together on a picking line; this leads to picker congestion. Congestion refers to any picker standing still or moving slower because they cannot easily pass another picker who walks at a slower speed or who is stationary in front of them because they are busy picking themselves. Congestion may potentially increase the total completion time of a picking line. The first objective of this paper is thus to investigate the effect of the SKU arrangement on picker congestion, and to find SKU arrangements that may lead to less congestion. Since it has been shown that the SKU arrangement does not significantly affect the number of cycles walked by pickers to pick all the orders, this study uses the SKU arrangement to decrease picker congestion and ultimately the total picking time.

The pickers assigned to a picking line walk around the conveyor belt, picking different SKUs from locations to complete specific orders. The pickers are each equipped with an audio headpiece that uses a voice recognition system (VRS). This software leads them to pick the correct quantity of a specific SKU for a particular order they have been assigned. Pickers only pick one order at a time, and do this in a clockwise direction at all times. Once an order (for a branch) is completed, the picker places that order on to the conveyor belt to be transported to a station for quality control checks, and then to dispatch for shipping to its branch. A picker who has completed an order receives a new order to pick. This process is repeated until all the orders have been completed. The picking line managers decide on the number of pickers assigned to a specific picking line during a wave of picking. In practice, this number is determined based on the picking line manager's common sense and experience. Approximately eight pickers (depending on the size of a line) are normally assigned to a picking line.
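Because pickers always move clockwise, the next location to visit for the current order is simply the nearest remaining pick location in the clockwise direction. A minimal sketch of this rule (the 56-location layout is from the text; the function and example values are hypothetical):

```python
N_LOCATIONS = 56  # bays situated around the conveyor belt

def next_clockwise_location(current: int, remaining: set) -> int:
    """Return the first bay, moving clockwise from 'current', that still
    holds a SKU required by the picker's current order."""
    return min(remaining, key=lambda loc: (loc - current) % N_LOCATIONS)

# Example: a picker at bay 50 with picks left at bays 3, 12 and 49.
# Bay 49 lies almost a full lap away in the clockwise direction.
print(next_clockwise_location(50, {3, 12, 49}))  # -> 3
```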
A carousel is a circular setup of shelving that contains products and brings them to a picker, instead of a picker walking to the product. The carousel can (usually) rotate in both directions to present the products to a picker [10]. The picking line setup described above may be viewed as a type of uni-directional carousel system, where the carousel is stationary but the pickers rotate. There are, however, two major differences between the picking lines considered here and the carousels described in the literature: (a) the picking lines considered here result in a uni-directional carousel, as opposed to the bi-directional case described most often in the literature; and (b) the carousels in the literature are optimised for an expected (stochastic) set of orders, while these picking lines are optimised for a known (deterministic) set of orders. The existing literature on carousels is thus not applicable to the picking line configuration considered in this paper.

The main aim of this paper is to simulate accurately the picking process at PEP's Durban DC, and to provide, by means of the simulation, an analysis of the impact of the placement of SKUs and the number of pickers on the congestion in a picking line. Furthermore, the analysis should provide a 'good' number of pickers and SKU arrangement to improve the total completion times of the picking lines.

The remainder of the paper is structured as follows: in Section 2 the simulation model, the input data, and the validation and verification of the simulation model are discussed. The results of the simulation model are presented in Section 3, in terms of the SKU arrangement and the number of pickers in a picking line. The paper is concluded in Section 4.

THE MODELLING APPROACH

Simulations are used widely to optimise DC operations, and have been proven to provide practical and implementable results [7]. The interactions between pickers on the picking lines in the DC considered here are too complex to be modelled analytically, and thus a simulation model was implemented. Agent-based modelling and simulation (ABMS) is a relatively new tool that is very useful in modelling the dynamic nature of a complex system with a collection of autonomous decision-making 'agents' [11]. These agents are able to dynamically assess their current situation and act accordingly. These actions are governed by behavioural traits. ABMS allows agents to make decisions and act independently of each other, yet also allows agents to interact and adapt to certain environmental pressures.

ABMS can be computationally expensive because it does not look at the aggregate level of a system, but instead deals with the system's constituent entities [2]. In dealing with these, ABMS provides a method in which many systems can be modelled in their most natural states. For example, it is more natural to model a traffic intersection through the behaviour of individual cars than to describe the congestion through a range of complex equations, because the congestion is a direct result of the behavioural traits of individual cars that want to use the intersection at the same time.

One of the benefits of ABMS is the flexible nature of the model. It provides a framework in which the complexity of the agents' behavioural characteristics and the rules of inter-agent relationships can be changed and tweaked in a dynamic manner [2]. This fine-tuning of a model typically occurs in a verification and validation process to simulate a real-life system more accurately.
Simulation model

Macal and North [11] suggest that any ABMS model has three elements. It is crucial that a developer identifies these elements correctly in the process of building a model. The first element is a set of agents with their characteristics and behavioural traits. The second element is a set of agent relationships that define the manner in which agents interact with each other and the environment. The third element is the environment within which the agents exercise their behaviour and inter-agent relationships.

In the modelling process presented here, the pickers are seen as the agents. They are assigned a set of characteristics and behavioural traits that define their actions and their interactions with other agents. They are situated in an environment (the picking line) that determines the boundaries within which they operate.

Agent characteristics

The three major activities that any picker performs are walking, picking, and packing. The 'walking' activity involves walking from one specific location to another, where the next SKU (in a clockwise direction) required by the current order is situated. 'Picking' is the activity of selecting the correct number of that specific SKU from the location to be packed in a carton. The 'packing' activity involves packing the picked SKUs for an order into the carton in an orderly manner (pickers are rewarded for packing tightly), placing full boxes or completed orders on to the conveyor belt, and preparing new boxes for the next order.

Walking occurs at different velocities that depend on the specific picker and the picker's direct environment. Similarly, the picking and packing occur at picker-specific times. The picking and packing times for any specific picker vary from pick to pick and pack to pack. Both of these times are expected to be exponentially distributed. The walking velocity, the picking and packing times, and the current location in the picking line are defined as the agent's characteristics. At any point in time, an order to be completed and a next location to be visited are assigned to a picker; thus these are also viewed as characteristics.

The visual analysis performed in the DC revealed that the current velocity of a picker is largely dependent on the current velocity and activities of the pickers who are working close to one another. From time studies of video footage taken in the DC, it was concluded that a picker can be in one of four velocity states: default velocity, following velocity, passing velocity, and congested velocity.

The 'default' velocity state refers to when a chosen picker is not within a critical distance of any other picker, and thus can walk at their default velocity. This default velocity is assigned to the picker during the initialisation process of the simulation. If the chosen picker is within a critical distance of another picker, the behaviour of the picker in front of them is observed and a decision is made accordingly; the picker in front could be walking at their default velocity, picking, packing, or in a congested state.
If the picker in front of the current picker is walking, then the current picker will assume a 'following' velocity state, taking on the default velocity of the picker in front of them, and following that picker until there is a change in the status quo. If the picker in front is either picking or packing, and thus stationary, there are two possibilities to consider: the picker in front could be picking or packing where the current picker intends to pick their next SKU, or elsewhere. If the picker in front is not picking or packing where the current picker intends to pick next, the chosen picker assumes the 'passing' velocity state and passes the picker in front at a reduced velocity: this is set as a percentage of the chosen picker's default velocity. If the picker in front is indeed picking where the current picker intends to pick next, then the current picker assumes the 'congested' velocity state, waiting for the picker in front to move on: this occurs because pickers cannot pick from the same location simultaneously. Finally, if the picker in front is in a 'congested' velocity state, the chosen picker also assumes the 'congested' velocity state, because there is no possibility of passing. This decision process is represented by the flow diagram in Figure 2.

Environment

The environment emulates the layout of the picking line in PEP's Durban DC. The environment is designed and scaled precisely, and dictates the allowable whereabouts of the picker, along with the uni-directional nature of the picking line. From the scaled distances in the environment, the distance between pickers is evaluated, and behavioural decisions are made accordingly.

Implementation

The simulation model was built and implemented in AnyLogic version 6.5.1, which was developed by XJ Technologies [17] and is based on the Java [16] programming language. The distance between agents is monitored at regular time intervals, and thus behavioural decisions are also taken at regular time intervals, governed by a set of nested 'if' statements. The inputs for the simulation include a list of locations and the respective SKUs, a list of store orders with all the SKUs required by each, and the number of pickers to pick these orders. In the verification and validation process of the model, the distributions of the specific picker velocities, the pick and pack times, and the system entry times are required as input in order to compare the simulation output with the real-life situation.

The data capturing process

The analysis of video recordings was used to determine the proportion of time spent picking, packing, and walking by each picker. Using these proportions, it is possible from the picker-specific time stamp data supplied by PEP to calculate the average picking and packing times, and the walking velocity, for a certain picker, not taking congestion into consideration. In the calculation of the average picking and packing times and the average walking velocity for a picker, only cycles without excessive time delays were considered. An excessive time delay would be a gap of more than ten minutes between two pick time completions, which could be the result of a toilet or lunch break, or a manager calling a picker out of a line. Over these cycles, it can be calculated how many bays were passed, how many picks were made, how many times the picker packed, and the total time spent picking these cycles.
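To make the behavioural rules concrete, the velocity-state decision of Figure 2 can be sketched as follows. This is a minimal sketch under stated assumptions: the class and field names, and the passing factor of 0.5, are hypothetical, not values from the study.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class State(Enum):
    WALKING = auto()
    PICKING = auto()
    PACKING = auto()
    CONGESTED = auto()

@dataclass
class Picker:
    state: State
    default_velocity: float    # m/s, drawn at initialisation of the simulation
    location: int              # bay the picker is currently at
    next_pick_location: int    # next bay this picker must pick from

PASS_FACTOR = 0.5  # assumed fraction of default velocity while passing

def current_velocity(me: Picker, ahead: Optional[Picker],
                     within_critical_distance: bool) -> float:
    """Velocity of 'me' given the picker directly ahead (clockwise)."""
    if ahead is None or not within_critical_distance:
        return me.default_velocity               # 'default' state
    if ahead.state is State.WALKING:
        return ahead.default_velocity            # 'following' state
    if ahead.state is State.CONGESTED:
        return 0.0                               # 'congested': no way past
    # the picker ahead is stationary, picking or packing
    if ahead.location == me.next_pick_location:
        return 0.0                               # 'congested': wait for the bay
    return PASS_FACTOR * me.default_velocity     # 'passing' state
```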
Historical time stamp data and video footage of order picking at the DC were used to analyse the system and to perform the verification and validation. The Kolmogorov-Smirnov 'goodness of fit' test was used to determine whether the picking and packing times measured from the video footage are in fact exponentially distributed. The hypothesis that the picking and packing times are exponentially distributed is not rejected at a significance level of α = 0.05.

Verification and validation

A significant and critical process in any simulation is the verification and validation of the model [14]. Without thorough verification and validation, there can be no confidence in the simulation and the respective results. Verification is the process of testing whether the real-world system has been transformed into an accurate computer model, and validation is the substantiation that the model has sufficient accuracy for the purpose at hand [15]. According to Robinson [14], there are four main processes involved with the verification and validation of a simulation model: conceptual model verification and validation, data verification and validation, white-box verification and validation, and black-box verification and validation.

Conceptual model verification and validation is concerned with whether or not the model contains all the relevant detail to meet the proposed objectives: is the detail involved in the ABMS sufficient to analyse the effect of the number of pickers and the SKU arrangement on the picking line's congestion? It was observed (in real life and in the simulation model) that the SKU arrangement and the number of pickers are both large influencers of the amount of congestion present in picking lines.

During data verification and validation, the model builder determines whether or not the data required to run the model is accurate and sufficient. The data input into the AnyLogic program was obtained from the historical data automatically captured by the warehouse management system and/or the historical data from the video footage. These two sets of data were then compared by inspection and by Kolmogorov-Smirnov 'goodness of fit' tests (at a significance level of α = 0.05) for properties such as the pickers' walking times, and were found to yield approximately the same distributions for all input data; thus it was concluded that the input data is accurate.

The white-box verification and validation process determines whether or not the constituent entities accurately model their real-world counterparts. In performing the white-box verification and validation, a few areas were considered:

• Does a single picker walk at the correct velocity and pick and pack at the correct times?
• Does a single picker pick all the orders in the correct sequence?
• Do pickers stay within the boundaries of the environment and walk in the correct direction?
• Do pickers exhibit the correct inter-agent behaviour?

The agent behaviour was also observed in the AnyLogic run-time visualisation and shown to adhere to all specified rules. It was found that the entities accurately emulated the behaviour displayed by pickers in real life.
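For illustration, the exponential 'goodness of fit' check described above can be run with scipy; a minimal sketch in which the data array is a hypothetical stand-in for the measured pick times. Note that estimating the exponential scale from the same sample makes the Kolmogorov-Smirnov p-value only approximate (a Lilliefors-type correction would be needed for an exact test).

```python
import numpy as np
from scipy import stats

# Hypothetical picker-specific pick times measured from video footage [s]
pick_times = np.array([3.1, 4.8, 2.2, 7.5, 3.9, 5.6, 2.8, 9.4, 4.1, 6.3])

# H0: the times are exponentially distributed; the scale (mean) is
# estimated from the sample itself, so the p-value is approximate.
stat, p = stats.kstest(pick_times, 'expon', args=(0, pick_times.mean()))
print(f"KS statistic = {stat:.3f}, p = {p:.3f}")  # do not reject H0 if p > 0.05
```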
Black-box verification and validation determines whether or not the simulation models the real world sufficiently at an aggregate level. This is the most difficult but most important part of the verification and validation process. Once the correctness of the model has been ascertained and observed, a comparison of the model's output data with the historical data must be performed. In the black-box verification and validation, a historical dataset was considered. This includes the bay numbers, the corresponding SKU numbers, and the list of orders. It also includes the list of pickers and the relevant picker-specific data.

The times used to calculate the picking and packing times and walking velocity include congestion, and thus need to be adjusted to take congestion into account. Average congestion was calculated over several runs and then removed to calculate the picker-specific picking and packing times and walking velocities. What must also be taken into consideration in this verification and validation are the outliers. Outlying times arise for various reasons, such as bathroom or lunch breaks. A few elements that were significantly different (by two standard deviations) were removed from the dataset as outliers for verification and validation purposes, as these were not considered to be normal picker behaviour.

Using the picker-specific data and taking congestion and outliers into account, the black-box verification and validation could be performed. There are two measures of the model's accuracy: the first is a specific picker's total completion time over all the respective orders, and the second is the picker's specific completion times for every individual store order picked.

Simulation runs were performed, and the differences between the actual times and the simulated times for order completion and total completion were recorded and calculated. The Kolmogorov-Smirnov 'goodness of fit' test may be used to test whether the differences are significantly different from a normal distribution with mean zero. The hypothesis that the differences are normally distributed, with a mean of zero, was not rejected at a significance level of α = 0.05; thus it is concluded from the verification and validation that the simulation does indeed have an acceptable level of accuracy that is sufficient for modelling the real-world picking lines considered here.

Number of replications

The number of simulation replications influences the accuracy of the solutions acquired. Burghout [3] suggests the following formula to determine the required number of replications; it is derived from a statistical (1−α)·100 per cent confidence-level t-test on the simulated mean. The number of simulation runs needed is

n(ε) = ( t(n−1, 1−α/2) · S(n) / ( ε · X̄(n) ) )²,    (2)

where n(ε) is the required number of replications, X̄(n) is the estimate of the real completion time over n simulation runs, S(n) is the standard deviation over the n simulations, ε is the allowed percentage error of the simulated mean, and t(n−1, 1−α/2) is the two-tailed t-distribution critical value for n − 1 degrees of freedom at a significance level of α.
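As a quick numerical check, formula (2) can be evaluated with the values reported in the next paragraph (a pilot of twenty replications with a standard deviation of 377 s and a simulated mean of 21,241 s, an allowed error of one minute per hour, i.e. ε = 1/60, and a t critical value of 2.093). A minimal sketch:

```python
# Values reported in the study (pilot of n = 20 replications)
t_crit = 2.093          # two-tailed t critical value, 19 df, alpha = 0.05
s = 377.0               # standard deviation of total completion time [s]
x_bar = 21_241.0        # simulated mean total completion time [s]
eps = 1.0 / 60.0        # allowed relative error: one minute per hour

n_required = (t_crit * s / (eps * x_bar)) ** 2
print(n_required)       # ~4.97, i.e. about five replications suffice
```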
It is not possible to use the historical total completion times when using formula (2) in this study. This is because the pickers who start a line often do not complete that line: pickers are added to and removed from lines by line managers as they see fit. Therefore the completion times for specific pickers were used for this calculation. The standard deviation of the total completion times over twenty simulation replications is 377 seconds, which is about 1.8 per cent of the simulated mean completion time of 21,241 seconds. The actual historical completion time was 21,330 seconds. If we accept an error of a minute per hour, this renders ε = 1/60 ≈ 0.017. The t-distribution critical value is 2.093. Substituting these values into formula (2) gives a required number of replications of approximately 4.9. Although roughly five simulation replications are adequate to provide a solution that is sufficiently close to the actual completion time of a picker, ten replications were used throughout this study. This amounts to a maximum error of approximately 42 seconds per hour, or 1.17 per cent.

RESULTS

Sixteen real-life datasets (of picking lines) are considered: they are the same set of picking lines that were used by Matthews and Visagie [12]. The picking lines were divided into eight large lines, four medium lines, and four small lines according to the number of orders associated with the picking lines. These real-life datasets were selected to compare the simulation results with the historical values. Due to a steep increase in run times, only up to sixteen pickers are considered for a 'good' number of pickers; this is because management has confirmed that it is unrealistic for PEP to assign more than double the number of pickers currently being used.

SKU arrangement

The picking lines considered in this paper can be viewed as a type of carousel picking system. The literature on the optimal SKU arrangement of carousel systems mainly focuses on placing the SKUs in a way that minimises the total distance walked by pickers, or, equivalently, the total time to complete all orders [10]. The organ-pipe arrangement (OPA) and the greedy ranking and partitioning (GRP) are the methods most commonly used to solve the SKU location problem (SLP). Both the OPA and GRP methods classify and arrange SKUs according to their pick frequency. The pick frequency of a SKU refers to the number of orders that contain that SKU; simply put, the number of times a SKU is picked while completing orders.

In the case considered here, the pick frequency refers to the number of times a SKU is picked to complete all of the orders during a specific wave of picking. When using the OPA to build a carousel, one would first place the most frequently-picked SKU on the carousel, the next most frequently-picked SKU adjacent to the first, the third most frequently-picked SKU on the other side of the first, and continue in this fashion [10]. The OPA method was introduced by Lim et al. [9], and in the case of bi-directional carousels with an expected set of stochastic orders, Litvak [10] has proved that the OPA is an optimal SKU arrangement under these conditions. This optimality refers to the minimisation of the distance travelled by the carousel. The OPA is probably a good method to use when a typical order on a carousel system is small with respect to the total number of orders [1]. The OPA has the advantage of being easy to solve and set up. An example of a typical OPA SKU location solution is shown in Figure 3.
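For illustration, a minimal sketch of the OPA placement rule just described, alongside the simple frequency ranking from which the GRP (described next) starts; the frequencies reuse the example set used later for Figure 4, and the partitioning step of the GRP is omitted here.

```python
from collections import deque

def organ_pipe(pick_freqs):
    """OPA: busiest SKU in the middle, the rest alternating left and right."""
    line = deque()
    for i, f in enumerate(sorted(pick_freqs, reverse=True)):
        if i % 2 == 0:
            line.append(f)
        else:
            line.appendleft(f)
    return list(line)

def greedy_rank(pick_freqs):
    """GRP-style ranking: SKUs in decreasing order of pick frequency
    (the partitioning over multiple lines is omitted in this sketch)."""
    return sorted(pick_freqs, reverse=True)

print(organ_pipe([15, 13, 12, 9, 7, 6, 3]))   # [6, 9, 13, 15, 12, 7, 3]
print(greedy_rank([15, 13, 12, 9, 7, 6, 3]))  # [15, 13, 12, 9, 7, 6, 3]
```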
The GRP SKU arrangement method is very similar to the OPA.The GRP, which was introduced by Lim et al. [9], also classifies and arranges SKUs according to their pick frequency.The GRP method differs from the OPA in that it orders the SKUs in decreasing order of pick frequency.The GRP has also been shown to provide optimal SKU arrangements in certain conditions, with respect to the distance travelled by a carousel.An example of a typical GRP SKU location solution is shown in Figure 3. Figure 3: An example of typical OPA and GRP SKU location arrangements The literature focuses on solving the SLP (for a carousel system) to minimise the distance rotated by the carousel.This is equivalent to the distance walked by the pickers in the picking line system considered here.The distance walked (or rotated) is, under the right conditions, optimal for normal bi-directional carousel systems, but unfortunately this optimality does not hold in the case of a unidirectional carousel (or picking line as considered here).The methods mentioned above led to minimal or no savings in distance travelled, relative to a random arrangement on the type of picking lines considered here. The focus is thus shifted to the congestion caused by the high-pick-frequency SKUs, and to the potential time that could be saved due to a reduction in congestion.In an attempt to spread out the SKUs with a high pick frequency as evenly as possible, a novel arrangement called the classroom discipline heuristic (CDH) is introduced here. Classroom discipline heuristic The idea behind the CDH is to spread the SKUs out as evenly as possible (in terms of pick frequency) in a heuristic manner.The CDH mimics the dynamics of a classroom, where there are usually students with varying levels of discipline.If individual students with poor discipline are placed together in the classroom, the combined poor discipline will be increased as those students provoke each other; whereas if a student with poor discipline is adjacent to a student with good discipline, the combined discipline will be better.A teacher who has carte blanche over where the children are placed generally strives to create an even spread of less disciplined students among the better disciplined students.This principle may be adapted to the SKU arrangement on a picking line.The CDH regards the SKUs as students and the pick frequency of SKUs as the level of discipline. The method is applied as follows: first, the SKU with the highest pick frequency is placed in the middle of the picking line.Second, the SKU with the second-highest pick frequency is placed in the middle of the left half of the picking line, and the SKU with the third-highest pick frequency is placed in the middle of the right half of the picking line.The following four SKUs (in terms of pick frequency) are then placed in the middle of the four openings between the already-placed SKUs, from right to left.The unplaced SKUs are repeatedly placed in the middle of the open spaces, from largest to smallest pick frequencies, moving alternately from the left to the right and then the right to the left.An example of the CDH applied to the set of SKUs with the following picking frequencies {15, 13, 12, 9, 7, 6, 3} can be seen in Figure 4. 
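The placement rule described above can be sketched in a few lines of Python. The sketch below is our own reading of the CDH description (place the next most-picked SKU in the middle of each remaining open gap, level by level, alternating the sweep direction between levels); the exact tie-breaking and direction conventions of the paper may differ, and the SKU names are illustrative.

```python
def classroom_discipline_heuristic(pick_freq, n_bays=None):
    """Sketch of the CDH: repeatedly place the next most-picked SKU in the
    middle of each remaining open gap, level by level, alternating the sweep
    direction (left-to-right, then right-to-left) between levels."""
    ranked = sorted(pick_freq, key=pick_freq.get, reverse=True)
    n = n_bays or len(ranked)
    line = [None] * n
    gaps = [(0, n - 1)]            # open (inclusive) intervals of empty bays
    left_to_right = False          # level 1 is a single middle placement
    idx = 0
    while idx < len(ranked) and gaps:
        level = sorted(gaps, key=lambda g: g[0], reverse=not left_to_right)
        gaps = []
        for lo, hi in level:
            if idx >= len(ranked):
                break
            mid = (lo + hi) // 2
            line[mid] = ranked[idx]
            idx += 1
            if lo < mid:
                gaps.append((lo, mid - 1))
            if mid < hi:
                gaps.append((mid + 1, hi))
        left_to_right = not left_to_right
    return line

freqs = {"S1": 15, "S2": 13, "S3": 12, "S4": 9, "S5": 7, "S6": 6, "S7": 3}
print(classroom_discipline_heuristic(freqs))
# ['S7', 'S2', 'S6', 'S1', 'S5', 'S3', 'S4'] -> frequencies 3, 13, 6, 15, 7, 12, 9
```

With the example frequencies {15, 13, 12, 9, 7, 6, 3}, the highest-frequency SKU lands in the middle bay, the second and third highest in the middles of the left and right halves, and the remaining four in the four openings from right to left, so that high- and low-frequency SKUs alternate along the line.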
SKU arrangement results The comparison between the OPA, GRP, historical, and CDH SKU arrangements is performed considering the total completion time and the fraction of time any picker stands still due to congestion.In general, the total completion times achieved using the CDH were shown to be less than the OPA, GRP, and historical SKU arrangements.The mean percentage time savings gained over all sixteen datasets by the CDH compared with the historical, OPA, and GRP SKU arrangements was 0.15 per cent, 1.81 per cent, and 0.5 per cent respectively (if eight pickers are used).This result is demonstrated in Figure 5 for the Large Dataset 1.For this dataset, CDH performs better by 43 minutes than the GRP/historical when 13 pickers are used: this results in a saving of 5.27 per cent.If only seven pickers are used, a saving of 22 minutes is achieved, which translates to a 1.98 per cent saving.As expected, the percentage saved increases with an increase in pickers, as the total congestion increases with an increase in pickers.Similar savings were achieved with the other large datasets.It is interesting to see that the GRP actually performs better at eight pickers for the Large Dataset 1, even though there is a more congestion.The reason for this is that the picking process involved a smaller number of cycles walked, meaning that the GRP SKU arrangement leads to fewer cycles walked in this case.The general savings when using the CDH were found to be less pronounced when working with smaller datasets: once again this is to be expected, as smaller lines have less congestion to resolve. The fraction of time that pickers are congested, plotted over the number of pickers in the picking line for Large Dataset 1, is graphed in Figure 6.This figure shows that there is a split between the fraction congestion differentiating the methods.The CDH SKU arrangement performs the best: it is about 5 per cent better than the GRP SKU arrangement when eight pickers are picking the line, and almost 12.5 per cent better than the GRP SKU arrangement when 16 pickers are used.The CDH SKU arrangement is also about 3 per cent better than the historical SKU arrangement at eight pickers, and almost 8 per cent better than the historical SKU arrangement at 16 pickers.On average (over all sixteen datasets for eight pickers), the total savings in congestion time using CDH relative to the historical, GRP, and OPA are 2.35 per cent, 6.73 per cent, and 10.73 per cent respectively. During the simulation runs, the amount of congestion at each bay was also captured.When considering the congestion, it might be expected that there would be a strong correlation between the percentage picks and the percentage congestion at a specific location.Moreover, it was found that there is a strong correlation between the location with a high pick frequency and the location directly in front of it.This is due to pickers waiting for other pickers to finish their picking at the next location.It is thus desirable to have locations with lower pick frequencies in front of locations with higher pick frequencies.This is automatically achieved by means of the CDH.This effect is illustrated in the percentage congestion per location (for the first 10 locations), shown in Table 1. 
The number of pickers in a picking line In the more commonly-used bi-directional carousal systems, only one picker is used per carousel, and thus the number of pickers is never investigated.No literature could be found dealing with the number of pickers to use when picking orders.Furthermore, the existing literature focuses on a situation where there are no picking lines; instead the pickers traverse aisles in the whole DC [4].These aisles can be split into zones in which the pickers solely pick, increasing the efficiency in the DC.Gray et al. [8] maximise the picker use during any day where a specific number of orders is to be picked.The use of pickers is calculated by the expected number of orders for any day, the estimated time for a picker to complete a cycle through a zone, the length of a picking day, and the number of pickers assigned to a zone. The approach that Gray et al. [8] suggest is not applicable in providing practical analysis for the picking line under consideration here.This is because the DCs in the literature do not have a deterministic number of orders to complete in any single day, and congestion is not taken into consideration (as zone picking largely circumvents congestion).Furthermore, the picker's picking and walking times are modelled too generally.The DC under consideration would want to complete any picking line as fast as possible to increase the DC flow, and thus to decrease the lead time to stores. Two factors are considered while analysing the number of pickers in a picking line by means of the simulation model: minimising the total picking line completion time, while keeping the congestion at a controllable level. Three techniques are considered to determine a 'good' number of pickers, with each approaching the problem in a different manner.The first, an absolute minimum approach, minimises only the completion time.The second technique, a critical limit approach, considers only the congestion.The third, a marginal analysis approach, considers the completion time and the picking line congestion simultaneously.The factors that affect a 'good' number of pickers for a specific picking line are the density of the picking line, the bay locations of the SKUs, and the SKU types. In this study, the density of a picking line is defined in terms of the average number of bays between every pick, if one picker were to go through the picking line; thus pickers stop and pick more frequently as the density of picks increase.The frequency of picks affects the total amount of congestion and thus also affects what is considered to be the 'good' number of pickers.The SKU locations could be arranged so that SKUs that are picked the most frequently are adjacent to each other.This adjacency would lead to increased congestion around those SKUs and thus to a potential increase in total congestion affecting a 'good' number of pickers.The historical SKU locations were used to determine the 'good' number of pickers, as this allowed comparison with actual (and not simulated) results. 
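The paper does not give an explicit formula for the density measure defined above. As a rough illustration of the idea only, the sketch below assumes that density is the mean number of bays walked between consecutive picks by a single picker traversing the one-way line; the pick positions used are hypothetical.

```python
def picking_line_density(pick_bays, n_bays):
    """One plausible reading of 'density': the average number of bays walked
    between consecutive picks by a single picker on the unidirectional line.
    pick_bays: bay index (0..n_bays-1) of each pick, in the order encountered."""
    gaps = []
    for prev, cur in zip(pick_bays, pick_bays[1:]):
        gaps.append((cur - prev) % n_bays)   # wrap around the one-way line
    return sum(gaps) / len(gaps) if gaps else float("inf")

# Example: a 56-bay line with picks encountered at these bays during one pass
print(picking_line_density([2, 5, 6, 14, 30, 31, 50], n_bays=56))  # 8.0
```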
Total completion time To see the overall effect of the number of pickers on the total completion time, an increasing number of pickers are inserted into the picking line, iteratively, over separate runs.It is expected that the total completion time will decrease as the number of pickers increases, and that at the same time the congestion will increase.It is further expected that at some point the amount of congestion, due to the number of pickers, will increase to the point where the total completion time starts increasing again.Tests using the simulation show that this expected pattern does occur, as can be seen in Figure 7. It is interesting to note that the total completion time hits a relatively flat base at about ten pickers and stays there for quite a while, before starting to increase again at 40 to 45 pickers.This phenomenon is not completely counter-intuitive: as the number of pickers increases, there is naturally more congestion, yet there are also more pickers to complete orders.What is unexpected is that the number of pickers has to increase significantly (to around 40) before there is a noticeable increase in total completion. Critical limit on congestion The average fraction of time that a picker is in the congested state during the completion of the picking line is also measured during the simulation runs.In this approach, a user can decide on an acceptable percentage of congestion (the critical limit).The simulation data reveals that the percentage increases roughly linearly, to the point that when there are 45 pickers, there is almost 75 per cent percent congestion.An example of the percentage congestion as a function of the number of pickers is provided in Figure 8. From such a graph, a user can determine the 'good' number of pickers that will keep the percentage congestion below a predetermined critical level.For the example in Figure 8, the critical level of congestion was set at 15 per cent.This level is reached at around 8 pickers working in the line. Marginal analysis The marginal analysis (MA) method is proposed as the most realistic method, as it considers both congestion and total completion time.The absolute minimum technique, choosing the number of pickers that minimised the completion time, provided a solution with an unacceptable amount of congestion.The critical limit technique of choosing the number of pickers, once the congestion went over a certain critical limit, provided realistic results.However, it is difficult to determine an acceptable amount of congestion. MA is a process in which the additional benefits and costs caused by the inclusion of an extra unit are weighed up against each other.Using MA, one would typically add an extra unit to the system repeatedly, until the point where the additional benefit is less than the cost.At this point the number of units is the best for the system. In determining the number of pickers with MA, the additional benefit is defined as the marginal total completion time decrease that comes with the addition of one picker.Simply put, if one inserts an extra picker into the picking line, by how much does the total completion time decrease?The additional cost is seen as the increase in total congestion time that occurs with the addition of one extra picker into the picking line. 
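As described above, marginal analysis adds pickers one at a time and stops when the marginal reduction in completion time no longer exceeds the marginal increase in congestion time. A minimal sketch of that stopping rule follows; the completion-time and congestion-time curves would come from simulation runs in practice and are purely illustrative here.

```python
def good_number_of_pickers(completion_time, congestion_time, max_pickers=16):
    """Add pickers while the marginal completion-time saving of one extra
    picker exceeds the marginal congestion-time cost it introduces."""
    for k in range(1, max_pickers):
        benefit = completion_time(k) - completion_time(k + 1)   # time saved
        cost = congestion_time(k + 1) - congestion_time(k)      # congestion added
        if benefit <= cost:
            return k
    return max_pickers

# Illustrative curves: diminishing returns versus growing congestion
completion = lambda k: 60_000 / k + 2_000
congestion = lambda k: 150 * k * (k - 1)
print(good_number_of_pickers(completion, congestion))  # 6 with these toy curves
```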
An example of the MA performed on a historical data instance is shown in Figure 9.For this example, it can be seen that the additional picking time gained by an extra picker becomes less than the additional congestion time added by an extra picker between six and seven pickers; thus it can be said that a 'good' number of pickers for this line is about six or seven.The results, over all datasets, are summarised in Table 2.These results show that there is, as expected, a correlation between the density of a picking line and a 'good' number of pickers.The correlation coefficient in this case is 0.84 over all the historical datasets, which is significant. In addition, the number of pickers determined by MA is very close to the number of pickers (around 7 to 10) that the picking line managers at PEP found, from experience, to be a 'good' number of pickers for a picking line.These results enable picking line managers to predetermine what will be a 'good' number.Currently the number of pickers is determined by adding and removing pickers until picking speed and congestion seem to be in balance.The main objective of this paper was to simulate the picking process at PEP's Durban DC by means of ABMS, and then to provide analysis on two separate problems using this simulation.It was built to include the observed individual and inter-picker behaviour of actual order pickers at PEP, and to model the environment within which they pick.Data was collected from historical datasets and video footage to determine picker-specific characteristics such as walking velocity and picking and packing times, and was included in the simulation.The simulation was verified and validated visually and against historical data; the verification and validation revealed that the simulation does indeed model the real-life picking line with a satisfactory degree of accuracy. The problem related to the initial SKU arrangement while building a picking line compared methods from the literature, the historical SKU arrangement, and a novel method called the CDH.The CDH proved to provide an efficient SKU arrangement for the situation at the DC because it causes, on average, the lowest levels of congestion. The problem related to the calculation of a 'good' number of pickers in a picking line considered three different techniques.It was shown that MA provided the most realistic results.It provides a tool for PEP to predetermine a 'good' number of pickers for a picking line, based on the picking line properties before picking commences.The results in this paper were presented to PEP, and their implementation is in the planning phase. For future research, it might be beneficial to use this model to simulate different layouts. For example, it is clear from the results that smaller picking lines, with regard to the number of orders, can be picked with a greater number of pickers, as less congestion is created.It would be an interesting study to consider the financial viability of increasing the size of the warehouse, and creating more (and thus smaller) picking lines so that the average size of the picking lines is smaller.It would also be interesting to consider the physical size of the picking lines in relation to the number of bays, and determine what the best number of bays in a picking line should be. Figure 1 : Figure 1: A schematic representation of a 56-bay picking line at PEP's DC. 
Figure 2: A flow diagram defining the velocity state of a picker
Figure 5: A comparison of the OPA, GRP, CDH, and historical SKU arrangements with respect to total completion time, according to the number of pickers for Large Dataset 1
Figure 6: A comparison of the OPA, GRP, CDH, and historical SKU arrangements with respect to fraction congestion, according to the number of pickers for Large Dataset 1
Table 1: The percentage congestion per location for the first ten locations
Figure 7: A plot of the total completion time, over the number of pickers, for Large Dataset 2
Figure 8: The critical level on Large Dataset 2
Figure 9: Marginal analysis performed on Large Dataset 3, comparing the fraction decrease of average congestion for any picker against the fraction increase in total time
Table 2: A summary of the results over all historical datasets to determine a 'good' number of pickers for picking lines
2018-12-05T12:29:58.405Z
2014-09-10T00:00:00.000
{ "year": 2014, "sha1": "cac7a4f25f3daa41a6640f352e90f80653733834", "oa_license": "CCBY", "oa_url": "http://sajie.journals.ac.za/pub/article/download/886/567", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "769b161ad174675d377dd69013057cd3876817e2", "s2fieldsofstudy": [ "Engineering", "Business" ], "extfieldsofstudy": [ "Engineering" ] }
225676961
pes2o/s2orc
v3-fos-license
A Multi-Item Replenishment Problem with Carbon Cap-And-Trade under Uncertainty : Recently, as global warming has become a major issue, many companies have increased their efforts to control carbon emissions in green supply chain management (GSCM) activities. This paper deals with the multi-item replenishment problem in GSCM, from both economic and environmental perspectives. A single buyer orders multiple items from a single supplier, and simultaneously considers carbon cap-and-trade under limited storage capacity and limited budget. In this case we can apply a can-order policy, which is a well-known multi-item replenishment policy. Depending on the market characteristics, we develop two mixed-integer programming (MIP) models based on the can-order policy. The deterministic model considers a monopoly market in which a company fully knows the market information, such that both storage capacity and budget are already determined. In contrast, the fuzzy model considers a competitive or a new market, in which case both of those resources are considered as fuzzy numbers. We performed numerical experiments to validate and assess the efficiency of the developed models. The results of the experiments showed that the proposed can-order policy performed far better than the traditional can-order policy in GSCM. In addition, we verified that the fuzzy model can cope with uncertainties better than the deterministic model in terms of total expected costs. Introduction Since the 1997 Kyoto protocol, many countries and organizations have presented legislation or policies about managing carbon emissions as global warming destroys the Earth's ecosystem. Accordingly, any company can concentrate on reducing and managing its carbon emissions in a variety of areas such as supply chain management (SCM), production, contracts, inventory, and replenishment, with the result that green supply chain management (GSCM) is quickly spreading [1]. The main purpose of GSCM is to minimize supply chain costs and simultaneously reduce carbon emissions. In line with this goal, many governments have implemented carbon cap-and-trade regulation, which is well known as an effective economic-based mechanism [2]. Under such regulation, each company is allocated limited carbon emissions credits from its government, and it can buy or sell rights to emit carbon emissions with other companies in the carbon trading market [3,4]. In the European Union, the European Union Emissions Trading Scheme, which is the largest carbon trading market, has covered almost 50% of total carbon emissions [5]. Therefore, it becomes important to consider GSCM in the context of carbon cap-and-trade regulation. Beyond governments' efforts to reduce carbon emissions, some companies have tried to manage their own carbon emissions through their supply chains. For example, the retailers Asda, Tesco, Wal-Mart, and H&M require their suppliers to reduce carbon emissions during multi-item replenishment activities [6,7]. In this way, a company considers carbon emissions simultaneously with the multiitem replenishment problem under limited resources, such as storage capacity and budget [8]. However, a company could face two realistic situations based on either known or unknown market information [9]. When a company is in a monopoly market, it knows all of the relevant market information so it can easily decide on storage capacity and budgets. In contrast, when a company is in a competitive or a new market, the needed market information is difficult to grasp. 
In this case, because of weak market information, neither proper storage capacity nor budgets can be estimated in a stochastic sense. It is difficult to predefine them [10]. Therefore, this paper focuses on the multiitem replenishment problem with limited resources under two market information cases, the certain and the uncertain. Given current real-world practices, this paper considers the multi-item replenishment problem with carbon cap-and-trade under limited storage capacity and budget. This work (1) develops two mixed-integer problem (MIP) models based on a periodic can-order policy, which is a well-known multi-item replenishment policy; (2) includes carbon cap-and-trade for GSCM; and (3) covers two market information cases: certain and uncertain information. This paper makes the following contributions. First, we develop a deterministic model with carbon cap-and-trade for GSCM with certain (known) market information. In this model, both storage capacity and budget are already predefined. The deterministic model can be applied in a monopoly market case. Based on this model, we develop a fuzzy model applying fuzzy constraints. Because of uncertain market information, both storage capacity and budget are considered as fuzzy numbers. The fuzzy model can be applied in a competitive market case. Second, we suggest both MIP models based on the periodic can-order policy. Thus, each model can obtain optimal results under replenishment planning and inventory control. The structure of this paper is as follows. A literature review is presented in Section 2. Notation, assumptions, and problem definitions are introduced in Section 3. The deterministic model and the fuzzy model are developed in Sections 4 and Section 5, respectively. We present numerical experiments in Section 6. Academic, managerial, and environmental insights are presented in Section 7. Finally, conclusions are presented in Section 8. Literature Review This paper is related to three elements of the relevant literature: a multi-item replenishment problem for GSCM with carbon emissions, the can-order policy, and GSCM with fuzzy constraints. The research on multi-item replenishment for GSCM with carbon emissions has been approached in various ways. Konur [11] suggested an integrated inventory-transportation model which considers a carbon cap and the emissions characteristics of trucks during transportation. Nia, Far, and Niaki [8] considered an economic order quantity model under the green vendor-managed inventory (VMI) policy, which includes limited warehouse capacity, pallets, deliveries, and greenhouse-gas emissions. Mokhtari and Rezvan [12] applied VMI policy to solve a multi-item replenishment problem in GSCM. In their model, each retailer decides on a replenishment plan based on a limited amount of total GHG emissions. Noh and Kim [1] considered a single-setup/multiple-delivery policy for a green supply chain contract under an uncertain demand situation. They proved that the cooperative contract is useful to improve the performance of GSCM. Cui et al. [13] focused on a business-to-consumer (B2C) e-business company with distribution centers, and they utilized the strategy of multi-item joint replenishment-distribution. The theory of the can-order policy was first established by Balintfy [14]. Based on that study, Silver [15] focused on an inventory replenishment problem with Poisson demand and non-zero lead time. He established that a can-order policy performed better than an independent order policy. 
Liu and Yuan [16] developed a Markov model for a two-item inventory system with coordinated replenishment and a heuristic method for solving the problem. Kayiş et al. [17] presented a continuous can-order policy model with two items with Poisson demand under a semi-Markov decision process, and developed a simple enumeration algorithm to solve the problem. Tsai et al. [18] developed an association clustering method that gathers items with similar demand in a hierarchal way to evaluate the correlated demand for handling a large number of items. Kouki et al. [19] considered a continuous review can-order policy, developed as a Markov process with perishable items and zero lead time. According to the basic theory of the can-order policy, the inventory system should be continuously reviewed. However, because the supplier has limited replenishment opportunities, Johansen and Melchiors [20] suggested a can-order policy model with a periodic review system. To simulate their idea, they suggested a new method based on Markov decision theory to obtain a nearoptimal solution. Nagasawa et al. [21] presented a periodic can-order policy model that uses multiobjective programming to obtain the optimal can-order level. Most previous studies dealt with the continuous review system and focused on deriving the reorder level, can-order level, and order-upto level based on various algorithms. In the real world, a supplier might ship items once or twice a day, so a company might not receive items as frequently as desired [20]. Although those researchers assumed that those levels cannot be fixed to certain values, a decision maker usually sets those levels according to inventory strategy and service level, respectively. Besides, they did not consider carbon cap-and-trade and limited resources, which are storage capacity and budget in the uncertain market information case. In a competitive or a new market, it is impossible to describe the market information as a specific stochastic distribution because of this uncertainty. To handle this problem, many researchers apply the fuzzy method, a common way to handle uncertainty and non-stochastic situations [22]. Some researchers have tried to apply the fuzzy method to replenishment models. Sadeghi et al. [23] considered the economic production quantity policy under the consignment stock policy in a fuzzy demand situation, and they applied particle swarm optimization to solve their model. Nia et al. [24] presented a fuzzy resource nonlinear integer programming model that regards customer demand, storage capacity, and the budget as fuzzy numbers. Sadeghi et al. [25] focused on a nonlinear integer programming model with a trapezoidal fuzzy number for demand. However, for handling uncertain resource constraints, no previous studies have applied the fuzzy method to GSCM which focuses on a multi-item replenishment problem. 
Notation Index: item, = 1, … , period, = 1, … , Decision variables: binary variable indicating the order during period inventory level of item at the end of period on-hand inventory of item at the end of period backorder level of item at the end of period order amount for item in period the order-up-to level of item in period if drops below , then = 1, otherwise = 0 if drops below , then = 1, otherwise = 0 if drops below and at least one item is ordered in period , then = 1, otherwise = 0 if minor setup is done for item during period t, then = 1, otherwise = 0 amount of buying carbon credit in period t amount of selling carbon credit in period t Parameters: major ordering cost in period ($/order) minor ordering cost of item in period ($/order) per unit backorder cost of item in period ($/unit) ℎ per period holding cost of item in period ($/unit) carbon tax ($/ton) carbon cap for entire planning horizon ℎ amount of carbon emissions when a buyer holds inventory of item in period amount of carbon emissions when a buyer orders inventory of item in period demand for item during period volume of item storage capacity during period purchase price of item amount of budget during period can-order level of item reorder level of item big M, very big number 1. A single buyer orders multiple items from a single supplier and simultaneously considers carbon cap-and-trade under limited storage capacity and budget. 2. The system considers a periodic review can-order policy to obtain the order-up to level. The supplier can utilize limited transportation, so the review period is dependent on the contract period between the buyer and the supplier. 3. Both the buyer and the supplier share the demand information of the items in real time. Thus, the supplier can deliver multiple items with no lead time. Also, the demand for each item is known. 4. The reorder level and the can-order level are assumed as constant. 5. The storage capacity and budget are assumed as constant in the deterministic model and fuzzy numbers in the fuzzy model. 6. The buyer's carbon emissions occur throughout the ordering item and holding inventory. A buyer has a carbon cap and could buy or sell its own carbon credit to other company depending on the carbon emissions. Problem Definition In this GSCM, a single buyer orders multiple items from a single supplier, considering storage capacity, budget, and carbon emissions. Because of limited market information, both the storage capacity and the budget could have some level of uncertainty. For developing multi-item replenishment, we apply a can-order policy to GSCM. Figure 1 compares the inventory patterns with the traditional can-order policy and the proposed can-order policy for two items. By the proposed can-order policy, the order-up-to level , where = − , is decided individually through the planning horizon. The inventory level of both items in period explains the initial inventory level. In period , only the inventory level of item 1 is lower than reorder level , so item 1 is replenished up to . In period , the inventory levels of items 1 and 2 are lower than the reorder level and can-order level , respectively, so both items are replenished up to and . In period , both items are replenished because both of their inventory levels are lower than the reorder level. In contrast, in the traditional can-order policy, when the inventory level of an item drops to or below the reorder level , an order is placed and the order amount can make the inventory level up to . 
Thus, the order-up-to level , where = − , is always the same through the planning horizon. Deterministic Model In this section we use MIP to develop a deterministic model for GSCM. The total cost consists of the major ordering cost, minor ordering cost, backorder cost, inventory holding cost, and carbon tax cost. The major ordering cost, incurred when an order is placed during period , is given by: (1) where = 0 or 1, = 1,2, ⋯ , . The minor ordering cost of each item occurs when item is ordered in period , and it cannot be incurred unless the major ordering cost is also incurred. It is given by: The holding cost can be derived as the sum of the order-up-to level and the on-hand inventory at the end of each period. At the beginning of each period, the order-up-to level is delivered, and that amount is decreed by the amount of on-hand inventory at the end of the preceding period. Thus, the expected inventory of each item in each period is approximated as the average of those amounts. The backorder cost of item at the end of period is: A buyer has its own carbon cap. If the buyer emits less carbon than the given carbon cap, it would get rewarded by receiving money. Otherwise, the buyer pays penalties in the form of a carbon tax [26]. The buyer's carbon cap cost at period is: Based on Equations (1)- (5), the following mathematical model can be developed: ≤ + + − , = , , ⋯ , , = , , ⋯ , , ≤ , = , , ⋯ , , = , , ⋯ , , ≤ , = , , ⋯ , , = , , ⋯ , , ≤ , = , , ⋯ , , = , , ⋯ , , ≤ , = , , ⋯ , , = , , ⋯ , , ≤ + , = , , ⋯ , , = , , ⋯ , , ≥ − + , = , , ⋯ , , = , , ⋯ , , Equation (6) presents the objective function that minimizes the total inventory holding cost, backorder cost, major ordering cost, and minor ordering cost. Equation (7) ensures that the inventory position at the end of period is equal to the difference between the order-up-to level and demand. Equation (8) ensures that the inventory at the end of period is equal to the difference between the on-hand inventory and the backorder level. Equation (9) presents the constraint of incurring the major ordering cost whenever at least one item is ordered. Equations (10) and (11) regulate ordering when the inventory position falls below the reorder level. If , the initial inventory level at , drops below , then = 1, otherwise = 0. According to Equations (12) and (13), if the inventory position at the end of the previous period is below the reorder level, the item is ordered in the amount equal to the difference between the order-up-to level and the inventory position at the end of the previous period. Equations (14) and (15) regulate whether the inventory position is below the canorder level. If , the initial inventory level at , drops below , then = 1, otherwise = 0. According to Equations (16)- (20), if the inventory position of at least one item is below its reorder point, the following order includes other items whose inventory positions are below their can-order level. According to Equations (21) and (22), if the inventory position at the end of the previous period is below the can-order level, that item is ordered in the amount equal to the difference between its order-up-to level and its inventory position at the end of the previous period. Equation (23) regulates the ordering of items whose inventory position is below the reorder or can-order level. Equations (24) and (25) indicate that the order-up-to level is set based on the inventory position, which is below the reorder level or the can-order level. 
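As an illustration of the can-order mechanics that the model above formalises, the following self-contained Python sketch simulates a periodic-review policy with reorder, can-order, and order-up-to levels: at each review, any item at or below its reorder level triggers a joint order, and every item at or below its can-order level is then raised to its order-up-to level. This is a simplified simulator of the policy logic, not the paper's MIP, and all parameter values and cost figures are illustrative.

```python
def simulate_can_order(demand, s, c, S, h, b, K, k):
    """Periodic-review can-order policy for multiple items.

    demand[t][i]: demand for item i in period t; s/c/S: reorder, can-order
    and order-up-to levels; h/b: per-unit holding/backorder costs;
    K: major ordering cost; k[i]: minor ordering cost of item i."""
    n = len(s)
    inv = list(S)                      # start fully replenished
    total_cost = 0.0
    for dem in demand:
        # review: a joint order is triggered if any item is at/below its reorder level
        if any(inv[i] <= s[i] for i in range(n)):
            total_cost += K
            for i in range(n):
                if inv[i] <= c[i]:     # items at/below their can-order level join the order
                    total_cost += k[i]
                    inv[i] = S[i]
        for i in range(n):
            inv[i] -= dem[i]           # demand realised; negative inventory = backorders
            total_cost += h[i] * max(inv[i], 0) + b[i] * max(-inv[i], 0)
    return total_cost

demand = [[40, 10], [55, 25], [30, 5], [70, 20]]
print(simulate_can_order(demand, s=[20, 8], c=[35, 15], S=[80, 30],
                         h=[1.0, 1.2], b=[5.0, 6.0], K=100.0, k=[10.0, 8.0]))
```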
Equation (26) indicates that any item below its reorder level or can-order level incurs the minor ordering cost. Equation (27) ensures the carbon cap-and-trade constraint. The carbon emissions are incurred from the ordering and holding activities. Based on the amount of emissions, the buyer could buy or sell the carbon credit from the participating companies [26]. Equation (28) ensures that the sum of the inventory position for each item at the beginning of each period and the order-up-to level will not exceed the storage capacity. Equation (29) ensures that the buyer cannot order items over its budget. Fuzzy Model We here develop a fuzzy model based on the deterministic model. A company that is in a competitive market or moves into a new market usually has less information about that market. In this case, to fit the information as a specific stochastic distribution is usually impossible. It is difficult to predetermine the amount of resources, such as limited storage capacity and/or budget. This uncertain market information can be handled by using the fuzzy method. To apply the fuzzy method to our deterministic model, which is also called a crisp model, the model must first be transformed into a fuzzy model. In order to obtain a crisp value from the fuzzy model, the fuzzy model should be converted to a new crisp model through a defuzzification process. Although many researchers have developed various defuzzification methods, we used the symmetry method introduced by Zimmermann [22]. The crisp objective function with fuzzy constraints, where ≤ and ≤ are a set of fuzzy and crisp constraints, respectively, is formulated as: Now, using a membership function for a fuzzy set, an element of is mapped to a value between 0 and 1. The membership function for the fuzzy sets that represent the fuzzy constraints can be defined as: In Equation (31), ( ) could be 1 when the constraints are well satisfied, otherwise 1. The is the tolerance interval which is assumed a constraint to be linearly increasing. Defining the membership function of the objective function requires solving the following two problems. The first problem is the original crisp model, and the optimal result is set as = ( ) = . ( ) = , In short, and are the minimum total cost with and without tolerances for resources, respectively. Based on that, the membership function of the objective function is obtained as: Based on Equations (30)-(34), both the objective function and the constraints have 'symmetry' such that the crisp model by the defuzzification process is transformed. As mentioned earlier, we considered the storage capacity and the budget as fuzzy numbers. The following table methodically shows the fuzzy constraints of Equations (28) Finally, the equivalent crisp model, transformed from the fuzzy model using the method of Zimmermann [22], is obtained: We denote , and , as the 'tolerance intervals' of , and , respectively. Numerical Experiments To validate our proposed two MIP models, we tested three kinds of numerical experiments. First, we conducted an efficiency test for the deterministic model by comparing it to the traditional can-order policy with predetermined values, ( , , ). Second, we compared the traditional canorder policy with the proposed can-order policy. In this experiment, we determined which the policy best fit with GSCM. Third, we tested the fuzzy model with various tolerances. In all the experiments we set the very large number at 1,000,000. To solve the MIP models, we used LINGO 17.0 software. 
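For reference, the symmetric (Zimmermann-style) defuzzification described in the fuzzy-model section above can be stated in the following standard form. The notation (Z_0, Z_1, p_i, λ) is ours, and the paper's exact Equations (30)–(38) may differ in detail.

```latex
% Fuzzy resource constraints A_i x \lesssim b_i with tolerances p_i; crisp constraints Dx \le d.
% Z_0: optimum of the crisp model (no tolerances); Z_1: optimum with fully relaxed
% resources (b_i + p_i). The equivalent crisp model maximises the satisfaction level \lambda:
\begin{align}
\max\ & \lambda \\
\text{s.t.}\ & Z(x) \le Z_0 - \lambda\,(Z_0 - Z_1), \\
             & A_i x \le b_i + (1-\lambda)\,p_i, \qquad i = 1,\dots,m, \\
             & D x \le d, \qquad 0 \le \lambda \le 1 .
\end{align}
```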
Efficiency Test To show the efficiency of the proposed can-order policy, we compared it to the traditional canorder policy with predetermined values. In the traditional can-order policy, a decision maker sets the values for ( , , ) based on previous data or their experience. Thus, this comparison is necessary to show whether the proposed deterministic model can reduce the total cost compared to the traditional can-order policy with predetermined values. Table 1 presents the basic input parameters for three items in 12 periods. The major ordering cost ( ) is $1000/order. It is difficult to apply constraints, such as storage capacity, budget, or carbon cap-and-trade, to it in the traditional can-order policy with predetermined values, so we did not consider those constraints in this test. Table 1. Input parameters for the basic test. Notes: ( * , * ): uniform distribution; ( * , * ): normal distribution Table 2 shows the initial inventory position, the reorder level, and the can-order level that we used during the test of the traditional can-order policy with predetermined values. The order-up-to level for the traditional can-order policy with predetermined values was assumed to be = 697, = 590, and = 184, the highest demand for each item during the 12 periods. We used Excel 2017 for calculating the traditional can-order policy with predetermined values. Table 2. Initial inventory position, can-order level, and reorder level. Table 3 shows that the proposed model led to a total cost of $757,120.50. To see the precise efficiency of the proposed model, we tested it against the traditional can-order policy with predetermined values. Otherwise, it is difficult to discover whether the order-up-to level is biased in predetermined values. We set the order-up-to level from 95% to 75% of the highest demand in 5% decrements. Table 4 shows the results of this test. As shown in Table 4, the set of order-up-to level = 662, = 561, and = 175 obtains the best result, $928,417.00. The total cost is increased with a lower order-up-to-level because the company cannot cope with the demand. The proposed model strongly outperforms the traditional can-order policy with predetermined values. This result shows that predetermined values, which are biased by a decision maker, rarely set the order-up-to level properly, which increases the total cost. Comparison Test between the Proposed can-order Policy and the Traditional can-order Policy To determine the better policy, we compared the proposed can-order policy to the traditional can-order policy. Based on the deterministic model, we developed a new model about the traditional can-order policy, which can obtain the same order-up-to level through the planning horizon. We conducted this test using an experimental design in which we varied the number of items and periods from 6 to 10 and 10 to 20, respectively. The major ordering cost was $1000/order and the initial inventory position was zero. The amount of carbon emissions for holding and ordering were assumed as 10% of each cost. Both carbon tax and carbon cap were assumed as $2 and $80,000, respectively. The storage capacity was set as 4,000 and the amount of budget was set as $80,000. Table 5 presents each item's input data. The demands for each item were generated using a normal distribution. The reorder level was set as 99% of the safety stock, and the can-order level was assumed as 20% above the reorder level. Table 6 shows the results of the comparison test. 
In all cases, the proposed can-order policy outperformed the traditional can-order policy because the proposed policy produced less total cost than the traditional policy. In some of the five-period tests the total costs are negative values, which is an interesting point. Those cases illustrate that a company sells carbon caps to other companies so that it receives profits. The result proves that the proposed policy benefits the company that replenishes multi-items under carbon cap-and-trade. The proposed policy could be efficacious in the large-scale multi-item replenishment problem in GSCM. Thus, based on these discussions, the proposed model is promising for practical applications. Fuzzy Model Test We also tested the fuzzy model to demonstrate the effects of changing the tolerances. To initialize and solve the model in Equation (38), we first solved the two models in Equations (33) and (35) such that we could obtain the values of and . We considered three items during 12 periods and used the parameters in Section 6.2. Table 7 shows the tolerances for the parameters of the storage capacity and the budget. The base values of the storage capacity and the budget were 1000 m and $8000, respectively. Table 8 also shows the value of which is the total cost of the deterministic model, and the result of the fuzzy model. Table 8 shows that all the results of the fuzzy model test produced better results than the deterministic model, $1,400,277. Two factors explain this. First, the uncertain market information on storage capacity and budget might lead to large on-hand inventories to avoid large backorders. Second, the strict constraints of storage capacity and budget in the deterministic model are strongly fixed. Based on the value of λ, the decision maker can obtain the values of the storage capacity and budget, being (1 − λ) , and (1 − λ) , , respectively. This test illustrates that the fuzzy model not only handles the uncertainty but also improves the system performance. Thus, the fuzzy model is a good option for a decision maker when the market information has uncertainty. Academic Insights We developed two MIP models dealing with the periodic can-order policy for GSCM with limited storage capacity, limited budget, and carbon cap-and-trade. We also developed a deterministic model under certain (known) market information, based on which we suggested a fuzzy model that considers the fuzzy numbers of storage capacity and budget. This is the first study to develop both deterministic and fuzzy models of the can-order policy under carbon cap-and-trade for GSCM. Thus, our study can be considered initial research which considers multi-item replenishment with carbon cap-and-trade for GSCM. Managerial Insights A company interested in GSCM could benefit from our study. The deterministic and fuzzy models developed here can help a company systematically replenish multi-items with carbon capand-trade regulation. The deterministic model suggests practical insights on multi-item replenishment for minimizing the total cost under limited resources and carbon cap-and-trade regulation. It can also help a company in a monopoly market make sound investment decisions. For a company that has uncertain market information, the fuzzy model can support preparation of appropriate storage capacity and budget, and also planning for cost minimization. Thus, the correct implementation of these models will give a company better decisions in managing GSCM and reducing its total cost. 
Environmental Insights

This paper incorporates carbon cap-and-trade regulation into GSCM. Under carbon cap-and-trade regulation, companies can use resources more efficiently and grow together. This sustainable situation is addressed in the 2030 Agenda and in Sustainable Development Goal 9 (SDG 9): industry, innovation, and infrastructure. Thus, by applying this work, a company can help to protect the environment while earning profits.

Conclusions

This paper presents two MIP models, a deterministic model and a fuzzy model, which address the multi-item replenishment problem with carbon cap-and-trade for GSCM under limited resources. We developed the two models based on the can-order policy, which is one of the well-known multi-item replenishment policies. Reflecting real-world situations, we considered limited storage capacity, a limited budget, and carbon cap-and-trade regulation. The deterministic model can be used when a decision maker has solid market information, while the fuzzy model can be applied when a decision maker faces uncertain market information in a competitive or a new market. In the fuzzy model, both the limited storage capacity and the budget are denoted as fuzzy numbers.

We carried out three experiments to test the efficiency of the two models. First, we compared our deterministic model with the traditional can-order policy, which uses predetermined reorder, can-order, and order-up-to levels, and showed that our deterministic model was significantly better because it resulted in lower total costs. In the second experiment, using our deterministic model, we compared the proposed can-order policy with the traditional can-order policy. The result showed that the proposed can-order policy outperformed the traditional can-order policy by, again, resulting in lower total costs. Finally, we quantified the effects of the fuzzy model with various tolerances. The results showed that applying fuzzy constraints is useful for making decisions in uncertain situations. We demonstrated the validity and practicality of our models in those experiments, confirming that they can be useful for multi-item replenishment in GSCM.

There are some research limitations in this study, and also some indications for possible future work. First, this paper considered only a single supplier and a single buyer. In the real world there are many suppliers, so selecting one among several is an important decision. Our current models could be extended to a GSCM setting with this supplier-selection problem. Also, this paper assumed deterministic demand. Future work on replenishment planning could incorporate demand forecasting.
2020-06-18T09:07:50.040Z
2020-06-15T00:00:00.000
{ "year": 2020, "sha1": "abb71ad446aa460d4562d0787405bedfc3b92620", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2071-1050/12/12/4877/pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "a42bf336e4e4bfc68a7257405f30c551bc094c70", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Computer Science" ] }
215790670
pes2o/s2orc
v3-fos-license
The mutational constraint spectrum quantified from variation in 141,456 humans

Genetic variants that inactivate protein-coding genes are a powerful source of information about the phenotypic consequences of gene disruption: genes that are crucial for the function of an organism will be depleted of such variants in natural populations, whereas non-essential genes will tolerate their accumulation. However, predicted loss-of-function variants are enriched for annotation errors, and tend to be found at extremely low frequencies, so their analysis requires careful variant annotation and very large sample sizes 1 . Here we describe the aggregation of 125,748 exomes and 15,708 genomes from human sequencing studies into the Genome Aggregation Database (gnomAD). We identify 443,769 high-confidence predicted loss-of-function variants in this cohort after filtering for artefacts caused by sequencing and annotation errors. Using an improved model of human mutation rates, we classify human protein-coding genes along a spectrum that represents tolerance to inactivation, validate this classification using data from model organisms and engineered human cells, and show that it can be used to improve the power of gene discovery for both common and rare diseases.

The physiological function of most genes in the human genome remains unknown. In biology, as in many engineering and scientific fields, breaking the individual components of a complex system can provide valuable insight into the structure and behaviour of that system. For the discovery of gene function, a common approach is to introduce disruptive mutations into genes and determine their effects on cellular and physiological phenotypes in mutant organisms or cell lines 2 . Such studies have yielded valuable insight into eukaryotic physiology and have guided the design of therapeutic agents 3 . However, although studies in model organisms and human cell lines have been crucial in deciphering the function of many human genes, they remain imperfect proxies for human physiology. Obvious ethical and technical constraints prevent the large-scale engineering of loss-of-function mutations in humans.
However, recent exome and genome sequencing projects have revealed a surprisingly high burden of natural pLoF variation in the human population, including stop-gained, essential splice, and frameshift variants 1,4 , which can serve as natural models for inactivation of human genes. Such variants have already revealed much about human biology and disease mechanisms, through many decades of study of the genetic basis of severe Mendelian diseases 5 , most of which are driven by disruptive variants in either the heterozygous or homozygous state. These variants have also proved valuable in identifying potential therapeutic targets: confirmed LoF variants in the PCSK9 gene have been causally linked to low levels of low-density lipoprotein cholesterol 6 , and have ultimately led to the development of several inhibitors of PCSK9 that are now in clinical use for the reduction of cardiovascular disease risk. A systematic catalogue of pLoF variants in humans and the classification of genes along a spectrum of tolerance to inactivation would provide a valuable resource for medical genetics, identifying candidate disease-causing mutations, potential therapeutic targets, and windows into the normal function of many currently uncharacterized human genes. Several challenges arise when assessing LoF variants at scale. LoF variants are on average deleterious, and are thus typically maintained at very low frequencies in the human population. Systematic genome-wide discovery of these variants requires whole-exome or whole-genome sequencing of very large numbers of samples. In addition, LoF variants are enriched for false positives compared with synonymous or other benign variants, including mapping, genotyping (including somatic variation), and particularly, annotation errors 1 , and careful filtering is required to remove such artefacts. Population surveys of coding variation enable the evaluation of the strength of natural selection at a gene or region level. As natural selection purges deleterious variants from human populations, methods to detect selection have modelled the reduction in variation (constraint) 7 or shift in the allele frequency distribution 8 , compared to an expectation. For analyses of selection on coding variation, synonymous variation provides a convenient baseline, controlling for other potential population genetic forces that may influence the amount of variation as well as technical features of the local sequence. A model of constraint was previously applied to define a set of 3,230 genes with a high probability of intolerance to heterozygous pLoF variation (pLI) 4 and estimated the selection coefficient for variants in these genes 9 . However, the ability to comprehensively characterize the degree of selection against pLoF variants is particularly limited, as for small genes, the expected number of mutations is still very low, even for samples of up to 60,000 individuals 4,10 . Furthermore, the previous dichotomization of pLI, although convenient for the characterization of a set of genes, disguises variability in the degree of selective pressure against a given class of variation and overlooks more subtle levels of intolerance to pLoF variation. With larger sample sizes, a more accurate quantitative measure of selective pressure is possible. 
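As a toy illustration of the constraint idea described here (comparing observed pLoF counts against a mutational expectation), the following sketch computes a per-gene observed/expected ratio and a simple Poisson probability of observing so few variants under neutrality. This is a deliberate simplification of our own and assumes SciPy; it is not the actual gnomAD constraint model.

```python
from scipy.stats import poisson

def lof_constraint(observed, expected):
    """Toy per-gene constraint summary: observed/expected pLoF ratio and the
    Poisson probability of observing at most this many variants if the gene
    tolerated pLoF variation at the neutral (expected) rate."""
    oe = observed / expected
    p_depleted = poisson.cdf(observed, expected)
    return oe, p_depleted

oe, p = lof_constraint(observed=2, expected=25.0)
print(f"o/e = {oe:.2f}, P(X <= 2 | neutral) = {p:.1e}")
# o/e = 0.08 with a very small Poisson tail probability -> strong depletion
```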
Here, we describe the detection of pLoF variants in a cohort of 125,748 individuals with whole-exome sequence data and 15,708 individuals with whole-genome sequence data, as part of the Genome Aggregation Database (gnomAD; https://gnomad.broadinstitute.org), the successor to the Exome Aggregation Consortium (ExAC). We develop a continuous measure of intolerance to pLoF variation, which places each gene on a spectrum of LoF intolerance. We validate this metric by comparing its distribution to several orthogonal indicators of constraint, including the incidence of structural variation and the essentiality of genes as measured using mouse gene knockout experiments and cellular inactivation assays. Finally, we demonstrate that this metric improves the interpretation of genetic variants that influence rare disease and provides insight into common disease biology. These analyses provide, to our knowledge, the most comprehensive catalogue so far of the sensitivity of human genes to disruption. In a series of accompanying manuscripts, other complementary analyses of this dataset are described. Using an overlapping set of 14,237 whole genomes, the discovery and characterization of a wide variety of structural variants (large deletions, duplications, insertions, or other rearrangements of DNA) is reported 11 . The value of pLoF variants for the discovery and validation of therapeutic drug targets is explored 12 , and a case study of the use of these variants from gnomAD and other large reference datasets is provided to validate the safety of inhibition of LRRK2-a candidate therapeutic target for Parkinson's disease 13 . By combining the gnomAD dataset with a large collection of RNA sequencing data from adult human tissues 14 , the value of tissue expression data in the interpretation of genetic variation across a range of human diseases is reported 15 . Finally, the effect of two understudied classes of human variation-multi-nucleotide variants 16 and variants that create or disrupt open-reading frames in the 5′ untranslated region of human genes-is characterized and investigated 17 . A high-quality catalogue of variation We aggregated whole-exome sequencing data from 199,558 individuals and whole-genome sequencing data from 20,314 individuals. These data were obtained primarily from case-control studies of common adult-onset diseases, including cardiovascular disease, type 2 diabetes and psychiatric disorders. Each dataset, totalling more than 1.3 and 1.6 petabytes of raw sequencing data, respectively, was uniformly processed, joint variant calling was performed on each dataset using a standardized BWA-Picard-GATK pipeline 18 , and all data processing and analysis was performed using Hail 19 . We performed stringent sample quality control (Extended Data Fig. 1), removing samples with lower sequencing quality by a variety of metrics, samples from second-degree or closer related individuals across both data types, samples with inadequate consent for the release of aggregate data, and samples from individuals known to have a severe childhood-onset disease as well as their first-degree relatives. The final gnomAD release contains genetic variation from 125,748 exomes and 15,708 genomes from unique unrelated individuals with high-quality sequence data, spanning 6 global and 8 sub-continental ancestries (Fig. 1a, b), which we have made publicly available at https://gnomad.broadinstitute.org. 
We also provide subsets of the gnomAD datasets, which exclude individuals who are cases in case-control studies, or who are cases of a few particular disease types such as cancer and neurological disorders, or who are also aggregated in the Bravo TOPMed variant browser (https://bravo.sph.umich.edu). Among these individuals, we discovered 17.2 million and 261.9 million variants in the exome and genome datasets, respectively; these variants were filtered using a custom random forest process (Supplementary Information) to 14.9 million and 229.9 million high-quality variants. Comparing our variant calls in two samples for which we had independent gold-standard variant calls, we found that our filtering achieves very high precision (more than 99% for single nucleotide variants (SNVs), over 98.5% for indels in both exomes and genomes) and recall (over 90% for SNVs and more than 82% for indels for both exomes and genomes) at the single sample level (Extended Data Fig. 2). In addition, we leveraged data from 4,568 and 212 trios included in our exome and genome call-sets, respectively, to assess the quality of our rare variants. We found that our model retains over 97.8% of the transmitted singletons (singletons in the unrelated individuals that are transmitted to an offspring) on chromosome 20 (which was not used for model training) (Extended Data Fig. 3a-d). In addition, the number of putative de novo calls after filtering are in line with expectations 20 (Extended Data Fig. 3e-h), and our model had a recall of 97.3% for de novo SNVs and 98% for de novo indels based on 375 independently validated de novo variants in our whole-exome trios (295 SNVs and 80 indels) (Extended Data Fig. 3i, j). Altogether, these results indicate that our filtering strategy produced a call-set with high precision and recall for both common and rare variants. These variants reflect the expected patterns based on mutation and selection: we observe 84.9% of all possible consistently methylated CpG-to-TpG transitions that would create synonymous variants in the human exome (Supplementary Table 14), which indicates that at this sample size, we are beginning to approach mutational saturation of this highly mutable and weakly negatively selected variant class. However, we only observe 52% of methylated CpG stop-gained variants, which illustrates the action of natural selection removing a substantial fraction of gene-disrupting variants from the population (Fig. 1c-h). Across all mutational contexts, only 11.5% and 3.7% of the possible synonymous and stop-gained variants, respectively, are observed in the exome dataset, which indicates that current sample sizes remain far from capturing complete mutational saturation of the human exome (Extended Data Fig. 4). Identifying loss-of-function variants Some LoF variants will result in embryonic lethality in humans in a heterozygous state, whereas others are benign even at homozygosity, with a wide spectrum of effects in between. Throughout this manuscript, we define pLoF variants to be those that introduce a premature stop (stop-gained), shift-reported transcriptional frame (frameshift), or alter the two essential splice-site nucleotides immediately to the left and right of each exon (splice) found in protein-coding transcripts, and ascertain their presence in the cohort of 125,748 individuals with exome sequence data. 
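The pLoF definition above amounts to a simple filter on annotated variant consequences. The sketch below illustrates the idea in Python using standard Sequence Ontology consequence terms and a few hypothetical variant records; it is an illustration only, not the gnomAD/Hail pipeline itself.

# Consequence classes treated as pLoF in the definition above: stop-gained,
# frameshift, and the two essential splice-site positions flanking each exon.
PLOF_CONSEQUENCES = {
    "stop_gained",
    "frameshift_variant",
    "splice_acceptor_variant",
    "splice_donor_variant",
}

def is_plof(consequence_terms):
    """Return True if any annotated consequence falls in a pLoF class."""
    return bool(PLOF_CONSEQUENCES & set(consequence_terms))

# Hypothetical annotated variants (gene symbol, consequence terms).
variants = [
    {"gene": "PCSK9", "consequences": ["stop_gained"]},
    {"gene": "LRRK2", "consequences": ["missense_variant"]},
    {"gene": "TTN", "consequences": ["splice_donor_variant", "intron_variant"]},
]
print([v["gene"] for v in variants if is_plof(v["consequences"])])  # ['PCSK9', 'TTN']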
As these variants are enriched for annotation artefacts 1 , we developed the loss-of-function transcript effect estimator (LOFTEE) package, which applies stringent filtering criteria from first principles (such as removing terminal truncation variants, as well as rescued splice variants, that are predicted to escape nonsense-mediated decay) to pLoF variants annotated by the variant effect predictor (Extended Data Fig. 5a). Despite not using frequency information, we find that this method disproportionately removes pLoF variants that are common in the population, which are known to be enriched for annotation errors 1 , while retaining rare, probably deleterious variation, as well as reported pathogenic variation (Fig. 2a). LOFTEE distinguishes high-confidence pLoF variants from annotation artefacts, and identifies a set of putative splice variants outside the essential splice site. The filtering strategy of LOFTEE is conservative in the interest of increasing specificity, filtering some potentially functional variants that display a frequency spectrum consistent with that of missense variation (Fig. 2b). Applying LOFTEE v1.0, we discover 443,769 high-confidence pLoF variants, of which 413,097 fall on the canonical transcripts of 16,694 genes. The number of pLoF variants per individual is consistent with previous reports 1 , and is highly dependent on the frequency filters chosen (Supplementary Table 17). Aggregating across variants, we created a gene-level pLoF frequency metric to estimate the proportion of haplotypes that contain an inactive copy of each gene. [Fig. 1 legend, continued: lower-frequency variants indicate increased deleteriousness. e, f, The proportion of possible variants observed for each functional class and mutational type for exomes (e) and genomes (f); CpG transitions are more saturated, except where selection (for example, pLoFs) or hypomethylation (5′ untranslated region) decreases the number of observations. g, h, The total number of variants observed in each functional class for exomes (g) and genomes (h). Error bars in c-f represent 95% confidence intervals (note that in some cases these are fully contained within the plotted point).] We find that 1,555 genes have an aggregate pLoF frequency of at least 0.1% across all individuals in the dataset (Extended Data Fig. 5c), and 3,270 genes have an aggregate pLoF frequency of at least 0.1% in any one population. Furthermore, we characterized the landscape of genic tolerance to homozygous inactivation, identifying 4,332 pLoF variants that are homozygous in at least one individual. Given the rarity of true homozygous LoF variants, we expected substantial enrichment of such variants for sequencing and annotation errors, and we subjected this set to additional filtering and deep manual curation before defining a set of 1,815 genes (2,636 high-confidence variants) that are likely to be tolerant to biallelic inactivation (Supplementary Data 7). The LoF intolerance of human genes Just as a preponderance of pLoF variants is useful for identifying LoF-tolerant genes, we can conversely characterize the intolerance of a gene to inactivation by identifying marked depletions of predicted LoF variation 4,7 . Here, we present a refined mutational model, which incorporates methylation, base-level coverage correction, and LOFTEE (Supplementary Information, Extended Data Fig. 6), to predict expected levels of variation under neutrality.
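One way to read the gene-level pLoF frequency metric described above is as the estimated proportion of haplotypes carrying at least one high-confidence pLoF allele. A minimal sketch, assuming the variants segregate independently (the released gnomAD aggregation may differ in detail); the allele frequencies below are hypothetical.

from functools import reduce

def aggregate_plof_frequency(allele_freqs):
    """Estimated fraction of haplotypes with at least one pLoF allele,
    assuming independent segregation of the variants."""
    return 1.0 - reduce(lambda acc, af: acc * (1.0 - af), allele_freqs, 1.0)

# Hypothetical high-confidence pLoF allele frequencies for one gene.
freqs = [4e-4, 1.2e-4, 5e-5]
print(f"aggregate pLoF frequency: {aggregate_plof_frequency(freqs):.5f}")
# A gene would pass the 0.1% threshold mentioned above if this value >= 1e-3.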
Under this updated model, the variation in the number of synonymous variants observed is accurately captured (r = 0.979). We then applied this method to detect depletion of pLoF variation by comparing the number of observed pLoF variants against our expectation in the gnomAD exome data from 125,748 individuals, more than doubling the sample size of ExAC, the previously largest exome collection 4 . For this dataset, we computed a median of 17.9 expected pLoF variants per gene (Fig. 2c) and found that 72.1% of genes have more than 10 pLoF variants expected on the canonical transcript (powered to be classified into the most constrained genes) (Supplementary Information) (Fig. 2d), an increase from 13.2% and 62.8%, respectively, in ExAC. The smaller sample size in ExAC required a transformation of the observed and expected values for the number of pLoF variants in each gene into the pLI: this metric estimates the probability that a gene falls into the class of LoF-haploinsufficient genes (approximately 10% observed/expected variation) and is ideally used as a dichotomous metric (producing 3,230 genes with pLI > 0.9). Here, our refined model and substantially increased sample size enabled us to directly assess the degree of intolerance to pLoF variation in each gene using the continuous metric of the observed/expected ratio and to estimate a confidence interval around the ratio. We find that the median observed/expected ratio is 48%, which indicates that, as noted previously, most genes exhibit at least moderate selection against pLoF variation, and that the distribution of the observed/expected ratio is not dichotomous, but continuous (Extended Data Fig. 7a). [Fig. 2 legend, continued: a, b, The frequency spectrum (as in Fig. 1c, d) is shown by LOFTEE designation and filter; variants filtered out by LOFTEE exhibit frequency spectra that are similar to those of missense variants, predicted splice variants outside the essential splice site are more rare, and high-confidence variants are very likely to be singletons; only SNVs with at least 80% call rate are included, and error bars represent 95% confidence intervals. c, d, The total number of pLoF variants (c), and proportion of genes with more than ten pLoF variants (d), observed and expected (in the absence of selection) as a function of sample size (downsampled from gnomAD); selection reduces the number of variants observed, and variant discovery approximately follows a square-root relationship with the number of samples; at current sample sizes, more than 10 pLoF variants would be expected for 72.1% of genes in the absence of selection.] For downstream analyses, unless otherwise specified, we use the 90% upper bound of this confidence interval, which we term the loss-of-function observed/expected upper bound fraction (LOEUF) (Extended Data Fig. 7b, c), and bin 19,197 genes into deciles of approximately 1,920 genes each. At current sample sizes, this metric enables the quantitative assessment of constraint with a built-in confidence value, and distinguishes small genes (for example, those with observed = 0, expected = 2; LOEUF = 1.34) from large genes (for example, observed = 0, expected = 100; LOEUF = 0.03), while retaining the continuous properties of the direct estimate of the ratio (Supplementary Information). At one extreme of the distribution, we observe genes with a very strong depletion of pLoF variation (first LOEUF decile aggregate observed/expected approximately 6%) (Extended Data Fig. 7e), including genes previously characterized as high pLI (Extended Data Fig. 7f).
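The LOEUF construction described above can be sketched by treating the observed pLoF count as Poisson-distributed with rate oe x expected, normalising the likelihood over a grid of candidate oe values, and taking the 95th percentile as the upper bound of a 90% interval. The toy implementation below reproduces the two example values quoted above to within grid resolution, but it is only an illustration; the released LOEUF scores come from the gnomAD constraint pipeline.

import numpy as np
from scipy.stats import poisson

def loeuf(observed, expected, grid_max=2.0, step=0.001, upper_q=0.95):
    """Upper bound of a 90% interval on the observed/expected ratio (sketch)."""
    oe_grid = np.arange(0.0, grid_max + step, step)
    likelihood = poisson.pmf(observed, oe_grid * expected)   # P(obs | oe * expected)
    cdf = np.cumsum(likelihood) / likelihood.sum()           # normalised over the grid
    return oe_grid[np.searchsorted(cdf, upper_q)]

print(round(loeuf(0, 2), 2))    # ~1.35: small gene, weak evidence of constraint
print(round(loeuf(0, 100), 2))  # ~0.03: large gene, confidently depleted of pLoFs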
By contrast, we find unconstrained genes that are relatively tolerant of inactivation, including many that contain homozygous pLoF variants (Extended Data Fig. 7g). We note that the use of the upper bound means that LOEUF is a conservative metric in one direction: genes with low LOEUF scores are confidently depleted for pLoF variation, whereas genes with high LOEUF scores are a mixture of genes without depletion, and genes that are too small to obtain a precise estimate of the observed/expected ratio. In general, however, the scale of gnomAD means that gene length is rarely a substantive confounder for the analyses described here, and all downstream analyses are adjusted for the length of the coding sequence or filtered to genes with at least ten expected pLoFs (Supplementary Information). Validation of the LoF-intolerance score The LOEUF metric allows us to place each gene along a continuous spectrum of tolerance to inactivation. We examined the correlation of this metric with several independent measures of genic sensitivity to disruption. First, we found that LOEUF is consistent with the expected behaviour of well-established gene sets: known haploinsufficient genes are strongly depleted of pLoF variation, whereas olfactory receptors are relatively unconstrained, and genes with a known autosomal recessive mechanism, for which selection against heterozygous disruptive variants tends to be present but weak 9 , fall in the middle of the distribution (Fig. 3a). In addition, LOEUF is positively correlated with the occurrence of 6,735 rare autosomal deletion structural variants overlapping protein-coding exons identified in a subset of 6,749 individuals with whole-genome sequencing data in this manuscript 11 (r = 0.13; P = 9.8 × 10 −68 ) (Fig. 3b). Biological properties of constraint We investigated the properties of genes and transcripts as a function of their tolerance to pLoF variation (LOEUF). First, we found that LOEUF correlates with the degree of connection of a gene in protein-interaction networks (r = −0.14; P = 1.7 × 10 −51 after adjusting for gene length) (Fig. 4a) and functional characterization (Extended Data Fig. 8a). In addition, constrained genes are more likely to be ubiquitously expressed across 38 tissues in the Genotype-Tissue Expression (GTEx) project (Fig. 4b) (LOEUF r = −0.31; P < 1 × 10 −100 ) and have higher expression on average (LOEUF ρ = −0.28; P < 1 × 10 −100 ), consistent with previous results 4 . Although most results in this study are reported at the gene level, we have also extended our framework to compute LOEUF for all protein-coding transcripts, allowing us to explore the extent of differential constraint of transcripts within a given gene. In cases in which a gene contained transcripts with varying levels of constraint, we found that transcripts in the first LOEUF decile were more likely to be expressed across tissues than others in the same gene (n = 1,740 genes), even when adjusted for transcript length (Fig. 4c) (constrained transcripts are on average 6.34 transcripts per million higher; P = 2.2 × 10 −14 ). Furthermore, we found that the most constrained transcript for each gene was typically the most highly expressed transcript in tissues with disease relevance 24 (Extended Data Fig. 8c), which supports the need for transcript-based variant interpretation, as explored in more depth in an accompanying manuscript 15 . 
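Several of the comparisons above are reported after adjusting for gene length. One simple way to perform such an adjustment is a partial correlation, in which gene length is regressed out of both variables before correlating the residuals; the sketch below uses synthetic per-gene values and is not the authors' exact procedure.

import numpy as np
from scipy import stats

def length_adjusted_correlation(x, y, length):
    """Pearson correlation of x and y after regressing gene length out of both."""
    def residualise(v, covariate):
        slope, intercept, *_ = stats.linregress(covariate, v)
        return v - (intercept + slope * covariate)
    return stats.pearsonr(residualise(x, length), residualise(y, length))

# Synthetic per-gene values: coding-sequence length, LOEUF, and network degree.
rng = np.random.default_rng(0)
cds_length = rng.gamma(2.0, 800.0, size=500)
loeuf = np.clip(1.0 - 1e-4 * cds_length + rng.normal(0, 0.2, 500), 0, 2)
degree = 5 + 0.002 * cds_length + rng.normal(0, 3, 500)

r, p = length_adjusted_correlation(loeuf, degree, cds_length)
print(f"length-adjusted r = {r:.2f}, p = {p:.2g}")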
Finally, we investigated potential differences in LOEUF across human populations, restricting to the same sample size across all populations to remove bias due to differential power for variant discovery. As the smallest population in our exome dataset (African/African American) has only 8,128 individuals, our ability to detect constraint against pLoF variants for individual genes is limited. However, for well-powered genes (expected pLoF ≥ 10) (Supplementary Information), we observed a lower mean observed/expected ratio and LOEUF across genes among African/African American individuals, a population with a larger effective population size, compared with other populations (Extended Data Fig. 8d, e), consistent with the increased efficiency of selection in populations with larger effective population sizes 25,26 . Constraint informs disease aetiologies The LOEUF metric can be applied to improve molecular diagnosis and advance our understanding of disease mechanisms. Disease-associated genes, discovered by different technologies over the course of many years across all categories of inheritance and effects, span the entire spectrum of LoF tolerance (Extended Data Fig. 9a). However, in recent years, high-throughput sequencing technologies have enabled the identification of highly deleterious variants that are de novo or only inherited in small families or trios, leading to the discovery of novel disease genes under extreme constraint against pLoF variation that could not have been identified by linkage approaches that rely on broadly inherited variation (Extended Data Fig. 9b). This result is consistent with a recent analysis that shows a post-whole-exome/whole-genome sequencing era enrichment for gene-disease relationships attributable to de novo variants 27 . Rare variants, which are more likely to be deleterious, are expected to exhibit stronger effects on average in constrained genes (previously shown using pLI from ExAC 28 ), with an effect size related to the severity and reproductive fitness of the phenotype. In an independent cohort of 5,305 individuals with intellectual disability or developmental disorders and 2,179 controls, the rate of pLoF de novo variation in cases is 15-fold higher in genes belonging to the most constrained LOEUF decile, compared with controls (Fig. 5a), with a slightly increased rate (2.9-fold) in the second highest decile but not in others. A similar, but attenuated enrichment (4.4-fold in the most constrained decile) is seen for de novo variants in 6,430 patients with autism spectrum disorder (Extended Data Fig. 9c). Furthermore, in burden tests of rare variants (allele count across both cases and controls = 1) of patients with schizophrenia 28 , we find a significantly higher odds ratio in constrained genes (Extended Data Fig. 9d). Finally, although pLoF variants are predominantly rare, other more common variation in constrained genes may also be deleterious, including the effects of other coding or regulatory variants. In a heritability partitioning analysis of association results for 658 traits in the UK Biobank and other large-scale genome-wide association study (GWAS) efforts, we find an enrichment of common variant associations near genes that is linearly related to LOEUF decile across numerous traits (Fig. 5b). Schizophrenia and educational attainment are the most enriched traits (Fig. 5c), consistent with previous observations in associations between rare pLoF variants and these phenotypes [29][30][31] . 
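Before moving on to the common-variant heritability analyses, note that the case-control comparisons of de novo pLoF rates quoted above reduce to a simple rate ratio. A minimal sketch with hypothetical variant counts (the cohort sizes are those mentioned in the text, but the counts are invented for illustration), including a rough normal-approximation confidence interval:

import numpy as np

def de_novo_rate_ratio(case_count, n_cases, control_count, n_controls):
    """Rate ratio of de novo variants per individual (cases vs controls),
    with an approximate 95% CI on the log rate ratio."""
    rr = (case_count / n_cases) / (control_count / n_controls)
    se_log = np.sqrt(1.0 / case_count + 1.0 / control_count)
    lo, hi = np.exp(np.log(rr) + np.array([-1.96, 1.96]) * se_log)
    return rr, lo, hi

rr, lo, hi = de_novo_rate_ratio(case_count=300, n_cases=5305,
                                control_count=8, n_controls=2179)
print(f"rate ratio = {rr:.1f} (95% CI {lo:.1f}-{hi:.1f})")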
This enrichment persists even when accounting for gene size, expression in GTEx brain samples, and previously tested annotations of functional regions and evolutionary conservation, and suggests that some heritable polygenic diseases and traits, particularly cognitive or psychiatric ones, have an underlying genetic architecture that is driven substantially by constrained genes (Extended Data Fig. 10). Discussion In this paper and accompanying publications, we present the largest, to our knowledge, catalogue of harmonized variant data from any species so far, incorporating exome or genome sequence data from more than 140,000 humans. The gnomAD dataset of over 270 million variants is publicly available (https://gnomad.broadinstitute.org), and has already been widely used as a resource for estimates of allele frequency in the context of rare disease diagnosis (for a recent review, see Eilbeck et al. 32 ), improving power for disease gene discovery [33][34][35] , estimating genetic disease frequencies 36,37 , and exploring the biological effect of genetic variation 38,39 . Here, we describe the application of this dataset to calculate a continuous metric that describes a spectrum of tolerance to pLoF variation for each protein-coding gene in the human genome. We validate this method using known gene sets and data from model organisms, and explore the value of this metric for investigating human gene function and discovery of disease genes. We have focused on high-confidence, high-impact pLoF variants, calibrating our analysis to be highly specific to compensate for the increased false-positive rate among deleterious variants. However, some additional error modes may still exist, and indeed, several recent experiments have proposed uncharacterized mechanisms for escape from nonsense-mediated mRNA decay 40,41 . Furthermore, such a stringent approach will remove some true positives. For example, terminal truncations that are removed by LOFTEE may still exert a LoF mechanism through the removal of crucial C-terminal domains, despite the escape of the gene from nonsense-mediated decay. In addition, current annotation tools are incapable of detecting all classes of LoF variation and typically miss, for instance, missense variants that inactivate specific gene functions, as well as high-impact variants in regulatory regions. Future work will benefit from the increasing availability of high-throughput experimental assays that can assess the functional effect of all possible coding variants in a target gene 42 , although scaling these experimental assays to all protein-coding genes represents a huge challenge. Identifying constraint in individual regulatory elements outside coding regions will be even more challenging, and require much larger sample sizes of whole genomes as well as improved functional annotation 43 . We discuss one class of high-impact regulatory variants in a companion manuscript 17 , but many remain to be fully characterized. Although the gnomAD dataset is of unprecedented scale, it has important limitations. At this sample size, we remain far from saturating all possible pLoF variants in the human exome; even at the most mutable sites in the genome (methylated CpG dinucleotides), we observe only half of all possible stop-gained variants.
A substantial fraction of the remaining variants are likely to be heterozygous lethal, whereas others will exhibit an intermediate selection coefficient; much larger sample sizes (in the millions to hundreds of millions of individuals) will be required for comprehensive characterization of selection against all individual LoF variants in the human genome. Such future studies would also benefit substantially from increased ancestral diversity beyond the European-centric sampling of many current studies, which would provide opportunities to observe very rare and population-specific variation, as well as increase power to explore population differences in gene constraint. In particular, current reference databases including gnomAD have a near-complete absence of representation from the Middle East, central and southeast Asia, Oceania, and the vast majority of the African continent 44 , and these gaps must be addressed if we are to fully understand the distribution and effect of human genetic variation. It is also important to understand the practical and evolutionary interpretation of pLoF constraint. In particular, it should be noted that these metrics primarily identify genes undergoing selection against heterozygous variation, rather than strong constraint against homozygous variation 45 . In addition, the power of the LOEUF metric is affected by gene length, with approximately 30% of the coding genes in the genome still insufficiently powered for detection of constraint even at the scale of gnomAD (Fig. 2d). Substantially larger sample sizes and careful analysis of individuals enriched for homozygous pLoFs (see below) will be useful for distinguishing these possibilities. Furthermore, selection is largely blind to phenotypes emerging after reproductive age, and thus genes with phenotypes that manifest later in life, even if severe or fatal, may exhibit much weaker intolerance to inactivation. [Fig. 5 legend, continued: a, pLoF variants in the most constrained decile of the genome are approximately 11-fold more likely to be found in cases compared to controls; error bars represent 95% confidence intervals. b, Marginal enrichment in per-SNV heritability explained by common (minor allele frequency > 5%) variants within 100 kb of genes in each LOEUF decile, estimated by linkage disequilibrium (LD) score regression 48 ; enrichment is compared to the average SNV genome-wide; the results reported here are from a random effects meta-analysis of 276 independent traits (subsetted from the 658 traits with UK Biobank or large-scale consortium GWAS results); error bars represent 95% confidence intervals. c, Conditional enrichment in per-SNV common variant heritability tested using linkage disequilibrium score regression in each of 658 common disease and trait GWAS results; P values evaluate whether per-SNV heritability is proportional to the LOEUF of the nearest gene, conditional on 75 existing functional, linkage disequilibrium, and minor-allele-frequency-related genomic annotations; colours alternate by broad phenotype category.] Despite these caveats, our results demonstrate that pLoF constraint divides protein-coding genes in a way that correlates usefully with their probability of disease impact and other biological properties, and confirm the value of constraint in prioritizing candidate genes in studies of both rare and common diseases. Examples such as PCSK9 demonstrate the value of human pLoF variants for identifying and validating targets for therapeutic intervention across a wide range of human diseases.
As discussed in more detail in an accompanying manuscript 12 , careful attention must be paid to a variety of complicating factors when using pLoF constraint to assess candidates. More valuable information comes from directly exploring the phenotypic effect of LoF variants on carrier humans, both through 'forward genetics' approaches such as gene mapping to identify genes that cause Mendelian disease, as well as 'reverse genetics' approaches that leverage large collections of sequenced humans to find and clinically characterize individuals with disruptive mutations in specific genes. Although clinical data are currently available for only a small subset of gnomAD individuals, future efforts that integrate sequencing and deep phenotyping of large biobanks will provide valuable insight into the biological implications of partial disruption of specific genes. This is illustrated in a companion manuscript that explores the clinical correlates of heterozygous pLoF variants in the LRRK2 gene, demonstrating that life-long partial inactivation of this gene is likely to be safe in humans 13 . Such examples, and the sheer scale of pLoF discovery in this dataset, suggest the near-future feasibility and considerable value of a human 'knockout' project-a systematic attempt to discover the phenotypic consequences of functionally disruptive mutations, in either the heterozygous or homozygous state, for all human protein-coding genes. Such an approach will require cohorts of samples from millions of sequenced and deeply, consistently phenotyped individuals and, for the discovery of 'complete' knockouts, would benefit substantially from the targeted inclusion of large numbers of samples from populations that have either experienced strong demographic bottlenecks or high levels of recent parental relatedness (consanguinity) 12 . Such a resource would allow the construction of a comprehensive map that directly links gene-disrupting variation to human biology. Online content Any methods, additional references, Nature Research reporting summaries, source data, extended data, supplementary information, acknowledgements, peer review information; details of author contributions and competing interests; and statements of data and code availability are available at https://doi.org/10.1038/s41586-020-2308-7. Data availability The gnomAD 2.1.1 dataset is available for download at http://gnomad. broadinstitute.org, where we have developed a browser for the dataset and provide files with detailed frequency and annotation information for each variant. There are no restrictions on the aggregate data released. Code availability All code to perform quality control is provided at https://github.com/ broadinstitute/gnomad_qc, and the code to perform all analyses and regenerate all the figures in this manuscript is provided at https:// github.com/macarthur-lab/gnomad_lof. LOFTEE is available at https:// github.com/konradjk/loftee. All code and software to reproduce figures are available in a Docker image at konradjk/gnomad_lof_paper:0.2. Extended Data Fig. 2 | Variant calling performance for common variants. a-h, Precision-recall curves are shown for variant calls in two samples with independent gold-standard data, NA12878 49 (a-d) and a synthetic diploid mixture 50 (e-h). The random forest (blue) approach described here is compared to the current state-of-the-art GATK variant quality score recalibration (orange) for exome SNVs (a, e) and indels (b, f), and genome SNVs (c, g) and indels (d, h). 
Note that the indels presented in f and h exclude 1-base-pair (bp) indels as they are not well characterized in the synthetic diploid mixture gold standard sample. In all cases, at the thresholds chosen (dashed lines representing 10% and 20% of SNVs and indels filtered, respectively), random forest outperforms or is similar to variant quality score recalibration. Fig. 3 | Variant calling performance for rare variants. a-j, The x axes show the cumulative ranked percentile for our random forest (blue) model and, as a comparison, for the current state-of-the-art GATK variant quality score recalibration (orange). That is, the point at 10 shows the performance of the 10% best-scored data; the point at 50 shows the performance 50% best-scored data. a-d, The number of transmitted singletons (singletons in the unrelated individuals that are transmitted to an offspring) on chromosome 20 for exome SNVs (a) and indels (b), and genome SNVs (c) and indels (d). Chromosome 20 was not used for training our random forest model. We expect most of these to be real variants because we observe Mendelian transmission of an allele that was sequenced independently in a parent and child. e-h, The number of bi-allelic de novo calls per child (4,568 exomes, 212 genomes) outside of low-complexity regions. The expectation is that there is approximately 1.6 de novo SNV (e) and 0.1 de novo indels per exome (f), and 65 de novo SNVs (g) and 5 de novo indels (h) per genome 20 . i, j, The number of independently validated de novo mutations, available for a subset of 331 exome samples for which de novo mutations were validated as part of other studies 51 . In all cases, at the thresholds chosen (dashed lines representing 10% and 20% of SNVs and indels filtered, respectively), random forest outperforms or is similar to variant quality score recalibration. . For instance, variants that are not predicted to disrupt splicing based on retention of a strong splice site, or rescue of a nearby splice site. Additional filters not shown include: ANC_ALLELE (the alternative allele is the ancestral allele), NON_ACCEPTOR_DISRUPTING and DONOR_RESCUE (opposite to those already shown). b, To tune the END_TRUNC filter, we retained variants that pass the 50-bp rule (are more than 50 bp before the 3′-most splice site). The overall MAPS score for variants that fail this rule is shown in grey. For the remaining 39,072 variants, we computed the sum of the genomic evolutionary rate profiling (GERP) score of bases deleted by the variant. At 40 bins of this score, we compute the MAPS score for those variants retained at this threshold (red) compared to variants removed at this threshold (blue), and plot this as a function of the proportion of variants filtered at this threshold. We chose the 50% point as it retains variants with a MAPS score of 0.14, while removing variants with a MAPS score of 0.06. Error bars represent 95% confidence intervals. c, Density plot of aggregate pLoF frequency computed from high-confidence pLoF variants discovered using LOFTEE. Fig. 6 | See next page for caption. Fig. 6 | Computing the depletion of variation of functional categories. a, The distribution of mean methylation values across 37 tissues and across every CpG dinucleotide in the genome. We divided the genome into 3 levels (low methylation, missing or < 0.2; medium, 0.2-0.6; and high, >0.6) and computed all ensuing metrics based on these categories. b, Comparison of estimates of the mutation rate with previous estimates 52 . 
For transversions and non-CpG transitions, we observe a strong correlation (linear regression r = 0.98; P = 2.6 × 10 −65 ). For CpG transitions, the new estimates are calculated separately for the three levels of methylation and track with these levels. Colours and shapes are consistent in b-d. c, For c-e, only synonymous variants are considered. The proportion of possible variants observed for each context is correlated with the mutation rate. We compute two fit lines, one for CpG transitions, and one for other contexts to calibrate our estimates. d, Calibration of each context to compute a predicted proportion observed after fitting the two models in c, which is used to calculate an expected number of variants at high coverage. e, With an expectation computed from high coverage regions, the observed/expected ratio follows a logarithmic trend with the median coverage below 40×, which is used to correct low coverage bases in the final expectation model. f-h, For each transcript, the observed number of variants is plotted against the expected number from the model described above, for synonymous (f), missense (g), and pLoF (h) variants, and the linear regression coefficient is shown. Note that the expectation does not include selection, and so, pLoF and, to a lesser extent, missense variants exhibit lower observed values than expected. Fig. 7 | Genomic properties of constrained genes. a, b, Histogram of the observed/expected ratio of pLoF variation (a) and LOEUF (b). Most genes have fewer observed variants than expected (median observed/ expected = 0.48), and the genes with no observed pLoFs are distinguished between confidently constrained genes and noise by LOEUF. c, A 2D density plot of the number of observed versus expected pLoF variants. The boundaries of each decile are plotted as gradients (that is, the most constrained decile is below the lowest red line). d, The LOEUF of a gene is correlated with its coding sequence length (beta = −1.07 × 10 −4 ; P < 10 −100 ): thus, for all downstream statistical tests, we adjust for gene length or remove genes with fewer than 10 expected pLoFs. e, Observed/expected ratios of various functional classes across genes within each LOEUF decile. The most constrained decile has approximately 6% of the expected pLoFs, while synonymous variants are not depleted and missense variants exhibit modest depletion. f, The percentage of each LOEUF decile that was described in ExAC as constrained, or pLI > 0.9 4 . g, The percentage of each LOEUF decile that have at least one homozygous pLoF variant. h, Box plots of the aggregate pLoF frequency for each LOEUF decile. Centre line denotes the median; box limits denote upper and lower quartiles; whiskers denote 1.5× the interquartile range; points denote outliers). In e-g, error bars represent 95% confidence intervals (note that in some cases these are fully contained within the plotted point). Fig. 8 | Biological properties of constrained genes. a, The percentage of genes in each functional category from Pharos (see Supplementary Information) is broken down by the LOEUF decile. b, The mean number of tissues in which a transcript is expressed, binned by transcript-based LOEUF decile, is shown for all transcripts and canonical transcripts. c, The percentage of genes in which the most expressed transcript is also the most constrained is plotted in red, which is enriched compared to a permuted set (blue). 
d, For 927 genes with expected pLoF ≥10 in both the African/African American and European population subsets (n = 8,128), the LOEUF scores are highly correlated (linear regression r = 0.78, P < 10 −100 ), with a lower mean score observed in the African/African American population (0.49 versus 0.62; two-sided t-test P = 4.1 × 10 −14 ), which has a higher effective population size. e, The mean LOEUF score for 865 genes with expected pLoF ≥ 10 in all populations (n = 8,128). Error bars represent 95% confidence intervals. Extended Data Fig. 9 | Applications of constraint metrics to rare variant analysis of disease. a, Proportion of each LOEUF decile found in OMIM. b, Proportion of disease-associated genes discovered by whole-exome/genome sequencing (WES/WGS) compared to conventional (typically linkage) methods, plotted by LOEUF decile. The former are more constrained (LOEUF 0.674 versus 0.806, two-sided t-test P = 1.2 × 10 −16 ), which suggests that these techniques are more effective for picking up genes with a de novo mechanism of disease, compared to recessive genes identified by linkage methods. c, Similar to Fig. 5a, the rate ratio is defined by the rate of de novo variants (number per patient) in autism cases divided by the rate in controls. pLoF variants in the most constrained decile of the genome are approximately fourfold more likely to be found in cases compared to controls. d, The mean odds ratio of a logistic regression of schizophrenia 28 is plotted for each LOEUF decile. Error bars in a-d correspond to 95% confidence intervals. Extended Data Fig. 10 | Applications of constraint metrics to common variant analysis of disease. a, The τ ⁎ coefficient (see Supplementary Information) for each LOEUF decile across 276 independent traits. Unlike the enrichment measure reported in Fig. 5, τ ⁎ is adjusted for 74 baseline genomics annotations. Positive values of τ ⁎ indicate greater per-SNP heritability than would be expected based on the other annotations in the baseline model, whereas negative values indicate depleted per-SNP heritability compared to that baseline expectation. b, Enrichment coefficient for each LOEUF decile using different window sizes to define which SNPs to include upstream and downstream of each gene. c, Enrichment coefficient for each LOEUF decile across traits after controlling for brain expression and gene size. Results are consistent with those shown in Fig. 5, which indicates that brain gene expression and gene size do not fully explain the enrichment of heritability observed in constrained genes. Error bars represent 95% confidence intervals.
2020-04-17T15:20:42.578Z
2020-05-01T00:00:00.000
{ "year": 2020, "sha1": "2c832f3024bdf9f32ab1bf83382824ecc300a283", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41586-020-2308-7.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "5db695fe41d1320ed2ee67ae50b84f9be2cde2e6", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology" ] }
237448013
pes2o/s2orc
v3-fos-license
The correlation between myocardial resilience after high-intensity exercise and markers of myocardial injury in swimmers Abstract To investigate how high-intensity exercise influences an athlete's myocardial resilience and the correlation between myocardial resilience and markers of myocardial ischemic injury. Fifteen swimmers participated in high-intensity exercises. Cardiac ultrasound was performed before and after exercise on each subject. Left ventricular general strain, systolic general strain rate, and their differences before and after exercise (▴ general strain and ▴ general strain rate, respectively) were analyzed. Blood was collected on the morning of the exercise day and 6 hours after exercise to measure cardiac enzyme indicators. The correlation between myocardial resilience and markers of myocardial injury was evaluated. Most cardiac enzyme concentrations increased after exercise (P < .05). Cardiac troponin I, creatine kinase MB, and cardiac troponin T were all correlated with the degree of ▴ peak strain (differential value of the posterior wall basal segment before and after exercise) and ▴ peak strain rate (differential value before and after exercise) (P < .05). After high-intensity exercise, the concentrations of creatine kinase MB and cardiac troponin T in the blood are positively correlated with two-dimensional ultrasound deformation indices, indicating that these indices can be used as a diagnostic basis for myocardial injury, and are more sensitive than general strain. The two-dimensional strain echocardiogram is non-invasive and easily accepted by the patient. It can compensate for the shortcomings of myocardial enzyme testing, including its weak timeliness and its inability to locate the injury. Introduction The two-dimensional (2D) ultrasound strain technique, also known as the speckle tracking technique, is currently used to accurately detect functional changes in each part of the myocardium. [1] When 2D strain rate imaging is applied to ultrasound images, the software automatically divides the left ventricular wall into 18 segments and records left ventricular total strain, systolic strain rate, and the systolic peak strain (S) and peak strain rate (SR) of each segment. Indicators such as myocardial general strain (GS) and systolic general strain rate (GSR) are used mainly to measure myocardial resilience. Myocardial resilience refers to the magnitude of deformation of the myocardium during cardiac cycles, which can directly reflect its systolic and diastolic functions. [2] Most studies on 2D strain suggest that myocardial resilience is associated with the physiological and pathological conditions of the myocardium. [3] In healthy people, longitudinal myocardial systolic GS from the basal to the apical segment varies within the range of 15% to 19%, and comparative evaluation can be conducted with techniques such as magnetic resonance imaging. [3,4] When systolic GSR < -0.80/s is used as the standard, the sensitivity and specificity of detecting abnormal segments are both 85%. [5] Abnormal segments are usually associated with myocardial injury and myocardial infarction. [6] Measuring myocardial enzymes is a clinically common approach to evaluating the physiological status of the myocardium. Cardiac troponin T (cTnT), cardiac troponin I (cTnI), creatine kinase (CK), and creatine kinase MB (CK-MB) are the most commonly used indicators.
Several studies have indicated that cTn is the best cardiac injury marker in terms of both its specificity and its sensitivity for the myocardium. It has gradually replaced CK-MB as the gold standard for the diagnosis of acute myocardial infarction. [7,8] High-intensity exercise can induce cardiac stress, manifesting in structure, function, and physiological status. [9,10] Overtraining or exhaustive exercise can greatly increase the production of free radicals in cardiomyocytes, increase cardiomyocyte apoptosis, and impair the endocrine function of the heart. [9,10] Therefore, there is some concern that excessive training or long-term exhaustive exercise can damage the normal function of the heart and cause changes in left ventricular function. In this study, swimmers were selected for high-intensity exercise, and blood samples were collected and 2D echocardiogram images were taken before and after exercise. The influence of high-intensity exercise on the athletes' myocardium was analyzed, as well as the correlation between strains in different parts of the myocardium and markers of myocardial injury. Participants Fifteen healthy athletes from the Henan Province Swimming Team were enrolled in the study. After the participants were given all the information on the study protocols, informed consent was obtained from each. An electrocardiogram, echocardiogram, and blood biochemical tests were conducted. Those participants with cardiovascular system diseases were excluded, leaving 15 athletes enrolled in the study. The study was approved by the General Administration of Sport of China. Informed consent was obtained. 2.2. Methods 2.2.1. Exercise and cardiac ultrasound. Approximately 0.5 hour before exercise, a cardiac ultrasound was performed on each athlete. The athletes then performed a warm-up of 2000 meters of freestyle swimming in 30 minutes. After warming up, incremental load training of 8 × 100 meters was conducted, followed by 5 × 200 meters. In the 8 × 100 meters training, the program was as follows: in the first 200 meters, athletes rested for 1 minute after each 100 meters; after the third 100 meters, they rested for 5 minutes; after the fourth 100 meters, they rested for 1 minute; after the fifth 100 meters, they rested for 5 minutes; and in the last 300 meters, they rested for 5 minutes after each 100 meters. Athletes rested for 10 minutes after the 8 × 100 meters progressive training and then started the 5 × 200 meters special training. In the 5 × 200 meters special training, athletes rested for 5 minutes after each 200 meters. When they finished the last 200 meters and came ashore, collection of cardiac ultrasound images began immediately. We set up a simple examination bed by the pool; when athletes finished the training program, they came ashore immediately and moved to the bed. The time from the end of swimming to lying down for ultrasound testing was approximately 45 seconds. The GE VIVID I portable color ultrasound diagnostic system with an M3S probe and a frame rate >70/s was used for the ultrasound tests on each athlete. The EchoPAC PC ultrasound workstation (General Electric) was used to analyze the images. 2.2.2. Blood collection. The venous blood of each athlete was collected in the morning immediately after waking, after an overnight fast, and again 6 hours after exercise. Three milliliters of venous blood were collected from the same-side upper limb of each athlete. 2.3. Test indicators 2.3.1. Electrocardiogram.
The athletes were positioned on their left side, chest leads were connected, and the electrocardiogram was performed. 2D grey-scale dynamic ultrasound images of 3 cut surfaces comprising the apical 2-chamber, apical 4-chamber, and apical long-axis views were collected. Each image comprised at least 3 complete cardiac cycles. Analysis software for the 2D strain rate imaging technique was used to conduct quantitative analysis on each segment of the myocardium. Clear endocardial images of the 3 cut surfaces were selected from 1 side of the valve annulus to the other side of the valve annulus through the apical part. The left ventricular endocardial border was manually drawn and the instrument automatically tracked intra-myocardial speckles. After tracking, the instrument equally divided the 6 chamber walls of the left ventricle into basal, middle, and apical segments, for a total of 18 myocardial segments. Clear endocardial images of the 3 cut surfaces were analyzed in the same way and the software generated a bull's-eye image and left ventricular GS data for the 18 segments (Fig. 1). The indicators were GS, GSR, the S of the 18 segments, and the SR of the 18 segments. ▴ GS and ▴ GSRs represented the difference in GS and GSRs before and after exercise, respectively. ▴ S and ▴ SR represented the differences in peak S and peak SR in each segment before and after exercise. Blood samples. Blood samples were centrifuged and serum was collected to analyze cTnT, cTnI, CK, and CK-MB levels. The hemoglobin and hematocrit tests were completed within 30 min of blood collection. Statistical analysis All data were represented by the mean ± standard deviation. SPSS 17.0 (SPSS Inc., Chicago, IL) and Microsoft Excel 2010 (Microsoft, Redmond, WA) were used for data processing. Normality tests were conducted on all data. For data that followed a normal distribution both before and after exercise, a paired t test and bivariate correlation analysis were used, and Pearson correlation coefficients were calculated. For data that did not follow a normal distribution before or after exercise, the Wilcoxon signed rank test (a paired sample comparison test) and bivariate correlation analysis were used, and Spearman correlation coefficients were calculated. A value of P < .05 indicated a statistically significant difference. Basic characteristics The baseline characteristics of the participants were as follows: 9 males, 6 females; average age, 16.33 ± 1.95 years; average training experience, 7.07 ± 2.12 years; athletic levels, 5 master grade, 8 level 1, and 2 level 2. The results showed that both hemoglobin concentration and hematocrit percentage decreased after exercise, but the changes in these indicators were not significantly different. Therefore, it was concluded that the increase in myocardial injury markers did not result from a decrease in blood volume. Myocardial injury markers before and after exercise. A paired sample t test was conducted for CK-MB levels before and after exercise, and a paired sample comparison using the Wilcoxon signed rank test was conducted on cTnT, cTnI, and CK before and after exercise (Table 2). The results showed that the concentrations of cTnT, CK, and CK-MB after exercise were significantly higher than those before exercise (P < .01). However, the concentration of cTnI showed only an increasing trend after exercise, without a significant difference (P > .05). Thus, the results of this study suggested that exercise promoted the release of cTnT, CK, and CK-MB.
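A minimal sketch of the paired-comparison and correlation workflow described in the Statistical analysis section above, using synthetic values rather than the study data (the original analyses were run in SPSS):

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical before/after values of one marker (e.g. CK-MB) in 15 athletes.
before = rng.normal(14.0, 1.2, 15)
after = before + np.abs(rng.normal(3.0, 1.5, 15))
diff = after - before

# Choose the paired test according to normality, following the Methods above.
_, p_norm = stats.shapiro(diff)
if p_norm > 0.05:
    stat, p = stats.ttest_rel(after, before)   # paired t test
else:
    stat, p = stats.wilcoxon(after, before)    # Wilcoxon signed rank test
print(f"paired comparison: p = {p:.3f}")

# Correlation between the marker change and a hypothetical ▴ GS value per athlete.
delta_gs = rng.normal(4.0, 1.0, 15)
rho, p_rho = stats.spearmanr(diff, delta_gs)
print(f"Spearman rho = {rho:.2f}, p = {p_rho:.3f}")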
Therefore, cTnT, CK, and CK-MB might be used as sensitive markers in further study. Table 1: Comparison between hemoglobin and hematocrit before and after exercise (before exercise, n = 15; after exercise, n = 15). Table 2: Comparison of myocardial injury markers before and after exercise. Correlation among ▴ GS, ▴ GSRs, and myocardial injury markers (Table 3). The results indicated that ▴ cTnT and ▴ GSR were positively correlated (r = 0.553, P < .05). There was no linear correlation between ▴ CK and ▴ GS or ▴ CK and ▴ GSRs, while ▴ CK-MB was positively correlated with both ▴ GS (r = 0.649, P < .01) and ▴ GSRs (r = 0.589, P < .05). The ▴ cTnT concentration was positively correlated with the myocardial GSR, and the extent of ▴ CK-MB was consistent with that of myocardial ▴ GS. Correlation between the SR of each myocardial segment and myocardial injury markers. Bivariate analysis was conducted between ▴ cTnT, ▴ cTnI, and ▴ CK and the changes in myocardial peak SR of the 18 left ventricular segments, and Spearman coefficients were calculated. Bivariate analysis was conducted between ▴ CK-MB and basInf ▴ SR, midInf ▴ SR, apInf ▴ SR, and basAnt ▴ SR, and Spearman coefficients were calculated. Analysis was also conducted between ▴ CK-MB and basPost ▴ SR and midPost ▴ SR. Table 3: Analysis of the correlation among ▴ GS, ▴ GSRs, and myocardial injury markers. Table 4: Correlation between peak ▴ S and changes in myocardial injury markers. Discussion Our current study showed that the concentrations of CK-MB and cTnT in the blood were positively correlated with 2D ultrasound deformation indices after high-intensity exercise, while the change in myocardial SR of the apical segment is positively correlated with cTnT, CK, and CK-MB. Myocardial enzyme markers are commonly used in clinical settings to represent physiological status. Previous studies have suggested that high-intensity exercise can cause micro damage to the myocardium. [11] This study compared 2 indicators that reflect body plasma volume, hemoglobin concentration and hematocrit percentage. The results of the study found that although these indicators decreased as a result of exercise, the differences were not statistically significant. The concentrations of cTnT, CK, and CK-MB were also compared before and after exercise, and all post-exercise values were greater than the pre-exercise values. Given that there was no statistical difference in the indicators that reflected plasma volume, the results of this study infer that exercise promotes the release of cTnT, CK, and CK-MB from the cells but has little influence on the cTnI concentration. The concentrations of myocardial injury markers are significantly different before and after exercise, which indicates myocardial injury. GS after exercise is also distinctly different from that before exercise. D'Andrea et al [8] evaluated the local and overall myocardial functions in normal athletes using speckle tracking techniques. By applying GS and GSR imaging techniques, it was found that professional soccer players had GSRs of the interventricular septum and left ventricular lateral walls that were higher than those in non-athletes. [12] They suggested that GSR could be used as an effective indicator by which to evaluate left ventricular systolic function and physiological status. Our study showed that the release of cTnT into the blood was correlated with the overall deformation rate of the left ventricle in the systolic phase and with apLat ▴ S.
The release of cTnI into the blood promoted by exercise was correlated with basPost ▴ S, basAnt ▴ S, midAnt ▴ S, apPost ▴ S, and apLat ▴ S. The release of CK into the blood promoted by exercise was correlated with basPost ▴ S and apAntSept ▴ S. The release of CK-MB into the blood promoted by exercise was correlated with the deformation degree of the overall left ventricle, midAnt ▴ S, and apPost ▴ S, as well as with the deformation rate of the posterior wall, anterior wall, and anterior septum. At the same time, by summarizing the myocardial regions that were associated with myocardial injury, the study found that myocardial injury occurs most readily in the posterior wall and anterior septum. When there was clinical myocardial injury, the changes in the myocardial enzymes were time-dependent and could be negative, which might be associated with the area of injury. Myocardial enzymes were sensitive to injury in some areas but not in others. However, the release of myocardial enzymes is diffuse, so it cannot be used to indicate the area of injury. 2D ultrasound not only can qualitatively detect myocardial injury, but can also locate the injury relatively accurately, which mitigates this shortcoming of myocardial enzymes in identifying the area of injury. A recent meta-analysis showed that exercise intensity and age were the most powerful determinants of cTn release. Diastolic function was influenced by exercise HR and cTn release, which implied that exercise bouts at high intensities were enough to elicit cTn release and reduce LV diastolic function. [13] These results were consistent with our study. Table 5: Analysis of the correlation between ▴ SR and changes in myocardial injury markers. Another study, investigating the cardiac structure and function in long-term elite master endurance athletes with a special focus on the right ventricle by contrast-enhanced cardiovascular magnetic resonance imaging, showed that chronic right ventricular functional damage in elite endurance master athletes with lifelong high training volumes seems to be unlikely. [14] The contrary results might be caused by the use of a cross-sectional study design, which might have led to a recruitment bias. During the process of applying the 2D strain ultrasound technique to evaluate the influence of exercise on the heart, we found that 2D strain ultrasound also has certain disadvantages. First, the respiration of the athletes after exercise was relatively rapid and more gas entered the lungs, in which case speckle tracking and the accuracy of the evaluation of myocardial function would be affected. Second, although 2D strain has no angular dependence, there is some controversy in that motion perpendicular to the ultrasound beam is more susceptible to a greater incidence of errors. Third, the heart is a 3D structure, while 2D strain is only an estimation on a 2D level, indicating that it can reflect myocardial strain only in the imaging plane and cannot fully and truly reflect the entire strain. Finally, the sample size for this study was relatively small. In conclusion, after high-intensity exercise, the concentrations of CK-MB and cTnT in the blood are positively correlated with 2D ultrasound deformation indices, indicating that these indices can be used as a diagnostic basis for myocardial injury, and are more sensitive than GS. 2D ultrasound deformation indices are correlated with myocardial injury to some degree.
The change in myocardial SR of the apical segment is positively correlated with cTnT, CK, and CK-MB, proving that the apical segment has greater sensitivity to motor stimulation and thus 2D strain ultrasound technique can be used at early stages of cardiac injury more easily. The 2D strain echocardiogram is non-invasive and easily accepted by the patient. It can make up for the shortage of myocardial enzymes in the injury areas, including weak timeliness and the inability to locate injury. Author contributions CG is responsible for the guarantor of integrity of the entire study, study concepts & design, definition of intellectual content, clinical studies, experimental studies, data acquisition & analysis, statistical analysis, manuscript preparation; CL is responsible for the guarantor of integrity of the entire study, study concepts & design, definition of intellectual content, clinical studies, experimental studies, data acquisition & data analysis, statistical analysis, manuscript preparation; JHZ is responsible for the literature research, manuscript editing; YM is responsible for the clinical studies; XXM is responsible for the data acquisition, data analysis; MHX is responsible for the guarantor of integrity of the entire study, definition of intellectual content, clinical studies, experimental studies, data acquisition, statistical analysis, manuscript review. All authors read and approved the final manuscript. Conceptualization: Can Gao, Minhao Xie.
Microbiological variation amongst fresh and minimally processed vegetables from retail establishments - a public health study in Pakistan

Fresh and minimally processed ready-to-eat vegetables are very attractive eatables amongst consumers as convenient, healthy and readily available foods, especially in the South Asian states. They provide numerous nutrients, phytochemicals, and vitamins but also harbor large quantities of potentially pathogenic bacteria. The aim of this study was to determine microbiological variation amongst fresh vegetables that were commercially available to the public at numerous retail establishments in Pakistan in order to present an overview of the quality of fresh produce. A total of 133 samples, collected from local distributors and retailers, were tested for aerobic mesophilic and psychrotrophic, coliform, and yeast and mould counts. Standard plating techniques were used to analyze all samples. The mesophilic count ranged from 3.1 to 10.3 log CFU/g, with the lowest and highest counts observed in onions and fresh cut vegetables, respectively. The psychrotrophic count was as high as the mesophilic count. Maximum coliform counts were found in fresh cut vegetables, with 100% of samples falling over 6 log CFU/g. These results were consistent with the yeast and mould counts as well. In our study, Escherichia coli was determined as an indicator organism for the 133 samples of fresh and minimally processed vegetables. Fresh cut vegetables showed the highest incidence of presumptive E. coli (69.9%). The results showed a poor quality of fresh vegetables in Pakistan and point to the need for implementation of good hygiene practices and food safety awareness amongst local distributors and food handlers at retail establishments.

Introduction

Fresh vegetables are an essential part of a balanced diet, and there is much published evidence supporting nutritional and health gains linked to the consumption of raw or minimally processed vegetables. Aside from significant health and nutritional benefits, there has been a major change in lifestyles and consumption patterns. With a wider range of food variety available, people now spend less time cooking at home. Such trends have led to popularity in consumption of raw or minimally processed vegetables that have become more convenient as ready-to-eat foods (Abadias et al., 2008). Factors influencing this upsurge in raw vegetable availability include geographical and climatic conditions, a boost in international trade, enhanced preservation and storage, and accelerated transportation, together with year-round production that ensures continuous supply to consumers even during the off season (Johnston et al., 2005). In Pakistan, most of the fresh produce (vegetables) originates locally. In comparison to other food commodities like meat, milk and other processed foods, fresh produce is readily available and low-cost, which makes it an economical buy for most of the public, hence increasing its consumption frequency.
However, these commodities can act as a vehicle for the transmission of bacteria, parasites, yeasts and moulds, and many types of viral pathogens. Contamination can occur during cultivation, irrigation, harvest, transport, storage and marketing, and at the hands of consumers (Tournas, 2005). In developing countries, pre- and postharvest processes and provisions contribute heavily to contamination of fresh produce. For example, in Pakistan, a quarter of fresh produce is irrigated with wastewater (Pachepsky et al., 2011). The occurrence of foodborne outbreaks caused by contaminated fresh vegetables has increased during recent years (Mukherjee et al., 2006). The pathogens most commonly linked to fresh produce include bacteria (E. coli, Salmonella), parasites (Cyclospora, Cryptosporidium) and viruses (Hepatitis A, Norwalk-like) (Tauxe et al., 1997), with E. coli O157:H7 and Salmonella being the most abundantly found pathogens in outbreaks related to fresh produce. The Centers for Disease Control and Prevention in the United States has reported an amplification in the number of foodborne occurrences correlating to fresh produce around the world between 1995 and 2005 (Seow et al., 2012). In the US, the share of all known foodborne outbreaks related to fresh produce has risen from <1 percent to a striking 6 percent (Sivapalasingam et al., 2004). OzFoodNet, an Australian disease surveillance network, has also reported an increase in illnesses caused by fresh produce (Angulo et al., 2008). Related studies have been carried out in several countries such as the US (De Roever, 1998; Olsen et al., 2000; Mukherjee et al., 2004; Mukherjee et al., 2006; Tournas et al., 2006; Oliveira et al., 2010), the EU (Lund, 1993; Nygard et al., 2004; Johannessen et al., 2002; Emberland et al., 2007; Pezzoli et al., 2007), Australia (Heard, 1999) and Japan (Gutierrez, 1997).

To our knowledge, there is a colossal amount of data available on fresh produce worldwide; however, there is limited published research in this area in Pakistan. The objective of the study was to determine the microbiological quality of fresh vegetables commercially available to consumers in the twin cities (Islamabad, Rawalpindi) of Pakistan, with the aim of guiding future improvements in measures relating to food safety.

Sample selection

Random samples (110) of nine different vegetables were collected from local distributors in Rawalpindi and Islamabad during all four seasons of the year. The vegetables included tomato (16), carrot (12), green pepper (10), cucumber (13), radish (12), coriander (10), onion (13), lettuce (10) and cabbage (14), together with fresh cut vegetable samples (23). Fresh cut vegetables were selected from local vendors, restaurants and open salad markets in the cities of Rawalpindi and Islamabad, located in the central region of Pakistan. Microbial analysis was performed in the Microbiology lab of the Food Technology Department, PMAS-Arid Agriculture University Rawalpindi.

The markets chosen were those with high standing amongst consumers where fresh produce was easily available, making this study more representative. Sample units were selected randomly from various locations. Due to the limited published data for Pakistan that might link specific vegetables to foodborne diseases, the samples analyzed were selected on the basis of how common they are amongst consumers.
The samples purchased from various distributors and markets were collected in re-sealable plastic bags, which were then promptly transported to the laboratory. Samples that showed any visible ailment or damaged surface, or that were otherwise visibly compromised, were discarded. Details of fresh-cut vegetables, including manufacturer, retailer, packaging and best-before date (if available), were documented systematically. All samples were examined within 24 h of collection. Before the samples were taken out of the re-sealable plastic bags, the surface of the bags was sterilized with ethanol to prevent any cross contamination.

Sample preparation and microbial analysis

A 25 g sample was added to 225 ml of 0.1% sterile peptone water (PW) and homogenized for 2 min in a sterilized blender. Serial 1:10 dilutions were made from the resulting homogenate. For the aerobic mesophilic count (AM), sample preparation was carried out following the methodology stated in the FDA Bacteriological Analytical Manual. 100 µl of the prepared sample was pour-plated on Plate Count Agar (PCA) (Oxoid). Plates were then incubated at 37°C for 24 h. For the psychrotrophic count, the plates were prepared with the same methodology as mentioned for AM and incubated at 6°C for 6 days. Following this, the number of colonies formed was counted and the results noted accordingly. The results were recorded in terms of CFU/g (colony forming units per gram) (FDA BAM, 1998).

For the coliform count, sample preparation was carried out as mentioned above. Suitable 1:10 dilutions of the homogenate were prepared using PW. For every dilution selected, 100 µl of the prepared sample was pour-plated on chromogenic agar (Coliform/E. coli; Oxoid). All plates were then incubated at 37°C for 24 h (FDA BAM, 1998), after which the red and pink colonies were counted. For the yeast and mould count, sample preparation was carried out as mentioned above. Suitable 1:10 dilutions of the homogenate were prepared using PW. For every dilution selected, 0.1 ml of the prepared sample was spread-plated on Potato Dextrose Agar (PDA) (Oxoid). The plates were then incubated for 3-5 days (FDA BAM, 1998).

Pathogen analysis

The detection of E. coli was quantified using classical methodologies and the results were expressed as the most probable number (MPN), following the methodology of the FDA Bacteriological Analytical Manual (FDA BAM, 1998).

Statistical analysis

The means obtained from the microbiological analysis of the vegetables were analyzed by one-way ANOVA (analysis of variance). To determine any significant differences amongst the means, they were subjected to the Tukey test using Statistix 8.1 software (Analytical Software, FL, USA).

Results and discussion

Fresh vegetable quality depends on the use of adequate irrigation water together with good handling practices and appropriate storage. Processing steps like irrigation, harvesting, storage, handling, slicing, cutting, shredding, and grading are all possible sources of contamination. Some of these microbes could spread during transport on whole and fresh-cut vegetables, or even when these food commodities are not stored at recommended temperatures (1-5°C). Tables 1-4 show high populations of aerobic mesophilic, aerobic psychrotrophic, coliform and yeast and mould counts. Unanimously, the results suggest that the fresh and minimally processed vegetables had poor hygienic quality, attributable to multiple processes including harvesting, storage and transportation, and to a lack of good handling practices.
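Since all of the counts in Tables 1-4 are expressed in log CFU/g and compared by one-way ANOVA with Tukey's test, the following minimal Python sketch shows the standard conversion from a plate count at a known dilution to log CFU/g, followed by that comparison. The counts and commodity values are invented placeholders, and the study itself performed these tests in Statistix 8.1 rather than Python.

import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def log_cfu_per_g(colonies, dilution, volume_plated_ml):
    """Convert a plate count to log10 CFU/g.

    colonies: number of colonies counted on the plate
    dilution: overall dilution of the plated suspension relative to the sample
              (the initial 25 g in 225 ml already contributes a 1:10 step)
    volume_plated_ml: volume plated, e.g. 0.1 ml for a 100 ul pour plate
    """
    cfu_per_g = colonies / (dilution * volume_plated_ml)
    return np.log10(cfu_per_g)

# Example: 85 colonies on the 1e-5 dilution plate, 0.1 ml plated -> ~7.9 log CFU/g
print(round(log_cfu_per_g(85, 1e-5, 0.1), 1))

# Hypothetical replicate log CFU/g values for three commodities
onion, lettuce, cabbage = [3.5, 3.9, 4.1, 3.6], [6.9, 7.2, 7.4, 7.1], [7.0, 7.5, 7.3, 7.6]
print(f_oneway(onion, lettuce, cabbage))  # one-way ANOVA across commodities

values = onion + lettuce + cabbage
groups = ["onion"] * 4 + ["lettuce"] * 4 + ["cabbage"] * 4
print(pairwise_tukeyhsd(values, groups, alpha=0.05))  # Tukey pairwise comparison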
The aerobic mesophilic count for fresh cut vegetables averaged 9.4 log CFU/g, with a range of 7.1 to 10.3 log CFU/g (Table 2), indicating that all samples were unacceptable for consumption. The highest counts were found in the samples collected from street vendors and local restaurants. This was comparable to a survey conducted in India on 120 samples of fresh-cut vegetable and fruit salads collected from street vendors, which reported mean aerobic counts ranging between 6 and 8 log CFU/g (Viswanathan and Kaur, 2001). Those results were similar to the ones obtained in this study and provide an overview of the unsanitary conditions predominant amongst street vendors.

Psychrotrophic microbes can multiply even during retail, mostly when food products are not stored at proper temperatures (1-5°C) (Abadias et al., 2008). Psychrotrophic counts in this study were very much comparable to the mesophilic counts (Table 2), and a similar trend was observed in a study by Abadias et al. (2008).

Overall, counts for whole vegetables varied significantly, with cabbage, tomato and lettuce having the highest aerobic means of 7.4 and 7.2 log CFU/g, with no significant difference amongst these means (P<0.05). Vegetables like onion, cucumber, and radish carried noticeably lower aerobic mesophilic, psychrotrophic and coliform counts (<6 log CFU/g). Other commodities showed consistently higher counts. The mean coliform counts for whole vegetables ranged between 2.7 and 6.2 log CFU/g (Table 3). The highest means were found in cabbage and lettuce, with 6.1 and 6.2 log CFU/g, whereas most of the other vegetables fell under 5 log CFU/g.

Lettuce and cabbage showed considerably high counts overall. A possible reason for this is that leafy vegetables like these have many folds and a higher surface area, which makes them more susceptible to trapping dirt, irrigation water or soil in the folds (Aycicek et al., 2006). This leads to a higher potential for microbial colonization on these types of vegetables and hence to higher risks of microbial pathogenesis. Furthermore, this phenomenon is enhanced by the fact that in Pakistan it is common for many small-scale farmers to use untreated sewage water for irrigation, and as this water is trapped inside the folded leafy vegetables, it contributes to microbial growth.

Fresh cut vegetables had a mean coliform count of 8.0 log CFU/g, which was lower than their aerobic means. The mean yeast and mould count (YMC) was lower than the other aerobic counts (Table 4) for most of the selected commodities. The YMC of onion was 3.9 log CFU/g, whereas lettuce, cabbage, and tomato demonstrated the highest counts amongst whole vegetables, with means of 6.4 and 6.8 log CFU/g.

In our study, whole vegetables like cucumber, onion, radish and green pepper were found to have lower counts, with onions having the lowest counts amongst all whole vegetables. These results align with a study conducted in the US, which concluded that green peppers and cucumbers had lower counts because they have waxy, smooth and hard skins that do not allow microbes to reach inside and proliferate (Tournas, 2005; Yin and Tsao, 1999). Lower counts in onions are also attributed to their chemical composition: a study reported that Chinese chive, scallions, and other Allium plants possess natural antifungal activity.
In this study, E. coli was detected in 30.1% of samples, and the bacterial population varied between 1 and 6 log MPN/g (data not shown). The most contaminated samples were fresh cut vegetables, and none was detected in onion samples (Table 5). Contamination varied amongst the other whole vegetables, with carrot and lettuce showing the highest contamination with the bacterium. The contamination rate of E. coli found in our study was higher than that reported by Sagoo et al. (2003) for fresh vegetable salads in the United Kingdom (1.3%) and by Abadias et al. (2008) for fresh cut vegetables in Spain (11.4%). A study reported high contamination with E. coli in samples of ready-to-eat vegetables in Brazil (53.1%) (De Oliveira et al., 2011). Another similar study conducted in Brazil found a high percentage of E. coli amongst minimally processed vegetables (30%) (Prado et al., 2008).

Isolation and detection of E. coli are carried out to determine the sanitary condition of foods. Specific serotypes of this species may be involved in foodborne diseases; however, in this study, no subtyping of E. coli was carried out. There have been numerous outbreaks involving enterohemorrhagic (EHEC) and enteropathogenic (EPEC) E. coli strains (CDC, 2010).

There is very limited published data on the quality of fresh produce in Pakistan; therefore, the results found in this study cannot be compared with other data from a similar geographical location. Studies of raw foods, especially fruits and vegetables, should be increased in Pakistan. This study revealed that the majority of the whole and fresh-cut vegetables analyzed were of poor quality, especially the fresh cut vegetables. This indicates a dire need for the implementation of good hygiene practices. It is also suggested that the concerned authorities, such as the Punjab Food Authority, should enforce strict laws and checks on the implementation of good hygiene practices, together with training programs to create awareness amongst retailers and vendors.

Table 1. Quantitative analysis of the aerobic mesophilic count (AM) of fresh vegetables collected from different retail establishments. Means represented by different superscript letters (A, B) are significantly different (P<0.05); n: number of samples; a: range in CFU g-1; b: counts in log CFU g-1.
Table 2. Quantitative analysis of the aerobic psychrotrophic count (AP) of fresh vegetables collected from different retail establishments. Footnotes as in Table 1.
Table 3. Quantitative analysis of the coliform count (CC) of fresh vegetables collected from different retail establishments. Footnotes as in Table 1.
Table 4. Quantitative analysis of the yeast and mould count (YMC) of fresh vegetables collected from different retail establishments. Footnotes as in Table 1.
Table 5. Occurrence of pathogens in randomly collected vegetables. n: number of samples; a: not detected.
Equality of critical points for polymer depinning transitions with loop exponent one

We consider a polymer with configuration modelled by the trajectory of a Markov chain, interacting with a potential of form $u+V_n$ when it visits a particular state 0 at time $n$, with $\{V_n\}$ representing i.i.d. quenched disorder. There is a critical value of $u$ above which the polymer is pinned by the potential. A particular case not covered in a number of previous studies is that of loop exponent one, in which the probability of an excursion of length $n$ takes the form $\phi(n)/n$ for some slowly varying $\phi$; this includes simple random walk in two dimensions. We show that in this case, at all temperatures, the critical values of $u$ in the quenched and annealed models are equal, in contrast to all other loop exponents, for which these critical values are known to differ, at least at low temperatures.

1. Introduction. A polymer pinning model is described by a Markov chain $(X_n)_{n\ge 0}$ on a state space containing a special point 0 where the polymer interacts with a potential. The space-time trajectory of the Markov chain represents the physical configuration of the polymer, with the $n$th monomer of the polymer chain located at $(n, X_n)$ (or just at $X_n$, for an undirected model). When the chain visits 0 at some time $n$, it encounters a potential of form $u + V_n$. The i.i.d. random variables $(V_n)_{n\ge 1}$ typically model variation in monomer species. We study the phase transition in which the polymer depins from the potential when $u$ goes below a critical value. We denote the distribution of the Markov chain (started from 0) in the absence of the potential by $P^X$ and we assume that it is recurrent. This recurrence assumption is merely a convenience and does not change the essential mathematics; see [1, 11]. Of greatest interest is the case where the excursion length distribution satisfies

$$P^X(E = n) = \frac{\varphi(n)}{n^c}. \qquad (1.1)$$

Here, the loop exponent is $c \ge 1$, $E$ denotes the length of an excursion from 0, that is, the time elapsed between successive returns to 0, and $\varphi$ is a slowly varying function, that is, a function satisfying $\varphi(\kappa n)/\varphi(n) \to 1$ as $n$ tends to infinity for all $\kappa > 0$. A large part of the existing rigorous literature on such models omits the case $c = 1$ because it is often technically different and not covered by the methods that apply to $c > 1$; see, for example, [1, 13, 15, 17]. That omission is partially remedied in this paper and we will see that the behavior for $c = 1$ can be quite different from the behavior for $c > 1$. The case $c = 1$ includes symmetric simple random walk in two dimensions, for which $\varphi(n) \sim \pi/(\log n)^2$ [14]. The essential feature of $c = 1$ is that $P^X(E > n)$ is a slowly varying function of $n$, so that, for example, the longest of the first $m$ excursions typically has length greater than any power of $m$. This effectively enables the polymer to (at low cost) bypass stretches of disorder in which the values $V_n$ are insufficiently favorable and make returns to 0 in more favorable stretches.

The quenched version of the pinning model is described by the Gibbs measure

$$d\mu_N(x) = \frac{1}{Z_N}\exp\Big(\beta\sum_{n=1}^N (u+V_n)\,\delta_{\{x_n=0\}}\Big)\,dP^X(x). \qquad (1.2)$$

The normalization $Z_N = Z_N(\beta, u, \mathbf{V}) = E^X[\exp(\beta\sum_{n=1}^N (u+V_n)\,\delta_{\{x_n=0\}})]$ is the partition function. The disorder $\mathbf{V}$ is a sequence of i.i.d. random variables with mean zero and variance one. We denote the distribution of this sequence by $P^V$. We assume that $V_1$ has exponential moments of all orders and denote by $M_V(\beta)$ the moment generating function of $P^V$. Let

$$L_N = \big|\{1 \le n \le N : x_n = 0\}\big|$$

denote the local time at 0 and define the quenched free energy

$$f^q(\beta, u) = \lim_{N\to\infty} \frac{1}{\beta N}\log Z_N(\beta, u, \mathbf{V}),$$

where this limit is taken $P^V$-a.s.
The existence and nonrandomness of this limit is standard, as is the fact that $f^q(\beta, u) \ge 0$; see [8]. The parameter $u \in \mathbb{R}$ can be thought of as the mean value of the potential, while the parameter $\beta > 0$ is the inverse temperature. It is known that the phase space in $(\beta, u)$ is divided by a critical line $u = u_c^q(\beta)$ into two regions: localized and delocalized. In the delocalized region $u < u_c^q(\beta)$, we have $f^q(\beta, u) = 0$, while in the localized region $u > u_c^q(\beta)$, we have $f^q(\beta, u) > 0$. It is proved in [12] that $f^q(\beta, \cdot)$ is infinitely differentiable for all $u > u_c^q(\beta)$.

An alternate, more phenomenological, characterization of the two regions is as follows. From convexity, we have, for fixed $\beta$, that

$$\frac{\partial f^q}{\partial u}(\beta, u) = \lim_{N\to\infty} \frac{1}{N}\, E^{\mu_N}(L_N)$$

wherever the derivative exists. This limiting value is called the contact fraction, denoted $C^q(\beta, u)$, and it is positive in the localized region and zero in the delocalized region. When the contact fraction is positive, we say the polymer is pinned.

The effect of the quenched disorder on the phase transition is quantified by comparing the quenched model to the corresponding annealed model, which is obtained by averaging the quenched Gibbs weight over the disorder to give the annealed Gibbs weight $e^{\beta\Delta L_N}$, where $\Delta := u + \beta^{-1}\log M_V(\beta)$. The corresponding annealed partition function is $Z_N^a = Z_N^a(\beta, u) := E^X(e^{\beta\Delta L_N})$ and the Gibbs measure is

$$d\mu_N^a(x) = \frac{1}{Z_N^a}\, e^{\beta\Delta L_N}\, dP^X(x).$$

The corresponding annealed free energy and contact fraction are denoted $f^a(\beta, u)$ and $C^a(\beta, u)$, respectively. The annealed critical point is readily computed: since the underlying chain is recurrent, the annealed (homogeneous) model is pinned precisely when $\Delta > 0$, so $u_c^a(\beta) = -\beta^{-1}\log M_V(\beta)$. The effect, or lack of effect, of the disorder on the depinning transition may be seen in whether these two critical points actually differ and whether the specific heat exponent (describing the behavior of the free energy as $u$ decreases to the critical point) is different in the quenched case.

Although most mathematically rigorous work is relatively recent, there is an extensive physics literature on polymer pinning models; see the recent book [8] and the surveys [9, 16] and references therein. In [1] (see also [15] for a slightly weaker statement with simpler proof), it was proven that for $1 < c < 3/2$, and for $c = 3/2$ with $\sum_{n=1}^\infty 1/n\varphi(n)^2 < \infty$, for sufficiently small $\beta$, one has $u_c^q(\beta) = u_c^a(\beta)$ and the specific heat exponents are the same. Both works considered Gaussian disorder, although the method in [1] can be extended to accommodate more general disorder having a finite exponential moment. By contrast, it follows straightforwardly from a sufficient condition in [17] that the quenched and annealed critical points differ at sufficiently large $\beta$; the method there is based on fractional moment estimates. These results, together with [1], suggest that for $1 < c < 3/2$, there should be a transition from weak to strong disorder, that is, there should exist a value $\beta_0 > 0$ below which the annealed and quenched critical curves coincide [i.e., $u_c^q(\beta) = u_c^a(\beta)$ for $\beta < \beta_0$, while for $\beta > \beta_0$, one has $u_c^a(\beta) < u_c^q(\beta)$], but this has not been proven. For $c > 3/2$, it follows from [11] that the quenched and annealed specific heat exponents are different, and it was proven in [4] that the critical points are strictly different for all $\beta > 0$, that is, $\beta_0 = 0$. In [3], the distinctness of critical points at high temperature was extended to include $c = 3/2$ with $\varphi(n) \to 0$ as $n \to \infty$, and the asymptotic order of the gap $u_c^q(\beta) - u_c^a(\beta)$ was given. Recently, in [10], the critical points were shown to be distinct for all $\beta > 0$ for the case of $c = 3/2$ and $\varphi(n)$ asymptotically a positive constant, a case about which physicists had long disagreed [6, 7]. Here, we show that even with true strong disorder, the critical points remain the same in the case $c = 1$.
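Since the annealed critical point quoted above is used throughout, we record the standard one-line computation behind it (our gloss, using only the independence of the $V_n$ and the definition of $M_V$):

$$E^V\Big[\exp\Big(\beta \sum_{n=1}^N (u + V_n)\,\delta_{\{x_n = 0\}}\Big)\Big] = \prod_{n \le N:\; x_n = 0} e^{\beta u} M_V(\beta) = e^{\beta \Delta L_N}, \qquad \Delta = u + \frac{1}{\beta}\log M_V(\beta).$$

Thus the annealed model is a homogeneous pinning model with pinning strength $\beta\Delta$; since $P^X$ is recurrent, it is localized exactly when $\Delta > 0$, which yields $u_c^a(\beta) = -\beta^{-1}\log M_V(\beta)$.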
Theorem 1.1. Consider the quenched model (1.2) and suppose that $E(e^{tV_1}) < \infty$ for all $t \in \mathbb{R}$ and that (1.1) holds with $c = 1$. Then for all $\beta > 0$ and all $u > u_c^a(\beta)$, the quenched free energy satisfies $f^q(\beta, u) > 0$, and thus $u_c^q(\beta) = u_c^a(\beta)$ for all $\beta > 0$.

In [1] and [15], for the case $1 < c < 3/2$, a statement stronger than the equality of the critical points was proven: given $\varepsilon > 0$, if $\beta$ and $\beta\Delta$ are sufficiently small, then one has $f^a(\beta, u) \ge f^q(\beta, u) > (1 - \varepsilon) f^a(\beta, u)$. One may ask whether a similar statement (possibly strengthened to be valid for all $\beta$'s) holds for the $c = 1$ case. We do not pursue that question here, although we expect such a statement to be true for small $\beta$. There are technical obstacles to carrying over the proof for $1 < c < 3/2$ to the case $c = 1$, as noted in Section 4 of [1].

2. Notation and idea of the proof. Denote the local time at zero over a time interval $I$ by

$$L_I = \big|\{n \in I : x_n = 0\}\big|.$$

The overlap between two paths $X, X'$ in an interval $I$ is defined as

$$L_I(X, X') = \big|\{n \in I : X_n = X'_n = 0\}\big|.$$

We denote by $P^{X,X'}$ the measure corresponding to two independent copies $X, X'$ of the Markov chain. The "energy gained over an interval $I$" is defined as

$$\beta \sum_{n \in I} (u + V_n)\,\delta_{\{x_n = 0\}}.$$

The annealed correlation length is defined to be $M = M(\beta, u) := 1/(\beta f^a(\beta, u))$. For example, if $\varphi(n) \sim K(\log n)^{-\alpha}$ for some $\alpha > 1$, then the behavior as $\beta\Delta \to 0$ can be made explicit (2.4), so $f^a(\beta, \cdot)$ is $C^\infty$, even at $u = u_c^a(\beta)$. The details are similar to those in the case $c > 1$ considered in [1], but we do not include them here as they are not required for our analysis.

Define the intervals

$$I_i = ((i-1)K_1, iK_1], \qquad i \ge 1,$$

each of length $K_1$. For an interval $I$, let $\tau_I = \inf\{n \in I : x_n = 0\}$ and $\sigma_I = \sup\{n \in I : x_n = 0\}$. We set $\tau_I = \sigma_I = \infty$ if the path does not visit 0 during the interval $I$. We denote by $\Xi_{NK_1}$ the set of all paths of length $NK_1$ which have the following property: if $\tau_{I_i} < \infty$ for some $i \le N$, then $\sigma_{I_i} - \tau_{I_i} \le K_2$.

Idea of the proof. We will look at a scale $NK_1$ and restrict the partition function $Z_{NK_1}(u, \beta, \mathbf{V})$ to paths that belong to the set $\Xi_{NK_1}$. Further, we will restrict our attention to paths within $\Xi_{NK_1}$ which bypass bad blocks of length $K_1$. Roughly speaking, a bad block is defined to be a block for which the quenched partition function of a path starting at a uniform random point in the block, and making its final visit to 0 in the block within time $K_2$ after this starting point, is less than half of the corresponding annealed partition function. In Lemma 3.2, we control the probability of having a bad block. It then remains to make an energy-entropy balancing of the paths that belong to $\Xi_{NK_1}$ and bypass bad blocks, and to show that for $\beta > 0$ and $\Delta = u + \beta^{-1}\log M_V(\beta) > 0$, this balance is uniformly (in $N$) bounded away from zero. For this, we will use the fact that in a good block the free energy gained is of the order $K_2/M$ (this is essentially Lemma 3.1), and the fact that, because $P^X(E > k)$ is a slowly varying function of $k$, the cost of bypassing bad blocks is small.

3. Proof of the theorem. The first ingredient is the annealed lower bound mentioned above.

Lemma 3.1. For all $N$, $\log E^X[e^{\beta\Delta L_N}] \ge N\beta f^a(\beta, u) - \beta\Delta$.

Proof. It is observed in [1] that $a_N := \beta\Delta + \log E^X[e^{\beta\Delta L_N}]$ is subadditive in $N$. Since $a_N/N \to \beta f^a(\beta, u)$, subadditivity gives $a_N \ge N\beta f^a(\beta, u)$ for all $N$, and the result is immediate.

A block $I_i$ is called good if the restricted quenched partition function described above is at least half of its annealed counterpart, and called bad otherwise. Let $p^V_{\mathrm{good}} := P^V(I_i \text{ is good})$ and $p^V_{\mathrm{bad}} := P^V(I_i \text{ is bad})$. Lemma 3.2 bounds $p^V_{\mathrm{bad}}$.

Proof. By Chebyshev's inequality, $p^V_{\mathrm{bad}}$ is controlled by the ratio of the second moment of the block partition function to the square of its first moment, computed over two independent copies $X, X'$ under $P^{X,X'}$. Here, we used the fact that whenever the two independent paths $x, x'$ visit 0 at the same time $n$, the disorder average contributes a factor $e^{2\beta u} M_V(2\beta)$ rather than $(e^{\beta u} M_V(\beta))^2$, so the second moment is expressed through the overlap $L_I(X, X')$. An easy calculation shows that the resulting bound is small for $K_1, K_2$ satisfying (2.5) and the second inequality in (2.6). (In the third line of that calculation, one uses the fact that the expectations in the second line do not depend on the starting points $b$ and $b'$.)

We now return to the proof of Theorem 1.1.
Let $J_N = \{i_1 < i_2 < \cdots < i_{|J_N|}\}$ denote the set of indices of the good blocks among $I_1, \ldots, I_N$. Under $P^V$, the sequence $(i_j - i_{j-1})_{j \ge 1}$ is an i.i.d. sequence of geometric random variables with parameter $p^V_{\mathrm{good}}$. We denote by $\Xi^{J_N}_{NK_1} = \Xi^{J_N}_{NK_1}(\mathbf{V})$ the set of paths $x \in \Xi_{NK_1}$ which satisfy $x_{NK_1} = 0$ and make no returns to 0 in bad blocks after the first block. In the following computation, $a_j$ and $b_j$ are the starting and ending points, respectively, of the excursion from $I_{i_j}$ to $I_{i_{j+1}}$. Let $p_n = P^X(E = n)$. As a convention, we set $b_0 := 0$ and $b_{|J_N|} := NK_1$. Let $Z_{NK_1}(\Xi^{J_N}_{NK_1})$ denote the partition function restricted to the set of paths $\Xi^{J_N}_{NK_1}$. We then have a lower bound for $Z_{NK_1}(\Xi^{J_N}_{NK_1})$ as a product of good-block contributions joined by the excursion probabilities $p_{a_{j+1} - b_j}$. With a mild abuse of notation, let us interpret $I_{i_{|J_N|}+1}$ as meaning the one-point interval $\{NK_1\}$; therefore, for some $C$, the above is bounded below by a product in which the excursion from $I_{i_j}$ to $I_{i_{j+1}}$ contributes at least

$$\frac{C\,\varphi\big((i_{j+1} - i_j + 1)K_1\big)}{4\,(i_{j+1} - i_j + 1)}.$$

In the second inequality, we used the fact that the interval $I_{i_j}$ is good, while the last equality makes essential use of $c = 1$ in the cancellation of the factors $K_1$. Letting $N \to \infty$, the left-hand side converges to the quenched free energy $f^q(\beta, u)$, while the right-hand side converges to the corresponding average over the good-block spacings, where $C$ is a constant different from the one appearing above. Recall that $i_2 - 1$ is a geometric random variable under $P^V$ with parameter $p^V_{\mathrm{good}}$. For $K$ sufficiently large, we have

$$C_\varphi := \inf\left\{\frac{x\,\varphi(kx)}{\varphi(k)} : x \ge 1,\; k \ge K\right\} > 0,$$

and we may assume that $K_1 \ge K$. By Lemma 3.1 and Lemma 3.2, the limit is then bounded below by

$$\frac{1}{2K_1}\left[\frac{K_2}{2M} + \log\big(C C_\varphi\, \varphi(K_1)\big) - 2\log 3\right].$$

Then, using the first inequality in (2.6), we get that, provided $M$ is sufficiently large, that is, $\Delta$ is small, this expression is strictly positive, and hence $f^q(\beta, u) > 0$. This completes the proof of Theorem 1.1.
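As a closing illustration (ours, not part of the original argument), the two-dimensional simple random walk case from the introduction makes the mechanism explicit. With $\varphi(n) \sim \pi/(\log n)^2$,

$$P^X(E > n) = \sum_{k > n} \frac{\varphi(k)}{k} \sim \int_n^\infty \frac{\pi\, dk}{k(\log k)^2} = \frac{\pi}{\log n},$$

which is indeed slowly varying. Hence, if $E_1, E_2, \ldots$ are i.i.d. copies of $E$, then for any fixed $a > 0$,

$$P^X\Big(\max_{j \le m} E_j \le m^a\Big) = \big(1 - P^X(E > m^a)\big)^m \sim \Big(1 - \frac{\pi}{a \log m}\Big)^m \longrightarrow 0,$$

so the longest of the first $m$ excursions typically exceeds any fixed power of $m$: bypassing an unfavorable stretch of disorder costs only a slowly varying factor, which is exactly what the proof exploits.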
CD133 antisense suppresses cancer cell growth and increases sensitivity to cisplatin in vitro

The increased incidence of cancer in recent years is associated with a high rate of mortality. Numerous types of cancer contain a low percentage of CD133+ cells, which have features similar to stem cells. The CD133 molecule is involved in apoptosis and cell proliferation. The aim of this study was to determine the biological effect of CD133 suppression and its role in the chemosensitization of cancer cell lines. RT-PCR and immunocytochemical analyses indicated that CD133 was expressed in the cancer cell lines B16F10, MCF7 and INER51. Downregulation of CD133 by transfection with an antisense sequence (As-CD133) resulted in a decrease in cancer cell viability of up to 52, 47 and 22% in the B16F10, MCF-7 and INER51 cancer cell lines, respectively. This decreased viability appeared to be due to the induction of apoptosis. In addition, treatment with As-CD133 in combination with cisplatin had a synergistic effect in all of the cancer cell lines analyzed and, in particular, significantly decreased the viability of B16F10 cancer cells compared with each treatment separately (3.1% viability for the combined treatment compared with 48% for 0.4 µg As-CD133 and 25% for 5 ng/µl cisplatin; P<0.05). The results indicate that downregulation of CD133 by antisense is a potential therapeutic approach for cancer and has a synergistic effect when administered with minimal doses of the chemotherapeutic drug cisplatin, suggesting that this combination strategy may be applied in cancer treatment.

Introduction

Cancer is a disease in which cells lose their normal control mechanisms and exhibit unorganized growth; it may thus develop in several tissues or organs, growing into and invading contiguous tissues and extending throughout the body (1). Cancer is associated with a high rate of mortality due to its capacity to disseminate rapidly and the lack of effective treatments (2-5). A number of cancer types originate from cancer stem cells (2,6-10). These cancer stem cells are important in tumor proliferation and resistance to chemotherapy and radiotherapy (11-13). Each type of tumor has a unique combination of markers that define the subpopulation of stem cells with the highest tumorigenic potential (14). For example, the stem cell marker CD133 is expressed in fetal liver but not in normal adult liver, and is re-expressed in liver cancers. This upregulation of CD133 is a factor associated with poor prognosis, suggesting that CD133 plays an oncogenic role in hepatocellular carcinoma (12,15-18). CD133 (or Prominin 1) is a membrane glycoprotein of 120 kDa in humans and 115 kDa in mice (19). Cancer stem cells that are positive for CD133 exhibit activation of a number of mechanisms responsible for tumor growth and recurrence (8-10) and inhibition of apoptosis (16,18,20,21). Characterization of CD133+ cancer stem cells aids the classification, diagnosis and treatment of cancer, and high expression of the CD133 protein has been associated with lymphatic and visceral metastasis (22), malignancy and poor prognosis (23). The aim of this study was to determine the effect of suppression of the CD133 protein in cancer cell lines and its role in chemosensitization, with a view to contributing to our understanding of CD133 in cancer stem cells as a possible therapeutic target.
Cell culture. Cell lines were cultured and maintained in Dulbecco's modified Eagle's medium (DMEM/F-12; Life Technologies, Invitrogen, Burlington, ON, Canada). The medium was supplemented with 10% fetal bovine serum (FBS; Gibco, Grand Island, NY, USA) and cells were incubated at 37°C in a 5% CO2 atmosphere.

Immunocytochemistry. B16F10, MCF7 and INER51 cells were grown on glass slides in 6-well plates (1x10^5 cells/well) with 3 ml DMEM/F-12 supplemented with 10% FBS for 24 h at 37°C and 5% CO2, and fixed with a 1:1 acetone-methanol solution for 10 min at -20°C. The cells were rehydrated in phosphate-buffered saline (PBS) and processed for antigen retrieval by a standard microwave heating technique prior to incubation with anti-CD133 antibody (Ab-CD133; Santa Cruz Biotechnology, Inc., Santa Cruz, CA, USA) at a dilution of 1:100. The reaction was developed using the Dako Liquid DAB Substrate-Chromogen system (Dako, Carpinteria, CA, USA) and the cells were counterstained with hematoxylin and eosin.

Transfection with As-CD133. B16F10, MCF7 and INER51 cell lines were transfected with As-CD133, with the pEGFP-N3 plasmid as a control (Clontech Laboratories, Inc., Palo Alto, CA, USA), using the cationic branched polymer polyethylenimine 25 kDa (PEI) (Sigma-Aldrich, St. Louis, MO, USA). A stock solution of PEI was prepared at a concentration of 6.45 µg/ml in H2O. The charge ratio, expressed as PEI nitrogen:DNA phosphate, was 5 (N:P=5). The cells were seeded at 3x10^3 cells/well in 100 µl DMEM/F-12 supplemented with 10% FBS in a 96-well plate 24 h before transfection. For each well, 0.1-0.6 µg of As-CD133 was diluted into 10 µl of 150 mmol/l NaCl, and 0.01-0.06 µl of the PEI solution was added to another 10 µl of 150 mmol/l NaCl. The PEI-NaCl solution was added to the DNA-NaCl solution, agitated and incubated for 30 min at room temperature. Then, 20 µl of the mixture was added to each well and the plate was incubated at 37°C in a 5% CO2 atmosphere. Cell viability was evaluated by MTT assay after 48 h.

Analysis of CD133 expression by RT-PCR (reverse transcription-polymerase chain reaction). B16F10, MCF7 and INER51 cell lines were plated in 6-well plates at 1x10^5 cells/well in 3 ml DMEM/F-12 supplemented with 10% FBS and incubated for 48 h at 37°C. Cells were harvested and total RNA was extracted using 1 ml TRIzol reagent (Invitrogen) according to the manufacturer's instructions. For RT-PCR, 5 µg of total RNA was reverse transcribed using RT and oligo(dT) (Invitrogen).

Cell viability analysis by 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay. The transfected cells were seeded in 96-well plates at a density of 3x10^3 cells/well and allowed to attach for ~24 h at 37°C. For the MTT assay, 0.025 g MTT (Sigma-Aldrich) was dissolved in 5 ml PBS to give a 5 mg/ml MTT solution. The cells were incubated with 20 µl of the MTT solution at 37°C for 1 h. The medium was then removed, 100 µl of dimethylsulfoxide was added to each well and the samples were incubated for 10 min. The optical density (OD) at 570 nm was determined using a microplate reader (Microplate Autoreader EL311; BioTek Instruments, Inc., Winooski, VA, USA). The data are shown as the percentage viability with the standard error.

Determination of DNA integrity by acridine orange staining. B16F10 cells (3x10^3 cells/well in a 96-well plate) were transfected with 0.4 and 0.6 µg of As-CD133 and incubated at 37°C in a 5% CO2 atmosphere. After 48 h, the cells were stained with 20 µl of a solution of ethidium bromide (1 mg/ml) and acridine orange (1 mg/ml) in PBS.
The cells were incubated for 5 min in the dark at room temperature and then washed with PBS. The samples were photographed under fluorescence microscopy (TE-Eclipse 300; Nikon).

RT-PCR of apoptotic genes. cDNA of B16F10 cells was amplified using the MPCR kit for mouse apoptotic gene set-1 (Maxim Biotech, Inc., San Francisco, CA, USA) according to the manufacturer's instructions, using a PTC-200 Peltier Thermal Cycler. The PCR products were analyzed by electrophoresis on a 0.8% agarose gel and visualized under UV light in a ChemiDoc transilluminator.

Synergistic effect of As-CD133 and cisplatin combination treatment on cancer cell viability. B16F10, MCF-7 and INER51 cells were seeded in a 96-well plate at 3x10^3 cells/well in 100 µl DMEM/F-12 supplemented with 10% FBS 24 h prior to transfection. Following the procedure described above, the cells were transfected with 0.4 µg As-CD133, with cisplatin added at the time of transfection (2-14 ng/µl, resuspended in DMEM/F-12 supplemented with 10% FBS). Cells were incubated for 48 h at 37°C in a 5% CO2 atmosphere and analyzed by MTT assay.

Results

Expression of CD133 in cancer cell lines. The RT-PCR analysis revealed that the three cancer cell lines analyzed, B16F10, MCF7 and INER51, all expressed high levels of CD133 mRNA (Fig. 1A). These results correlate with the immunocytochemistry, which showed that 70% of the cells were CD133+ (Fig. 1B).

Effect of CD133 downregulation by As-CD133 on cancer cell viability. To determine the effect of CD133 protein downregulation in cancer cells, the three cell lines were transfected with As-CD133 or the control pEGFP-N3. To determine the transfection efficiency, green fluorescent protein expression was visualized by UV microscopy, demonstrating that 70-80% of B16F10 cells were transfected, compared with only 20-30% of MCF7 and INER51 cells (Fig. 2A). Forty-eight hours after transfection with As-CD133, the three cell lines exhibited a decrease in cell viability and morphological changes. The MTT assay of cells treated with 0.6 µg As-CD133 indicated 48, 53 and 78% viability for the B16F10, MCF-7 and INER51 cancer cell lines, respectively (Fig. 2B), with a statistically significant difference between the control and treated B16F10 and MCF7 cell lines (P<0.05). These effects were dose-dependent (P=0.4). However, the decrease in the viability of INER51 cells was not statistically significant (P>0.05; Fig. 2B). To investigate the correlation between the decrease in cell viability and CD133 expression, immunocytochemical and RT-PCR analyses of CD133 expression were conducted. Immunocytochemical staining showed a decrease in the CD133 protein in B16F10 cells transfected with As-CD133 (Fig. 2C). RT-PCR with the CD133-1 primers corroborated antisense expression in transfected cells, while the CD133-2 primers indicated a decrease in CD133 mRNA expression when the cells were transfected with As-CD133 (Fig. 2D).

Analysis of DNA integrity in B16F10 cancer cells transfected with As-CD133. The analysis of DNA integrity with acridine orange showed staining of a high percentage (70-80%) of transfected B16F10 cells compared with the control, indicating that the transfected cells contained degraded DNA. This staining was dose-dependent with respect to the antisense vector (Fig. 3A), suggesting that the cell death mechanism induced by As-CD133 is apoptosis.

Analysis of apoptotic gene expression in B16F10 cells transfected with As-CD133.
Analysis of the expression of apoptotic genes by multiplex RT-PCR revealed overexpression of the p53 gene in cells transfected with As-CD133 (Fig. 3B). It is likely that the downregulation of CD133 in transfected B16F10 cells is correlated with a loss of DNA integrity and p53 activation, causing the cells to enter apoptosis.

Chemosensitization by As-CD133 in combination with cisplatin. To determine whether the inhibition of CD133 expression by As-CD133 has a chemosensitizing effect in B16F10, MCF7 and INER51 cells, the cells were co-treated with a median lethal dose (LD50) of As-CD133 (0.4 µg) and various concentrations of cisplatin (2-14 ng/µl). This combination produced a synergistic effect in B16F10 cells, since cell viability decreased significantly with the combination treatment compared with the individual treatments (3.1% viability for the combination compared with 48% viability for 0.4 µg As-CD133 and 25% for 5 ng/µl cisplatin; P<0.05). However, MCF7 and INER51 cells did not exhibit the same effect, and there was no statistical difference in cell viability between the individual and combined treatments in these cell lines (Fig. 4).

Discussion

The CD133 molecule is crucial in the survival of cancer cells, and our results showed that downregulation of the CD133 protein by an antisense construct resulted in a decrease in cancer cell viability. These results support the findings of other authors such as Immervoll et al (24), whose data indicated that CD133 is involved in cellular polarity and is required for cellular movement, as well as for the processes of chemotaxis, embryonic development, invasive growth and metastasis. In addition, Yang et al (25) reported CD133 involvement in glucose metabolism and cytoskeleton alteration. Additionally, Rappa et al (7) showed that the downregulation of CD133 resulted in retarded cell growth, reduced cell motility and a decreased ability to form spheroids under stem cell-like growth conditions. The findings of the present study also showed that the decrease in cancer cell viability following transfection with As-CD133 was most likely the result of increased cell death through an apoptotic mechanism. This pathway was likely activated via the pro-apoptotic gene p53, which was itself most likely activated by a member of the MAP kinase family, which responds to various types of stress by triggering upregulation of p53 expression (26). However, additional studies are required to confirm this pathway. The synergistic effect of an antisense sequence and an anticancer drug is likely to provide a good alternative treatment against CD133+ cancers, since downregulation of the CD133 protein may result in chemosensitization of cancer cell lines, as observed in the B16F10 cell line used in this study. This finding presents a potentially effective and promising approach to cancer therapy, which may decrease the required doses of chemotherapeutic drugs. In a previous study, Tirino et al (19) mentioned that CD133+ cells represent a small population of cells that possess stem features and are potentially resistant to drugs, and thus may effectively drive cancer progression. Dell'Albani (16) and Liu et al (20) reported that CD133+ cells express high levels of apoptotic suppressors (Bcl2, FLIP, BCL-XL) and several inhibitor of apoptosis proteins (XIAP, cIAP1, cIAP2, NAIP), which are linked to caspases 3, 7 and 9 to prevent apoptosis and modulate cellular division, as well as progression of the cell cycle and signal transduction pathways (16,20).
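As an illustration of how the viability percentages and the synergy claim above are typically quantified, here is a minimal Python sketch. The OD readings are invented placeholders chosen to reproduce the reported percentages, and the coefficient of drug interaction (CDI) shown at the end is a commonly used synergy index assumed for this sketch, not a calculation reported by the authors.

# Hypothetical MTT readout: OD570 values for control, single and combined treatments.
od_blank = 0.05
od = {
    "control": 1.20,
    "as_cd133_0.4ug": 0.60,    # placeholder, ~48% viability as reported for B16F10
    "cisplatin_5ng_ul": 0.34,  # placeholder, ~25% viability
    "combination": 0.086,      # placeholder, ~3.1% viability
}

def viability(sample_od, control_od, blank_od):
    """Percent viability relative to the untreated control."""
    return 100.0 * (sample_od - blank_od) / (control_od - blank_od)

v = {name: viability(x, od["control"], od_blank) for name, x in od.items()}
for name, pct in v.items():
    print(f"{name}: {pct:.1f}% viable")

# Coefficient of drug interaction: CDI = AB / (A * B), with viabilities as fractions.
# CDI < 1 indicates synergy (an assumption of this sketch; not used in the paper).
cdi = (v["combination"] / 100) / ((v["as_cd133_0.4ug"] / 100) * (v["cisplatin_5ng_ul"] / 100))
print(f"CDI = {cdi:.2f}")  # ~0.26 here, consistent with a synergistic interaction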
In the present study, the synergistic effect of antisense and cisplatin was not observed in the INER51 and MCF7 cell lines. For the INER51 cell line, it will be necessary to identify a more effective transfection method than polyethylenimine, since improved transfection efficiency may lead to results similar to, or even better than, those obtained with the B16F10 cells. Additionally, other drugs should be tested to obtain improved results with the MCF7 line. In conclusion, the findings of this study provide evidence that CD133 is important in the viability of cancer cells and suggest that CD133 downregulation by antisense, alone and in combination with cisplatin, is potentially a new and powerful therapeutic strategy for CD133+ cancers.
Practitioner and Director Perceptions, Beliefs, and Practices Related to STEM and Inclusion in Early Childhood

The importance of early science, technology, engineering, and math (STEM) learning opportunities for all young children has become increasingly documented by research and recommended practices. In addition, high-quality inclusive settings where all children can access and benefit from learning activities continue to demonstrate optimal outcomes for all children. This manuscript reports findings from a broadly disseminated survey of early childhood practitioners' and directors' perceptions related to STEM and inclusion, and explores what practices related to STEM and inclusion are currently being used by early childhood practitioners and directors. While the majority of respondents supported the importance of both STEM and inclusion, there were varied responses related to relevance for infants and toddlers and inconsistent reports of specific practices being used. The findings suggest the need to more explicitly emphasize, and provide professional development opportunities focused on, STEM and inclusion for our early childhood workforce. Additional implications for research and practice are discussed.

Supplementary Information: The online version contains supplementary material available at 10.1007/s10643-023-01476-w.

Science, technology, engineering, and math (STEM) have become recognized as critical elements in early childhood learning and later success (e.g., Sarama et al., 2018; Tippett & Milford, 2017). STEM learning has been linked to improved reading, writing, math, and literacy skills (Claessens & Engel, 2013; Clements et al., 2020; Paprzycki et al., 2017; Sarama et al., 2018). Furthermore, young children, as explorers, questioners, and problem solvers, are ideal candidates to engage in STEM learning experiences (Clements et al., 2020; Sarama et al., 2018). Thus, early intervention and early childhood practitioners should purposefully embed early STEM learning opportunities for the children with whom they work. Because early STEM is a relatively new field within early childhood, recommended practices for STEM learning are not yet widely available, but some existing models and resources support early STEM learning. The National Research Council (NRC, 2012) has developed a Framework for K-12 Science Education, which informed the Next Generation Science Standards; these represent and describe important scientific concepts, such as cause and effect and force and motion, for older children. These concepts are similarly important for younger children and have been adapted in models such as the Early Science Framework (Greenfield et al., 2009) and Teaching and Learning with Learning Trajectories (Clements & Sarama, 2012, 2021). However, consistent with findings relative to older children, current evidence suggests STEM is not as available and accessible for children with disabilities (Clements et al., 2020). Thus, the need to ensure all children are engaged in accessible, high-quality, inclusive STEM learning opportunities from their youngest years is pertinent. Furthermore, the importance and benefits of high-quality early intervention and inclusive early childhood education have been documented by research (e.g., Agran et al., 2020; Barton & Smith, 2015; Odom et al., 2019). High-quality inclusive environments are beneficial to all young children, including children with and without disabilities (Barton & Smith, 2015; Beatty et al., 2021; Noggle & Stites, 2018).
There are existing models that have been developed to guide high-quality inclusion, focused on ensuring child access to and participation in learning opportunities. The models generally do so by providing tier-based modifications and adaptations to the environment, materials, and instructional practices (e.g., Campbell et al., 2012; Carta & Young, 2019; Greenwood et al., 2014; Hemmeter et al., 2013, 2021; Sandall et al., 2019; Shepley & Grisham-Brown, 2019). In addition, the Division for Early Childhood of the Council for Exceptional Children (DEC) has developed a set of recommended practices to guide high-quality service provision, including inclusion, for practitioners working with young children with disabilities and their families (DEC, 2014). The National Association for the Education of Young Children (NAEYC) Developmentally Appropriate Practices also include ensuring high-quality inclusion for all young children (NAEYC, 2022). DEC and NAEYC have come together as a field to define early childhood inclusion as embodying "the values, policies, and practices that support the right of every infant and young child and his or her family, regardless of ability, to participate in a broad range of activities and contexts as full members of families, communities, and society" (DEC & NAEYC, 2009, p. 2). The position statement further explicates that "the desired results of inclusive experiences for children with and without disabilities and their families include a sense of belonging and membership, positive social relationships and friendships, and development and learning to reach their full potential" (DEC & NAEYC, 2009, p. 2), and that the key defining features of inclusion are access, participation, and support (DEC & NAEYC, 2009). Taken together, these findings suggest that early intervention and early childhood practitioners should focus on providing high-quality inclusive services for each and every young child.

Attitudes and Beliefs Related to STEM and Inclusion

In order to support STEM learning for young children with and without disabilities, the attitudes and beliefs of early childhood practitioners should be examined. Research suggests that practitioners' attitudes and beliefs play a crucial role in what content is addressed and how STEM is taught in early childhood settings (Maier et al., 2013). Of note, the majority of research on practitioner perceptions and beliefs related to STEM learning has been conducted relative to math and science, and suggests that teachers' positive attitudes toward, and increased knowledge of, science are critically important for child learning (Fleer, 2009; Krapp & Penzel, 2011; Nilsson & Van Driel, 2011; Westman & Bergmark, 2014). Yet research also suggests early educators often are not adequately prepared for, or confident in, their ability to teach or facilitate STEM experiences (Brenneman et al., 2009; Sarama & Clements, 2009). Early childhood teachers report not possessing the knowledge, skills, and/or confidence to teach STEM in their work settings, leading to anxiety and feelings of inadequacy (Pendergast et al., 2015). In addition to negative feelings related to STEM, some research suggests practitioners may also hold negative beliefs related to inclusion (Barton & Smith, 2015; Sharma & Hamilton, 2019). Similar to practitioners' beliefs about early STEM, researchers have found that early childhood educators feel less prepared to teach children with disabilities (Chadwell et al., 2020; Cruz et al., 2020).
In all, while previous studies have examined early childhood educators' attitudes and beliefs about STEM or inclusion separately, none has focused on practitioners' perceptions of STEM learning for young children with disabilities or for infants and toddlers. Most were also focused on math or science alone. Additionally, most studies on STEM or inclusion were focused on early childhood educators and did not include early intervention providers, early childhood special education practitioners, or directors. This suggests the need to further explore how early childhood practitioners and directors perceive STEM and inclusion and whether they are currently embedding these content areas and approaches into their practice. From here on, we use the term early childhood practitioners to refer to early childhood educators, early intervention providers, and early childhood special educators, and the term directors to refer to the leaders of early intervention agencies and early childhood centers.

The purpose of the present study was to examine early childhood practitioners' and directors' perceptions and beliefs related to STEM and inclusion in early childhood settings. In this manuscript, we use the definition of inclusion agreed upon by two international early childhood professional organizations, DEC and NAEYC, which primarily focuses on the inclusion of children with disabilities. We addressed the following research questions: (1) What are practitioners' and directors' feelings and beliefs related to STEM and inclusion in early intervention and early education? (2) What are practitioners and directors doing related to STEM and inclusion in early intervention and early education? (3) What challenges do providers and directors/leaders face related to STEM and inclusion in early intervention and early education?

Method

We received approval to conduct this study through the Institutional Review Board at a university in the southeastern United States. Survey methodology was employed to address the research questions above. In the following section, we describe the survey development process, dissemination, and data analysis procedures.

Survey Development and Cognitive Interviews

The initial survey was developed through a three-phase process. During the first phase, members of the research team drafted multiple-choice questions, comment opportunities, and Likert scale matrices based on feelings, beliefs, and practices related to STEM and inclusion. Because of the different levels of extant research and models for each of the STEM domains discussed above, with more addressing math and science than engineering and/or technology, we developed the survey to address items related to science, technology, engineering, math, and STEM as separate concepts to inform our findings. During the second phase, internal research team members with expertise in STEM and inclusion reviewed the survey. The reviewers noted a few discrepancies; for example, in a question addressing curriculum used in center-based settings, one of the survey choices was an assessment, not a curriculum. Based upon feedback from the reviewers, the team made the necessary edits. Following internal edits, the final phase of the survey development involved conducting four cognitive interviews with early childhood practitioners to finalize the survey. This process is described below. Of note, the team developed surveys for both practitioners and directors in early childhood settings, but the development phases focused on the practitioner survey.
The director survey was adapted slightly from the practitioner survey with regard to wording. For example, items addressing caseloads and/or classrooms were adapted to be about staff and programs. The cognitive interview process included participants completing the working draft of the survey in Qualtrics and then engaging in semi-structured interviews about the survey and their thought processes while completing it (Collins, 2015). These interviews served to ensure that the survey was readable, understandable, and relevant to current early childhood practitioners. Interview protocols began with questions about the survey topics (e.g., "In your own words, what was this survey asking you about?" and "What does the term 'early STEM' mean to you?"). The protocol also included questions on the style (e.g., "How useful were the additional open-ended questions and optional comments?") and question formats (e.g., "How well were you able to provide answers to questions that allowed multiple choices such as [item numbers]?" and "How easily were you able to provide answers to questions that asked you to agree/disagree with specific statements such as [item numbers]?"). The interviews also included questions on definitions and terminology (e.g., "Do you feel the definitions provided on the survey made sense to you?"). We recruited four early childhood practitioners to participate in the interviews through personal contacts. They each completed a draft of the survey and engaged in a semi-structured cognitive interview. One participant was a home-based early intervention provider; one was a Head Start teacher; one was an itinerant teacher of young children; and one was an occupational therapist providing early intervention services. Three of the four interviews were conducted in person, with one conducted virtually due both to the COVID-19 pandemic and to the location of the provider. The providers found the survey to be understandable and reported that the majority of questions were relevant to their work settings. They appreciated the concept definitions included in the survey and suggested a few more specific references to home visits and 1:1 sessions. They also suggested adding a few specific curricula or approaches that may be used in early childhood settings (e.g., Structured TEACCHing; TEACCH; Mesibov et al., 2005).
When all suggested edits and modifications had been made following the internal expert and practitioner reviews, the final surveys for both practitioners and directors included five blocks: (a) demographics (24 questions) addressing roles, settings, race/ethnicity, and general curricula and approaches used; (b) importance and developmental appropriateness of STEM and inclusion (11 questions), including Likert-style matrices on which participants rated, from strongly disagree to strongly agree, the importance and developmental appropriateness of each STEM domain and of STEM as a whole; (c) inclusive practices used (6 questions), including a matrix that asked participants whether they used specific models and approaches such as the Pyramid Model and Universal Design for Learning; (d) STEM practices used (6 questions), including a matrix that asked participants whether they embed specific STEM content such as counting and sequencing and specific models such as the Early Science Framework and Learning Trajectories; and (e) challenges faced accessing and using inclusive and STEM practices (12 questions), including Likert scales that asked participants to rate the extent to which common challenges (time, funding, etc.) were perceived as challenges in addressing STEM and inclusion in practice. In total, there were 59 questions on the survey. When referencing perceptions about STEM, each domain (science, technology, engineering, and math) was addressed separately as well as STEM as a whole. The research team developed definitions for conceptualizing each domain and STEM as a whole, drawing from existing literature and recommended practices (e.g., National Research Council; NRC, 2012). Optional open-ended comments were embedded throughout the survey. See Fig. 1 for examples of items in each of the above sections and the definitions provided for the STEM domains. Surveys for practitioners and directors were available in English and Spanish.

Fig. 1 Sample survey items

Data Collection

The survey was disseminated through a variety of channels beginning in the winter of 2020 through the spring of 2021, with consent written into the introductory script of the survey. We posted links to the survey on the research team's social media platforms and website, which have an international following of people interested in early STEM and inclusion. In addition, contacts and listservs of existing groups (e.g., DEC, NAEYC, and the Supporting Change and Reform in Preservice Teaching in North Carolina; SCRIPT-NC) that included early childhood practitioners and directors were emailed with an invitation to complete the survey. Eligibility criteria included being 18 years or older and working as a practitioner or director in any capacity focused on early intervention and/or early childhood. Identifiable data were not collected, but upon completion of the survey, participants were able to opt into a $50 gift card drawing. They were also able to opt in to join the research team's listserv and/or agree to follow-up by sharing their email addresses. The survey allowed respondents to skip questions and still submit their survey, so respondents who skipped questions were not excluded from the analysis. All data were collected via the Qualtrics online survey program. The research team was prepared to provide paper copies for anyone who requested them, but there were no requests. Thus, all data were collected electronically.
Participants

In all, 160 practitioners and 120 directors had completed and submitted the survey by the survey closure date in the spring of 2021. Response rates were calculated using emails and responses as recorded by Qualtrics. For practitioners, we used Qualtrics data to determine a response rate from reported email responses: 2570 emails were sent, and 130 practitioners responded via this method, indicating a response rate of 5%. For directors, Qualtrics recorded all responses as originating from anonymous links, so we were unable to calculate email response rates. Of note, a number of survey responses that were started did not move beyond the captcha, suggesting they may have been accessed by bots. Of the 160 practitioners who filled out the survey, 99% (n = 159) were female. Of the directors, 100% (n = 120) were female. Most practitioners (76%; n = 120) and directors (70%; n = 84) held a bachelor's degree or higher. In addition, most practitioners (81%; n = 130) and directors (86%; n = 108) were White. Many practitioners worked as classroom/center-based practitioners, and most directors worked in centers. Figures 2 and 3 provide details on practitioner and director roles. Child age ranges varied, with practitioners reporting serving children from 0 to 5 years old (41%; n = 65), primarily 0-2 years old (15%; n = 24), and primarily 3-5 years old (31%; n = 49). There was less variance among the directors, who reported that their centers served children from 0 to 5 years old (55%; n = 66) and primarily 3-5 years old (22%; n = 26). Only 1 director (1%) reported their center served children primarily 0-2 years old. Additional demographic data may be viewed in Table 1. As seen in Tables 2 and 3, some respondents reported previous training in STEM and inclusion, with the highest percentage of participants (73%; n = 101 practitioners and 73%; n = 82 directors) reporting professional development opportunities in early STEM. In addition, although the survey was disseminated on a national level, the number of respondents was not consistent across states, with a notably higher frequency of respondents in Kansas (26%; n = 42 practitioners and 24%; n = 29 directors), North Carolina (12%; n = 21 practitioners and 4%; n = 5 directors), and Vermont (8%; n = 13 practitioners and 18%; n = 22 directors). Individual states of residence may be viewed in Fig. 4.

Data Analysis

Descriptive statistics were used to address each research question and obtain an overall sense of early childhood practitioners' and directors' perceptions related to STEM and inclusion. Specifically, data were viewed and charted to provide a visual representation of perceptions and trends. Findings are described below. Responses to open-ended comments on the survey were analyzed using qualitative content analysis with consensus coding (Neuendorf, 2017). Specifically, we pulled the responses into a single Excel spreadsheet and used them to create categories and themes around their meaning. Of note, all the open-ended questions were optional and linked to survey items, so there were a limited number of responses. The majority of categories were determined a priori, being the topic of the question itself. For example, after filling out a matrix on specific STEM content and approaches used in practice, respondents saw an optional prompt to "Please share any comments or additional examples of teaching STEM in your work", and these responses generally fell into the category "Additional examples of STEM teaching".
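For illustration only, the a priori categorization step described above can be sketched roughly as follows. This is a minimal sketch, not the authors' actual analysis code: the file name and column names are hypothetical placeholders for a Qualtrics export.

```python
import pandas as pd

# Hypothetical Qualtrics export: one row per respondent, one column per
# optional open-ended comment, each linked to a closed-ended survey item.
responses = pd.read_csv("survey_export.csv")  # hypothetical file name

# A priori categories: each open-ended column inherits the topic of the
# survey item it was attached to.
a_priori_categories = {
    "comment_stem_practices": "Additional examples of STEM teaching",
    "comment_inclusion_barriers": "Barriers to inclusive practices",
}

records = []
for column, category in a_priori_categories.items():
    for text in responses[column].dropna():
        records.append({"category": category, "comment": text})

coded = pd.DataFrame(records)

# Descriptive summary: number of optional comments per a priori category.
print(coded["category"].value_counts())
```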
All open-ended comments were coded by two team members, with weekly consensus meetings to establish the credibility and reliability of the open-ended data (Saldana, 2016).

Results

Results from the survey are reported below in text, with supplementary figures and tables as noted. The terms "majority", "most", and "many" are used interchangeably to mean at least 60% of respondents. Responses to survey items were generally consistent between practitioners and directors, with the exceptions of the specific models reported and the challenges faced, as described below. Thus, findings from both surveys are reported, but the majority of the content is combined.

Fig. 3 Percent of director respondents by role (center-based directors, preschool directors, other, no response)

RQ1: Feelings and Beliefs Related to STEM and Inclusion in Early Intervention and Early Education

Overall, the importance and developmental appropriateness of STEM, each STEM domain, and inclusion (as evidenced by consistent responses for children with and without disabilities) were widely supported by respondents. There were a few discrepancies within the individual domains of engineering and technology, which were perceived as slightly less important and developmentally appropriate for infants and toddlers. When asked about feelings and beliefs related to STEM's importance for young children with and without disabilities, many practitioners (95%; n = 150) and directors (93%; n = 111) agreed or strongly agreed that STEM is important for infants and toddlers. However, a notable number of practitioners rated the importance of engineering for infants and toddlers lower (10%; n = 16 disagreed or strongly disagreed), and both practitioners (19%; n = 30 disagreed or strongly disagreed) and directors (15%; n = 20 disagreed or strongly disagreed) rated the importance of technology for infants and toddlers lower. This sentiment was echoed for infants and toddlers with disabilities. For preschool-aged children with and without disabilities, most practitioners and directors agreed or strongly agreed that all domains were important. Similar responses were recorded for the developmental appropriateness of each domain, with the lowest agreement rates for infants and toddlers (with and without disabilities) on the appropriateness of technology (83%; n = 129 practitioners; 81%; n = 96 directors) and engineering (89%; n = 138 practitioners). Thus, most practitioners felt all domains were both important and developmentally appropriate for young children, with the largest discrepancies for infants and toddlers related to technology and engineering. Figure 5 provides a display of these data. A fair number of practitioners (29%; n = 45) and directors (19%; n = 22) reported that they did not know how to find information on teaching early STEM concepts. Additionally, 26% (n = 32) of directors reported their staff did not know how to find information on early STEM learning. Many practitioners (85%; n = 132) reported feeling supported to teach STEM concepts. When asked about their staff's comfort level teaching these concepts, over 30% of directors expressed that their staff were not comfortable teaching early technology (32%; n = 37) or engineering (35%; n = 42). Most practitioners (78%; n = 120) and directors (73%; n = 88) reported they were interested in learning more about teaching STEM concepts to their children.
As it relates to inclusion, many practitioners and directors reported serving children with and without disabilities, while 22.5% (n = 36) of practitioners and 15% (n = 18) of directors reported they did not serve children with disabilities in their setting. Ninety percent (n = 140) of practitioners and 92% (n = 110) of directors agreed or strongly agreed that they know how to access information about inclusive practices, and 94% (n = 128) of practitioners felt supported in their work to use inclusive practices. Many practitioners (86%; n = 129) felt confident in their ability to use inclusive practices, whereas 14% disagreed or strongly disagreed. Directors agreed slightly less (73%; n = 88) when asked whether their staff were adequately prepared to use inclusive practices. Seventy percent (n = 110) of practitioners and 66% (n = 76) of directors desired to learn more about using inclusive practices in their work.

RQ2: Practices Related to STEM and Inclusion in Early Intervention and Early Education

In general, survey respondents reported using a variety of specific models related to STEM and inclusion and embedding a range of STEM-related content into their practice. The majority of practitioners (69%; n = 109) and directors (69%; n = 83) reported that they currently integrate STEM learning opportunities with their children, but a notable number (16%; n = 25 practitioners and 20%; n = 24 directors) reported they did not. Fourteen percent (n = 23) of practitioners and 10% (n = 13) of directors indicated that they were not sure whether they embed STEM learning into their practice. In the open-ended comments, a few practitioners shared that they would like to integrate more STEM learning; as one practitioner put it, "I am not integrating STEM learning experiences as much as I would like to", and others reported challenges to focusing on STEM learning, such as, "I try, but it is difficult with this many children in such a small room". Directors said such things as "I think the teachers do try, but it's more by accident that integrated STEM occurs." and "STEM is limited due to lack of budget for materials, lack of space in classrooms, and staff not comfortable with it." Only 25% (n = 39) of practitioners reported very frequently teaching early STEM concepts, and 30% (n = 47) reported teaching STEM somewhat frequently. Practitioners also reported teaching STEM not much (31%; n = 49) or not at all (13%; n = 21). Similarly, only 21% (n = 25) of directors reported very frequent early STEM teaching in their setting, and 40% (n = 46) reported somewhat frequent teaching. Thirty-four percent (n = 40) of directors reported their staff teach STEM not much, and 6% (n = 7) reported their staff do not teach STEM at all. Regarding specific STEM models and concepts, practitioners and directors gave very similar replies, reporting a number of STEM practices. Almost all practitioners and directors reported teaching their students about STEM concepts such as creativity (97% in both groups), counting (97% in both groups), collaboration (95% practitioners; 94% directors), and sequencing (92% practitioners; 95% directors). They agreed that they focused on identifying and comparing shapes (92% practitioners; 91% directors), comparing numbers (91% practitioners; 97% directors), and life science (89% practitioners; 89% directors). On the other hand, few Fuchs & Fuchs, 2006). It is notable that directors perceived their staff as using these models at a slightly lower agreement rate.
Some practitioners and directors also reported using the Division for Early Childhood of the Council for Exceptional Children (DEC) Recommended Practices (67% practitioners; 51% directors; DEC, 2014) and Creating Adaptations for Routines and Activities (60%). Fig. 7 provides a summary of these responses.

Challenges Accessing Information about STEM and Inclusion

There were more discrepancies between practitioners and directors when it came to perceived challenges related to STEM and inclusion. As seen in Table 4, when asked about specific challenges encountered accessing information on STEM, some practitioners and directors reported lacking resources (47% practitioners; 59% directors), lacking funding (46% practitioners; 64% directors), lacking time (46% practitioners; no data for directors), and lacking evidence-based practices (23% practitioners; 26% directors) as obstacles. It is noteworthy that throughout this question, directors reported more perceived challenges than did practitioners. Additional barriers to accessing information listed in the optional comments included challenges working individually with children, not knowing where to find the resources, lack of professional understanding, child and staff safety, low staffing, variety of funding sources, and lack of networking opportunities. There was a similar pattern, with slightly higher agreement between practitioners and directors, in the responses related to perceived barriers in accessing information about inclusive practices, as seen in Table 5. Only some practitioners (37%) agreed that lacking resources was an obstacle, while more practitioners did not perceive this as an obstacle (46%); seventeen percent reported this did not apply to their setting. Half of the directors (50%) perceived lacking resources to be a barrier to accessing information about inclusion. Slightly more directors (60%) than practitioners (44%) agreed that lacking funding was an obstacle to accessing information about inclusive practices, while almost half of practitioners and nearly a quarter of directors did not perceive lacking funding as an obstacle. Similar numbers were reported regarding lack of time as a challenge. Fewer respondents (18% practitioners; 22% directors) perceived a lack of evidence-based practices as a barrier to information about inclusive services, and more than half of directors and practitioners did not perceive this as a barrier. Respondents listed additional obstacles faced in accessing information about inclusive practices, which included lack of training/education/professional development, too many children on a caseload, keeping up with standards, and lack of networking opportunities.

Challenges Using and Applying STEM and Inclusion

As it relates to using and applying STEM content, nearly half of the practitioners (46%; n = 68) and over half of the directors (59%; n = 71) agreed that lacking training/professional development was a barrier. More practitioners (43%; n = 65) than directors (34%; n = 38) noted other demands as obstacles faced in practice. States' prioritization of other disciplines such as literacy was also reported as a barrier by some practitioners (39%; n = 65), as were STEM activities taking more time than other activities (34%; n = 52) and lacking confidence teaching STEM concepts (38%; n = 58).
Directors varied a bit, with few (10%; n = 11) agreeing that alternative state priorities were a barrier, some (27%; n = 30) agreeing that STEM takes more time than other activities, and about half (49%; n = 56) agreeing that a lack of confidence teaching STEM is a barrier. Few practitioners (18%; n = 28) and very few directors (5%; n = 6) stated that lack of interest was an obstacle. These data may be viewed in Table 6. Additional barriers reported included challenges determining how to embed such content, short days for preschoolers, and difficulty translating STEM concepts for families in early intervention settings. When asked about challenges faced using and applying inclusive practices, the highest number of practitioners (37%; n = 57) and directors (54%; n = 61) agreed that lacking training/professional development was an obstacle. A lack of confidence was reported by a little over a quarter of respondents (29%; n = 44 practitioners and 27%; n = 31 directors). Similarly, some practitioners (15%; n = 17) felt the field demanded focus on other content and that there was not enough time to use inclusive practices. Most practitioners (74%; n = 114) and directors (79%; n = 90) did not agree that a lack of interest was a barrier to using inclusive practices. Additional barriers to using and applying inclusive practices listed in the optional comments included needing additional staff and colleagues' lack of knowledge of the importance of inclusive practices in early childhood.

Desire to Learn More about STEM and Inclusion

Finally, when asked which aspects of STEM and inclusion they desired to learn more about, practitioners' and directors' responses varied. As seen in Fig. 8, many respondents (86%; n = 131 practitioners and 88%; n = 98 directors) reported a desire to learn more about STEM and inclusion as it relates to model inclusive early childhood programs and professional development opportunities. Most respondents were interested in learning how to teach inclusive STEM (78%; n = 119 practitioners and 80%; n = 90 directors) and identifying available funding sources (75%; n = 115 practitioners and 89%; n = 92 directors). Only a little over half of the respondents (57%; n = 86 practitioners and 60%; n = 68 directors) reported they desired to learn about why STEM is important. Directors (73%; n = 81) and practitioners (74%; n = 114) were also interested in learning more about research related to early STEM learning and inclusion in early childhood. Additional topics related to early childhood STEM and inclusion that practitioners desired to learn more about included assessment, storing objects for outdoor classrooms, STEM for toddlers and families, and sharing information with childcare providers.

Discussion

The purpose of this study was to explore early childhood practitioners' and directors' attitudes and beliefs about STEM and inclusion, what is occurring in current practice related to STEM and inclusion, and the challenges faced in accessing and using inclusive and STEM practices. Most respondents perceived STEM and inclusion as important and reported using some practices related to STEM and inclusion, but there were a few notable discrepancies. Similar to previous research on STEM or inclusion, challenges to including such content in practice included lack of time, funding, and training (e.g., Barton & Smith, 2015; Brenneman et al., 2009; Sarama & Clements, 2009).
Implications for research and practice are discussed below.

Perceptions

Although STEM is a relatively new element in early childhood, survey respondents were supportive and knowledgeable about most early STEM learning opportunities. A notable discrepancy was in the realms of technology and engineering for infants and toddlers with and without disabilities: specifically, some respondents did not agree that technology and engineering were important for infants and toddlers. For technology, this may be due to perceptions of what technology entails. Foundational, low-tech concepts of computational thinking, such as sequencing, repetition, and causation, are all very relevant to our youngest learners (e.g., Bers et al., 2021). In fact, although some respondents did not feel technology was important or age appropriate for infants and toddlers, "sequencing" was one of the concepts most respondents agreed was regularly embedded into their teaching or practice. Thus, there may be a need to clarify that technology in early childhood is more than playing on a screen. There are many critical elements that can be incorporated into early learning to prepare children for success in computational thinking careers. A similar educational approach may be needed as it relates to engineering. Engineering-informed design approaches and investigations are motivating for children and educators alike and certainly have a place in early intervention and early education. Relative to inclusion, most survey respondents served children with disabilities and reported they were using, and were confident in their ability to use, inclusive practices. However, there were also some who did not feel they and/or their staff were prepared to use inclusive practices. This is similar to previous studies showing that early childhood educators felt less prepared to teach children with disabilities (Chadwell et al., 2020; Cruz et al., 2020). There was also a fair number of respondents who did not desire to learn about using inclusive practices. It is possible that these respondents did not want to learn more because they felt confident in their skills or were not currently working with children with disabilities. In addition, many early intervention providers see children on their caseload on an individual basis, so they may not perceive this as important to their practice. It is also worth noting that these results are based on perceptions, so some respondents may have reported using inclusive practices simply because they serve children with and without disabilities in the same classroom. Inclusion involves creating adequate modifications and adaptations so that all children may access and learn in their environment (DEC, 2014; DEC & NAEYC, 2009). Nonetheless, recommended practices are continually being updated and modified, so the need to stay informed is critical to meeting optimal outcomes in early childhood service provision (Kasprzak et al., 2020). Importantly, inclusion leads to optimal outcomes for young children (Agran et al., 2020; Barton & Smith, 2015; Odom et al., 2019). These findings may point to the need to further emphasize the importance of inclusion in early childhood and to provide continuing education opportunities related to inclusion. There were minimal differences between respondents' perceptions regarding the importance of STEM domains for infants and toddlers with and without disabilities, and for preschoolers with and without disabilities.
It is encouraging that practitioners and directors were not differentiating the importance of STEM for children with and without disabilities, which suggests they are using an inclusive lens. This is particularly notable given recent research suggesting that children with disabilities are not always able to access early STEM learning (Clements et al., 2020). The findings from this survey may indicate progress towards supporting all young children to have equal access and opportunities to learn STEM concepts in early childhood.

Practices

Survey respondents reported using many specific practices related to inclusion and STEM. The Pyramid Model was used by almost all the practitioners, which is very encouraging for providing the adaptations and modifications necessary for all young children to meaningfully engage in early learning opportunities and develop social emotional competence. The Pyramid Model has been supported by research related to improved child outcomes (Hemmeter et al., 2015) and lower rates of child expulsion (Fox et al., 2021; U.S. Department of Health and Human Services and Education, 2015; Vinh et al., 2016), which is promising for its effectiveness in improving inclusive practices in early childhood settings. On the other hand, fewer survey respondents reported using the DEC Recommended Practices (DEC, 2014) in their setting, suggesting a need to disseminate them more widely or to ensure they are more usable and relevant. In addition, systematic research is needed on how practitioners access and use the Recommended Practices in their work. While directors still reported using inclusive models, it is noteworthy that their total reported usage was lower than that of practitioners. This could be related to a lack of awareness of exactly what is happening in classrooms or a lack of education about inclusive practices. Regardless, ensuring that all early childhood professionals use inclusive practices should be a priority of the field. Directors in particular are critical in ensuring supports are in place (e.g., planning time, professional development) to enable practitioners to plan and use inclusive practices effectively. For STEM concepts and models, survey respondents indicated that general concepts seem to be integrated more than official models, frameworks, and resources. This raises the question of whether early childhood professionals are truly considering these concepts as early STEM learning, particularly as they relate to technology and engineering. The field of early childhood should focus on increasing knowledge and awareness of what STEM is and its importance and relevance for all children from birth. Early childhood practitioners and directors may benefit from guidance and technical assistance to support purposeful STEM learning and the use of STEM vocabulary in early childhood settings. Further, the limited usage of existing models and resources for early childhood professionals supports the need to share these resources more widely. While practitioners and directors agreed on many of the survey items, an interesting finding from this study involves their relatively different perceptions of the challenges and barriers to embedding STEM and inclusive practices. For both STEM and inclusion, more directors than practitioners perceived a lack of training/professional development as a barrier. Challenges related to other demands and lack of time were perceived more by practitioners than directors.
A little over a quarter of both groups perceived lack of confidence in using inclusive practices as a barrier, but more directors than practitioners reported lack of confidence in using STEM as a barrier. This is concerning and points to the need to ensure that training and ongoing professional development opportunities related to STEM and inclusion are ample and accessible across all levels of early childhood service provision. There is a need to focus on empowering all professionals to increase their confidence and competence in teaching STEM content and facilitating STEM experiences. It is interesting to note that low numbers of both groups (under 20%; slightly more practitioners than directors) agreed that a lack of interest was a challenge to STEM and inclusion. Only around half of the survey respondents reported a desire to learn why STEM is important; it is possible this percentage was lower because many respondents felt they were already aware of the importance of STEM learning. Overall, these findings suggest that most early childhood professionals are indeed interested in learning more about STEM and inclusion, particularly those in director roles. Thus, increasing early STEM and inclusion training opportunities would likely be well received by early childhood professionals. A final consideration to note is the idea of teaching STEM and inclusion together. Although findings on each of the content areas are reported separately in this manuscript, the intent is that the two are inherently used and applied together. Ensuring that all young children are able to access and engage in STEM experiences early can empower our educators and our youngest learners and help to close the STEM opportunity gap (Clements et al., 2020).

Limitations

There are several limitations to this study. First, while the survey was disseminated nationally and widely, the total number of respondents at the survey closure was somewhat low. Perhaps impacting the number of responses, the survey was disseminated during a global pandemic, when the early childhood workforce decreased significantly and was undergoing significant stressors and competing priorities. Second, although disseminated widely, the sample is skewed, with a small number of states (Kansas, North Carolina, and Vermont) more prominently represented. Thus, although the findings provide useful information, this is not a nationally representative sample. Third, the individuals who filled out the survey all did so online, suggesting they had access to technology and devices to do so. Many respondents also received information about the survey through the research team's or their partners' websites and social media accounts, so they may have already been interested in and aware of STEM and inclusion in early childhood. This may have skewed the results of the survey. Finally, the cognitive interview process was conducted only with practitioners and not directors. Although the modifications to the director survey were minimal, it is possible the director survey could have been more relevant had interviews also been conducted with directors.

Future Directions

Although many of the survey responses were encouraging, due to the smaller sample size and some variability in responses, future directions certainly include more research in the field of STEM and inclusion together.
The survey provides important information about practitioners' perceptions, but observations and interviews could further the field's understanding of what is happening and what is needed related to STEM and inclusion. Currently there are no observation tools related to the implementation of early childhood inclusive STEM instruction. Therefore, observations using the Inclusive Classroom Profile (Soukakou, 2016) in conjunction with a tool such as the Early Childhood STEM Classroom Observational Protocol (Milford & Tippett, 2015) could support the field's understanding of practitioners' use of inclusive practices and their STEM instruction, respectively. These observations could provide further insight into what types of professional development are needed relative to embedding inclusive STEM. Future interviews could allow practitioners and directors to explain their perceptions and explore whether perceptions have changed as a result of practice or increased awareness. In addition, the majority of respondents were interested in learning more about STEM and inclusion, suggesting a focus on developing high-quality professional development, resources, and technical assistance to support these needs.

Conclusion

In sum, these findings suggest that early childhood practitioners and directors report knowing and using many STEM and inclusive practices, with the largest discrepancies in technology and engineering for infants and toddlers. Participants indicated that they faced barriers to accessing and using STEM and inclusive practices, such as a lack of time and professional development opportunities. The findings indicate a need for increased efforts to better understand how to ensure that high-quality inclusive STEM practices occur regularly in practice, particularly for infants and toddlers. Future research should address professional development models and implementation practices to support embedding STEM learning opportunities into daily routines and activities in centers and in homes. STEM and inclusion are both critical elements for success in our youngest learners, so we need to ensure that professionals in the early childhood field (serving children from birth) are aware of, proficient in, and regularly using STEM and inclusive practices with all young children for optimal outcomes.

Funding: This research was supported by the Office of Special Education Programs #H327G180006 awarded to the University of North Carolina at Chapel Hill.
Overexpression of Notch3 and pS6 Is Associated with Poor Prognosis in Human Ovarian Epithelial Cancer

Notch3 and pS6 play important roles in tumor angiogenesis. To assess the expression of Notch3 and pS6 in Chinese ovarian epithelial cancer patients, a ten-year follow-up study was performed by immunohistochemistry on tissues from 120 specimens of human ovarian epithelial cancer, 30 specimens from benign ovarian tumors, and 30 samples from healthy ovaries. The results indicate that the expression of Notch3 and pS6 was higher in ovarian epithelial cancer than in normal ovary tissues and in benign ovarian tumor tissues (p < 0.01). In tumor tissues, Notch3 expression and pS6 expression were negatively but not significantly associated with age (p > 0.05) and positively associated with clinical stage, pathological grading, histologic type, lymph node metastasis, and ascites (p < 0.05 or p < 0.01). A follow-up survey of 64 patients with ovarian epithelial cancer showed that patients with high Notch3 and pS6 expression had a shorter survival time (p < 0.01), in which clinical stage (p < 0.05) and Notch3 expression (p < 0.01) played important roles. In conclusion, Notch3 and pS6 are significantly related to ovarian epithelial cancer development and prognosis, and their combination represents a potential biomarker and therapeutic target in ovarian tumor angiogenesis.

Introduction

Ovarian cancer represents one of the most aggressive neoplastic diseases in women, and 75% of patients are diagnosed at an advanced stage due to the lack of biomarkers for early diagnosis [1]. In 2012, ovarian cancer occurred in 239,000 women and caused 152,000 deaths worldwide and was more common in North America and Europe than in Africa and Asia [2]. Until now, the molecular etiology of this cancer has remained mostly unknown, and it is therefore of great importance to explore the association of key proteins with poor prognosis in human ovarian epithelial cancer (the major histological type of ovarian cancer). Notch signaling is a highly conserved cell-cell communication system present in multicellular organisms and has a well-established role in a variety of physiological and pathological processes, including cancer development [3]. Notch3, one of the Notch receptors (Notch1, Notch2, Notch3, and Notch4), plays an important role in promoting ovarian tumorigenesis, cancer progression, and chemotherapy resistance via activation of the PI3K/Akt/mTOR signaling pathway [4]. Ribosomal S6 kinase (S6K), a downstream effector of the PI3K/Akt pathway, is frequently activated in human ovarian cancer [5] and is significantly more prevalent in malignant tumors than in benign lesions. pS6 kinase is also involved in other aspects of cancer progression in addition to its well-established role in regulating proliferation and cell survival [5][6][7].
Although the roles of Notch3 and S6K in cancer development have been studied separately, no study has been carried out to combine the expression of Notch3 and S6K in relation to the prognosis of human ovarian epithelial cancer. It is known that Notch3 and S6K may complement their common functions in cancer development, but their roles in specific tumors are unique and context-dependent [8][9][10][11]. In the present study, we first investigated the expression of Notch3 and S6K in human ovarian epithelial cancer, to verify their expression in relation to clinicopathological features and prognosis and to further evaluate their potential value as biological markers of aggressiveness in ovarian cancer, with the goal of improving the management of ovarian cancer patients.

Ethics Statement. Patient samples were obtained with written informed consent in accordance with ethics committee requirements at the participating institutes and the Declaration of Helsinki. Permission to carry out the study was obtained from the Institutional Review Board (IRB) of the Second Affiliated Hospital of Nanchang University.

Tissue Samples. Tissue samples were collected from 120 patients with ovarian epithelial cell carcinoma who underwent surgical resection at the Second Affiliated Hospital of Nanchang University between 1998 and 2008 (age range 36-68 years, median 49 years). All patients were histopathologically diagnosed based on clinical protocols, and none of them received presurgery chemotherapy or immunotherapy. Of the 120 patients, 41 patients (at stage I + II) underwent hysterectomy + bilateral oophorectomy + omentum resection + appendectomy + pelvic lymph node dissection; 79 patients with advanced ovarian cancer (stage III + IV) underwent cytoreductive surgery, pelvic lymph node dissection, or pelvic lymph node biopsy; 37 patients had lymphatic metastasis and 70 patients had evident ascites. The histological results revealed that 77 patients had serous carcinoma and 43 patients had mucinous carcinoma; by pathological grading, 17 tumors were well differentiated, 40 were moderately differentiated, and 63 were poorly differentiated.

In this study, 30 patients with benign ovarian cystadenoma (14 serous cystadenoma and 16 mucinous cystadenoma) who underwent a tumor stripping operation or unilateral salpingo-oophorectomy were selected (age range 23-46 years, median 35 years). Another 30 patients (with uterine fibroids, adenomyosis, or other nonovarian diseases) who underwent hysterectomy + bilateral or unilateral oophorectomy were selected as the control group (age range 46-69 years, median 58 years).

In the ovarian epithelial cell carcinoma group, 44 patients received combination chemotherapy of cisplatin + adriamycin + cyclophosphamide and 64 patients received carboplatin + paclitaxel. Twelve patients did not receive any postsurgery chemotherapy (see Table S1 in Supplementary Material available online at http://dx.doi.org/10.1155/2016/5953498).

Immunohistochemistry.
Each tissue was fixed in formalin, embedded in paraffin, and then sectioned and mounted on glass slides. After dewaxing in xylene and dehydration in graded alcohol, endogenous peroxidase activity was blocked with 3% hydrogen peroxide for 10 min. The sections were then subjected to antigen retrieval in a microwave oven at 700 W for 20 min in 10 mmol/L citrate buffer solution (pH 6.0). After that, 10% goat serum albumin was applied for 20 min. Overnight incubation was carried out at 4 °C with the following primary antibodies: rabbit polyclonal Notch3 (1:50 dilution; Santa Cruz Biotechnology, Santa Cruz, CA, USA) and p70S6K (1:50 dilution; Cell Signaling Technology, Beverly, MA, USA). The sections were then incubated with the appropriate secondary antibodies at room temperature for 60 min and washed in phosphate-buffered saline (PBS). Diaminobenzidine (DAB) was used as the chromogen, and the sections were counterstained with hematoxylin. Samples incubated with PBS instead of primary antibodies were used as negative controls [12].

2.5. Statistical Analysis. SPSS 19.0 software was used for the statistical analysis. The significance of the relationships between Notch3 and pS6 expression and clinicopathological parameters was evaluated using the Wilcoxon and Kruskal-Wallis tests and Spearman's rank correlation. Survival rates were calculated using the Kaplan-Meier method and compared by the log-rank test. Multivariate analysis was used to identify independent prognostic factors for survival using the Cox proportional hazards regression model. p values < 0.05 were considered statistically significant [11][12][13]. An illustrative sketch of this workflow is shown below.

Expression of Notch3 and pS6 in Different Ovarian Tissues. The immunohistochemistry results show that Notch3 was mainly expressed in the cytoplasm and/or nucleus of ovarian epithelial cancer cells, while pS6 was mainly expressed in the cytoplasm (data not shown). As shown in Figure 1 and Table 1, Notch3 protein was detected at different levels in normal ovarian tissue, ovarian cystadenoma, and ovarian epithelial cancer. The positive expression rates of Notch3 in normal ovarian tissue, ovarian cystadenoma, and ovarian epithelial cancer were 16.7% (5/30), 70.0% (21/30), and 91.7% (110/120), respectively. Notch3 expression in ovarian epithelial cancer was significantly higher than in normal ovarian tissue (p < 0.01) and ovarian cystadenoma (p < 0.01), and Notch3 expression in ovarian cystadenoma was much higher than in normal ovarian tissue (p < 0.01).
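As promised above, the survival-analysis workflow can be sketched as follows. This is a minimal illustration, not the authors' actual code (the study used SPSS): the file and column names are hypothetical, and lifelines/scipy are one possible open-source toolchain for the same tests.

```python
import pandas as pd
from scipy import stats
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import multivariate_logrank_test

# Hypothetical per-patient table: survival time in months, death indicator
# (1 = died, 0 = censored), ordinal IHC scores, coexpression level
# (low/moderate/high), and clinical stage coded numerically (1-4).
df = pd.read_csv("ovarian_cohort.csv")  # hypothetical file name

# Spearman's rank correlation between Notch3 and pS6 staining scores.
rho, p_corr = stats.spearmanr(df["notch3_score"], df["ps6_score"])
print(f"Spearman r = {rho:.3f}, p = {p_corr:.4f}")

# Kaplan-Meier survival curves stratified by Notch3/pS6 coexpression level.
kmf = KaplanMeierFitter()
for level, grp in df.groupby("coexpression"):
    kmf.fit(grp["months"], event_observed=grp["died"], label=str(level))
    print(level, kmf.median_survival_time_)

# Log-rank test comparing survival across the three expression groups.
result = multivariate_logrank_test(df["months"], df["coexpression"], df["died"])
print(f"log-rank p = {result.p_value:.4f}")

# Multivariate Cox regression to identify independent prognostic factors.
cph = CoxPHFitter()
cph.fit(df[["months", "died", "notch3_score", "ps6_score", "stage"]],
        duration_col="months", event_col="died")
cph.print_summary()
```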
The correlation analysis of Notch3 expression and pS6 indicated a positive correlation between these two proteins in ovarian epithelial cancer ( = 0.668, < 0.01) (Tables 3 and 4). Survival Analysis of Notch3 and pS6 Expression. A followup survey was performed on 64 patients with ovarian epithelial cancer who had received chemotherapy (carboplatin and paclitaxel) after surgery.Of these 64 patients, 46 patients died and 18 patients were censored or truncated.The shortest and longest survival times for these patients were 1 month and 102 months (with an average of 35.16 months), and the accumulated 1-to 5-year survival rates of the patients were 0.55, 0.36, 0.36, 0.28, and 0.21, respectively (Figure S1). As ascites is a key finding in cancer, we analyzed the relationship between the coexpression of Notch3 and pS6 expression and the presence of ascites.Higher expression of Notch3 and pS6 was associated with a higher positive rate of ascites (Table 6); the positive rates of ascites in patients with high, moderate, and low expression of Notch3 and pS6 were 82.1%, 51.9%, and 27.0%, respectively.The 2 test indicated that the expression level of Notch3 and pS6 has a significant positive correlation with ascites in these groups ( 2 = 28.448, < 0.01). Discussion The Notch signaling cascade is critical for cell proliferation, differentiation, development, and homeostasis [13], and deregulated Notch signaling is found in various diseases (e.g., T-cell leukemia, breast cancer, prostate cancer, colorectal cancer and lung cancer, and central nervous system malignancies) [14].However, the mechanism of its regulation in ovarian cancer is unclear. In our study, Notch3 expression in ovarian epithelial cancer was significantly higher than in benign cystadenoma and normal ovarian tissues ( < 0.01, Table 1) and was associated with clinical stage, pathological grading, histologic type, lymph node metastasis, and ascites ( < 0.01 or < 0.05), suggesting that the Notch signaling pathway is in an activated state and probably plays an important role in the development of ovarian epithelial cancer [1,8,12,13,15,16]. 
It is known that cancer occurrence is a comprehensive consequence of disorders in multiple signal transduction pathways [9]. It has been shown that the PI3K/AKT signaling pathway is a key downstream mediator of Notch signaling; when Notch ligands activate the Notch signaling pathway, mTOR activates the downstream effectors S6K and eukaryotic translation initiation factor 4E binding protein 1 (4EBP1). Activated S6K phosphorylates the ribosomal protein S6 (pS6) and enhances the synthesis of the translation regulator p4EBP1 to regulate protein synthesis [5,17,18]. Therefore, Notch3 and pS6 expression play important roles in PI3K/AKT/mTOR signaling and ovarian epithelial cancer development. Our data also indicate a strong positive correlation between Notch3 expression and pS6 expression (r = 0.668, p < 0.01; Table 3). In our follow-up survey of 64 patients with ovarian epithelial cancer (Table 4), the patients with high Notch3 and pS6 expression survived for an average of only 12.3 months, while patients with moderate and low Notch3 and pS6 expression survived for 16.8 months and 81.9 months, respectively (χ² = 41.479, p < 0.01). Clinical stage (p < 0.05) and Notch3 expression (p < 0.01) were more important than other clinicopathological features (Table 5). In addition, the occurrence of ascites in patients with a high level of Notch3 and pS6 expression was significantly higher than in the other groups, suggesting that a high level of Notch3 and pS6 expression may be associated with peritoneal implantation and spreading (Table 6).

In summary, although some studies have indicated that Notch3 or pS6 alone could be used as an indicator of cancer development and prognosis [5,11,19], our results indicate that Notch3 and pS6 together have a strong relationship with the clinicopathological features of ovarian epithelial cancer and overall patient survival. However, Notch3 is not the only protein upstream of PI3K/AKT/mTOR signaling, and pS6 is not the only effector of PI3K/AKT/mTOR signaling [20,21]. Moreover, the association analysis of Notch3 and pS6 (Table 3) indicated that five pS6-negative patients expressed moderate levels of Notch3 (4.2%), and three Notch3-negative patients expressed moderate levels of pS6 (2.5%). Therefore, the combined assessment of Notch3 and pS6 expression is a better choice of prognostic biomarker for overall survival in ovarian epithelial cancer than Notch3 or pS6 alone.

Figure 1: Evaluation of the protein expression of Notch3 and pS6 in normal ovarian tissue, ovarian cystadenoma, and ovarian epithelial cancer using immunohistochemistry.

Table 1: The protein expression of Notch3 and pS6 in normal ovarian tissue, ovarian cystadenoma, and ovarian epithelial cancer.

Table 2: Correlation between the protein expression of Notch3 and pS6 and clinicopathological parameters in patients with ovarian epithelial cancer.

Table 3: Association between the expression of Notch3 and pS6.

Table 4: The survival distribution of patients with different Notch3 and pS6 expression.

Table 5: Multiple Cox regression analysis of patients with ovarian epithelial cancer.

Table 6: Relationship between the coexpression of Notch3 and pS6 and ascites.
Dyspnea in Pulmonary Arterial Hypertension

Dyspnea is a complex sensation involving the interaction of physiological, psychological, social, and environmental factors. Dyspnea in general is common across cardiovascular and respiratory conditions, and it is often difficult to clinically differentiate the exact cause of dyspnea in patients with heart or lung disease. Pulmonary hypertension in the absence of heart or lung disease, a condition called pulmonary arterial hypertension (PAH), is due to endothelial dysfunction and remodelling of small pulmonary arteries. Progressive dyspnea on exertion is a cardinal sign of PAH, which is often first diagnosed at an advanced stage. Improved understanding of the pathogenic mechanisms underlying PAH and the related dyspnea should translate into new treatment options for symptom control and to prevent disease progression. This chapter reviews the current understanding of the etiology and pathogenesis of PAH and recent advances in the management of this debilitating condition.

PAH forms a distinct subgroup of pulmonary hypertension. PAH incorporates a number of different groups including familial/heritable PAH, idiopathic PAH and PAH associated with connective tissue disease, congenital heart disease, portal hypertension, or human immunodeficiency virus (HIV) infection. Clinical severity of PAH is expressed in World Health Organisation (WHO) functional classes (FC), which mainly describe the severity of dyspnea experienced by the patient (Barst et al., 2004) (Table 2). Assessment of severity is important as it guides clinical management and helps to determine prognosis (D'Alonzo et al., 1991). In untreated patients with PAH, historical data showed a median survival of 6 years for WHO-FC I and II, 2.5 years for WHO-FC III, and just 6 months for WHO-FC IV (D'Alonzo et al., 1991). Extremes of age (<14 yrs or >65 yrs), falling exercise capacity, syncope, haemoptysis and signs of RV failure all confer a worse prognosis in PAH. Patients with PAH in WHO-FC III or IV benefit from specific disease-modifying treatments, and data to support improved outcomes from treatment of earlier stages of PAH are emerging.

Table 2. WHO classification of functional status of patients with pulmonary hypertension

Class I: Patients with pulmonary hypertension but without resulting limitation of physical activity. Ordinary physical activity does not cause dyspnea or fatigue, chest pain or near syncope.
Class II: Patients with pulmonary hypertension resulting in slight limitation of physical activity. They are comfortable at rest. Ordinary physical activity causes undue dyspnea or fatigue, chest pain or near syncope.
Class III: Patients with pulmonary hypertension resulting in marked limitation of physical activity. They are comfortable at rest. Less than ordinary activity causes undue dyspnea or fatigue, chest pain or near syncope.
Class IV: Patients with pulmonary hypertension with inability to carry out any physical activity without symptoms. These patients manifest signs of right heart failure. Dyspnea and/or fatigue may even be present at rest. Discomfort is increased by any physical activity.
Dyspnea and exercise intolerance in pulmonary arterial hypertension

Dyspnea is a complex sensation comprising at least three distinct sensations: air hunger, work/effort, and chest tightness. Dyspnea on exertion is a hallmark of PAH. The mechanism of dyspnea in pulmonary hypertension is complex and depends on the underlying condition and co-morbidities. It has been hypothesised that in pulmonary hypertension associated with pulmonary thromboembolism, pressure receptors or C fibers in the pulmonary vasculature or right atrium mediate the sensation of dyspnea (Manning & Schwartzstein, 1995). This mechanism may also operate in PAH, since the severity of dyspnea in these patients is disproportionate to the impairment of left ventricular function, respiratory mechanics and gas exchange. In PAH, dyspnea on exertion is usually associated with little or no abnormality in lung mechanics measured at rest (e.g., normal spirometry and lung volumes), while lung gas exchange may be abnormal (e.g., reduced diffusing lung capacity for carbon monoxide, DLco) (Chandra et al., 2010). Therefore, it is likely that exertional dyspnea in PAH results from complex interactions of signals from the central nervous system (i.e. autonomic centers in the brain stem and the motor cortex) and receptors in the upper airway, lungs, right atrium, pulmonary vessels and chest wall (Manning et al., 1995; O'Donnell et al., 2009). Cardiopulmonary exercise testing (CPET), which measures pulmonary gas exchange during exercise, demonstrates significant oxygen transport abnormalities, such as a decrease in peak oxygen uptake (V'O2), a decreased slope of the V'O2-work rate relationship and a low lactate threshold (Palange et al., 2007). In addition, patients with PAH often have a low V'E and a normal breathing reserve at peak exercise. This reflects the fixed high physiological dead space consequent to reduced pulmonary perfusion. The alveolar-arterial O2 difference is often widened during exercise; a right-to-left shunt may contribute to arterial hypoxemia during exercise in a proportion of patients with a co-existent patent foramen ovale. Excessive V'E at low absolute work rates may also reflect the influence of a premature lactic acidemia. Finally, the breathing pattern tends to be more rapid and shallow than normal; this pattern is not explained by restrictive mechanics and may result from activation of vagally innervated mechanoreceptors in the right atrium, pulmonary vasculature and pulmonary interstitium. Importantly, with CPET it is possible to detect an abnormal increase in the exercise ventilatory response relative to carbon dioxide output (V'E/V'CO2) that is associated with a proportional and sustained reduction in end-tidal carbon dioxide partial pressure (PETCO2) (Riley et al., 2000). The degree of ventilatory and gas exchange inefficiency during exercise (i.e., the increase in V'E/V'CO2 and the drop in PETCO2) correlates with the severity of the disease, and the level of PETCO2 reduction may be a useful non-invasive screening tool for the selection of PAH patients for right heart catheterization (Yasunoby et al., 2005). Interestingly, differences in ventilatory and gas exchange adaptations to cycling and walking exercise have been described in PAH; walking, in particular, is more severely limited by a high ventilatory response, arterial O2 desaturation and dyspnea sensation compared to cycling (Valli et al., 2008).
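For illustration, the ventilatory efficiency slope mentioned above is commonly derived by fitting a straight line to V'E against V'CO2. The following is a minimal sketch assuming averaged breath-by-breath CPET measurements below the ventilatory compensation point; the array values are made-up placeholders, not patient data.

```python
import numpy as np

# Hypothetical CPET measurements during incremental exercise:
# carbon dioxide output V'CO2 (L/min) and minute ventilation V'E (L/min).
vco2 = np.array([0.5, 0.8, 1.1, 1.4, 1.7])     # placeholder values
ve = np.array([22.0, 34.0, 46.0, 58.0, 70.0])  # placeholder values

# The V'E/V'CO2 slope is the gradient of a least-squares line fit of
# V'E against V'CO2; markedly elevated values are typical of PAH.
slope, intercept = np.polyfit(vco2, ve, 1)
print(f"V'E/V'CO2 slope = {slope:.1f}")  # ~40 for these placeholder data
```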
Right heart catheterization with measurement of pulmonary artery pressure and cardiac output has traditionally been used to assess the severity of PAH and the response to interventions. However, catheterization is invasive and cumbersome, largely restricted to tertiary referral centers and not suitable for regular follow-up. Therefore, exercise testing has been used in its place as a surrogate marker to monitor disease severity and prognosis. Wensel et al. studied the prognostic value of peak V'O2 in patients with PAH and reported that patients with peak V'O2 < 10.4 ml/min/kg have a 50% risk of early death at 1 year and 85% at 2 years (Wensel et al., 2002). Since CPET is not always available, field walking tests have been utilized to assess the degree of exercise intolerance in PAH (Steel, 1996). The degree of exercise impairment is judged by the distance covered during a fixed time period. The six minute walking test (6MWT), in which patients are free to choose the most convenient walking speed, is the most popular walking test (American Thoracic Society [ATS], 2002). The distance achieved in the 6MWT has been used as a primary endpoint in most randomised controlled trials of modern PAH therapies (Galiè et al., 2002). The 6MWT has good reproducibility, and some studies have demonstrated a significant correlation between the 6MWT distance and peak oxygen uptake (V'O2 peak) measured during standard incremental protocols (Myamoto et al., 2000). Walk distance and oxygen desaturation during the 6MWT in patients with PAH relate well to V'O2 peak and appear to have prognostic value (Myamoto et al., 2000; Paciocco et al., 2001). Unexplained dyspnea on exertion is the main symptom in the early stages of PAH and causes exercise intolerance. In PAH, exercise intolerance correlates with the severity of the disease (Sun et al., 2002). Furthermore, the reduction of pulmonary vascular resistance and/or pulmonary arterial pressure with treatment (see below) is paralleled by changes in WHO FC and improvement in dyspnea on exertion as measured by the 6MWT.
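The prognostic cut-off reported by Wensel et al. lends itself to a simple illustration. This is a sketch only: the threshold and risk figures are those quoted above, and the function is not a validated clinical decision rule.

```python
WENSEL_THRESHOLD = 10.4  # peak V'O2, ml/min/kg (Wensel et al., 2002)

def high_risk_by_peak_vo2(vo2_peak: float) -> bool:
    """True if peak V'O2 falls below the Wensel et al. cut-off, which the
    text above associates with ~50% 1-year and ~85% 2-year mortality.
    Illustrative only; not a clinical tool."""
    return vo2_peak < WENSEL_THRESHOLD

print(high_risk_by_peak_vo2(9.8))   # True
print(high_risk_by_peak_vo2(14.2))  # False
```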
Calcium channel blockers reduce contractility of pulmonary arterial smooth muscle, thereby reducing pulmonary vascular resistance. Prostacyclin analogues induce pulmonary vasodilation by supplementing inadequate endothelial prostacyclin caused by under-activity of endothelial prostacyclin synthase. Endothelin receptor antagonists block the vasoconstrictor effects of endothelin on pulmonary smooth muscle. Phosphodiesterase 5 (PDE5) inhibitors promote the vasodilation activity of the nitric oxide pathway by reducing conversion of cyclic guanosine monophosphate (a nitric oxide second messenger) to 5′-guanosine monophosphate (an inactive product). By reducing pulmonary arterial pressures and resistance, these treatments reduce dyspnea and improve exercise tolerance in patients with PAH.

Calcium channel blockers

High-dose calcium channel blockers only showed benefit for symptom relief in about 10% of patients with idiopathic PAH, who achieved reduction in their mean pulmonary arterial pressure and pulmonary vascular resistance by > 20% as measured by Swan-Ganz catheterization during acute vasodilator challenge. In the absence of measurable improvement in pulmonary arterial pressure, calcium channel blockers are not indicated. The criteria for vasoreactivity have recently changed. Only patients whose mean pulmonary artery pressure falls by > 10 mm Hg to < 40 mm Hg, with an unchanged or increased cardiac output, when challenged with adenosine, epoprostenol, or nitric oxide are considered "vasoreactive". Of these, only 50% may have a sustained response to calcium channel blockers (Rich & Brundage, 1987; Sitbon et al., 2005) and can be treated with dihydropyridine calcium channel blockers (i.e. nifedipine, felodipine or amlodipine) or diltiazem. Verapamil is contraindicated because of its negative inotropic effect.
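The vasoreactivity criteria quoted above amount to a simple decision rule. As a minimal sketch (the function and variable names are mine, not from the source), it could be encoded as:

```python
# Minimal sketch of the quoted vasoreactivity rule: mean PAP must fall
# by > 10 mmHg to an absolute value < 40 mmHg, with unchanged or
# increased cardiac output during the acute vasodilator challenge.
def is_vasoreactive(mpap_baseline, mpap_challenge, co_baseline, co_challenge):
    """True if the acute vasodilator response meets the quoted criteria."""
    pressure_drop = mpap_baseline - mpap_challenge
    return (pressure_drop > 10.0
            and mpap_challenge < 40.0
            and co_challenge >= co_baseline)

# Example: mPAP falls from 52 to 38 mmHg, cardiac output 4.1 -> 4.3 L/min
print(is_vasoreactive(52.0, 38.0, 4.1, 4.3))  # True
```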
Epoprostenol is administered by continuous intravenous infusion through an indwelling central line.Unfortunately, it is expensive, inconvenient (patients need to carry a continuous infusion pump) and has significant dose-dependent adverse effects including flushing, headache, jaw and lower extremity muscular pain, diarrhea, nausea, and rash.Patients with PAH may live at distance from the nearest tertiary care facility and a well-established setting for ensuring continuous supply and supervision is critical, as sudden interruption of the epoprostenol infusion may cause rebound severe PAH and death.Despite the side effects and inconveniences with administration, epoprostenol has solid clinical evidence of efficacy in PAH.It has been shown to reduce dyspnea, improve exercise capacity, quality of life and survival in patients with PAH (Badesch et al., 2000;Barst et al., 1996;McLaughlin et al., 2002;Sitbon et al., 2002).Three-year survival of patients with PAH treated with epoprostenol was 63% (McLaughlin et al., 2002;Sitbon et al., 2002).Most patients experienced optimal benefit from epoprostenol at a stable dose of 25 to 40 ng/kg per minute, after incremental increases over the course of 6 -12 months from an initial dosage of 2 to 6 ng/kg per minute. Treprostinil is a prostacyclin analogue that is stable at room temperature and has a longer half-life than epoprostenol (3-4 hours).This allows it to be given intravenously or via a small subcutaneous catheter with a continuous pump.Treatment by either route improves the 6MWT distance in patients with PAH in WHO FC III or IV (Simonneau et al., 2002;Tapson et al., 2006).Three-year survival of patients with idiopathic PAH treated with subcutaneous treprostinil monotherapy has been reported as 71% (Barst et al., 2006a).Whilst short-term efficacy of intravenous treprostinil may equal epoprostenol; comparative survival data are lacking.The adverse effect profile of treprostinil is similar to epoprostenol.Frequent, severe pain at the site of infusion may limit the treprostinil dose that can be administered subcutaneously.This limitation may reduce treprostinil's efficacy because the effect on 6MWT distance is dose-dependent, and higher doses may require intravenous administration.Other clinical trials of treprostinil inhaled and oral formulations are ongoing.Some patients treated with epoprostenol may be switched to intravenous treprostinil with maintenance of 6MWT distance; although a larger dose of treprostinil is required (Gomberg-Maitland et al., 2005). www.intechopen.com Iloprost is an inhaled prostacyclin analogue that was shown to improve dyspnea, exercise capacity and hemodynamics in patients with PAH.In a randomized, placebo-controlled, 12week study, iloprost produced a placebo-corrected increase in 6MWT distance of 36 m in 207 patients with symptomatic idiopathic PAH, PAH associated with connective tissue disease or appetite suppressants, or pulmonary hypertension related to inoperable chronic thromboembolic disease (Olschewski et al., 2002).Long-term maintenance of improved exercise capacity and hemodynamics was observed with iloprost use (Hoeper et al., 2000).Adverse effects of iloprost are similar to other prostacyclin analogues and include flushing, headache, and cough.A major downside for patients is that iloprost's short duration of action necessitates frequent 10-minute inhalations, 6 to 9 times per day. 
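The weight-based prostacyclin dosing quoted above rests on straightforward pump arithmetic. The sketch below is illustrative only: the drug concentration is a hypothetical example, and actual preparations and institutional protocols vary.

```python
# Minimal sketch of the weight-based infusion arithmetic behind the quoted
# dose ranges (2-6 ng/kg/min initially, titrated to 25-40 ng/kg/min).
# The concentration used here is an assumption, not a recommendation.
def infusion_rate_ml_per_h(dose_ng_kg_min, weight_kg, conc_ng_ml):
    """Convert a ng/kg/min dose into a pump rate in ml/h."""
    return dose_ng_kg_min * weight_kg * 60.0 / conc_ng_ml

# 70 kg patient, 4 ng/kg/min, assumed concentration of 10,000 ng/ml:
print(f"{infusion_rate_ml_per_h(4.0, 70.0, 10_000.0):.2f} ml/h")  # 1.68 ml/h
```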
Endothelin receptor antagonists (ERA)

By binding to endothelin receptors A and B, endothelin-1 triggers pulmonary vasoconstriction and stimulates vascular smooth muscle and fibroblast proliferation. Endothelin-1 levels are increased in PAH and correlate with disease severity, suggesting that blockade of endothelin-1 should have beneficial effects. Bosentan is an orally active, dual ERA that improves exercise capacity, quality of life, hemodynamics, and time to clinical worsening in PAH (Channick et al., 2001; Rubin et al., 2002). Two-year survival of patients with idiopathic PAH in whom bosentan was used as first-line therapy was 87% (McLaughlin et al., 2005; Provencher et al., 2006). Bosentan is currently approved for treatment of PAH patients in WHO FC III or IV. Adverse effects of bosentan include flushing, edema, nasal congestion, mild anemia, and teratogenicity. Dose-dependent elevation of liver transaminases occurs in about 10% of patients on bosentan, requiring monthly monitoring of liver function.

Ambrisentan and sitaxentan are ERAs with relative selectivity for the endothelin receptor subtype A. Treatment with sitaxentan was efficacious for patients with PAH, with a low incidence of liver toxicity in initial reports (Barst et al., 2006b; Benza et al., 2008). However, it was recently removed from the market due to case reports of severe hepatitis and liver toxicity. Ambrisentan is another ERA with a safer liver toxicity profile. It has been shown to improve symptoms, exercise capacity, and hemodynamics (Galiè et al., 2005a). Adverse effects of ambrisentan include flushing, edema, nasal congestion, and teratogenicity. Although it is associated with a low incidence of liver enzyme elevations, monthly liver function monitoring is still required (McGoon et al., 2009).

Phosphodiesterase 5 Inhibitors

Sildenafil, originally commercialized for erectile dysfunction, is a potent, highly specific inhibitor of PDE5 that has been shown to improve symptoms and functional capacity in PAH patients (Galiè et al., 2005b). Adverse effects associated with sildenafil include headache, flushing, dyspepsia, nasal congestion, and epistaxis. Nitrates are contraindicated in patients taking PDE5 inhibitors because the additive effects of the drugs can cause life-threatening hypotension. Tadalafil, a long-acting PDE5 inhibitor, is the most recent oral agent for treatment of PAH (Rosenzweig, 2010). The Pulmonary Arterial Hypertension and Response to Tadalafil (PHIRST) clinical trial examined the efficacy and tolerability of tadalafil for the treatment of PAH over a period of 16 weeks (Galiè et al., 2009a). Tadalafil 40 mg showed significant improvement over placebo for six of eight SF-36 domains and EQ-5D index scores. Also, the tadalafil 40-mg group showed significant improvement over placebo on the 6MWT distance (p < 0.001), but no clear relationship was found between 6MWT distance and health-related quality of life (HRQoL). These results suggest that tadalafil may significantly improve HRQoL and exercise capacity in patients with PAH.
Combination treatments

Combinations of disease-modifying agents of various classes seem a logical next step in PAH management and are becoming standard care in many PAH centres. Clinical trial evidence for combination therapy is encouraging (Humbert et al., 2004a; McLaughlin et al., 2006; O'Callaghan & Gaine, 2007; Simonneau et al., 2008). The relatively small BREATHE-2 study (Humbert et al., 2004a) showed a trend to haemodynamic improvement with combination epoprostenol-bosentan as compared to epoprostenol alone. The STEP-1 study (Simonneau et al., 2008) addressed the safety and efficacy of 12 weeks of therapy with inhaled iloprost plus bosentan and reported a non-significant increase of 26 m in the post-inhalation 6MWT distance (p = 0.051). There was no improvement in pre-inhalation haemodynamics in the iloprost group after 12 weeks of treatment, but time to clinical worsening was significantly prolonged in the iloprost group (0 events versus 5 events in the placebo group; p = 0.02). In contrast, the COMBI trial, which also studied the benefits of inhaled iloprost added to bosentan, was stopped prematurely after a planned interim analysis failed to show an effect on 6MWT distance or time to clinical worsening (Hoeper et al., 2006). The TRIUMPH trial studied the effects of inhaled treprostinil in patients already treated with bosentan or sildenafil (McLaughlin et al., 2010). The primary end-point, change in 6MWT distance at peak exposure, improved by 20 m compared with placebo (p < 0.0006). At trough exposure, i.e. more than 4 hours post-inhalation, the difference was 14 m in favour of the treprostinil group (p < 0.01). There were no significant differences in Borg dyspnea index, functional class and time to clinical worsening. The PACES trial addressed the effects of adding sildenafil to epoprostenol in 267 patients with PAH (Simonneau et al., 2008) and showed significant improvements after 12 weeks in 6MWT distance and time to clinical worsening.

Additional data are available for the combination of ERA and PDE5 inhibitors. In the subgroup of patients enrolled in the EARLY study (Galiè et al., 2008) (bosentan in WHO FC II PAH patients already on treatment with sildenafil), the haemodynamic effect of the addition of bosentan was comparable with that achieved in patients without background sildenafil treatment. A pharmacokinetic interaction has been described between bosentan and sildenafil, which act as an inducer and an inhibitor of cytochrome P450 CYP3A4, respectively. Co-administration of both results in a decline of sildenafil and an increase in bosentan plasma levels (Paul et al., 2005). So far there is no indication that these interactions are associated with reduced safety (Humbert et al., 2007), but whether the clinical efficacy of sildenafil is significantly reduced is still controversial. No pharmacokinetic interactions have been reported between sildenafil and the two other available ERAs, sitaxentan and ambrisentan. In the PHIRST study (Galiè et al., 2009a) the combination of tadalafil and bosentan resulted in an improvement of exercise capacity of borderline statistical significance. In summary, whether the response to monotherapy is sufficient or not can only be decided on an individual basis. Combination therapy is recommended for PAH patients not responding adequately to monotherapy and ideally should be instituted by experienced centres (Fig. 2).
Palliative and supportive treatments for residual dyspnea in treated progressive pulmonary hypertension

Since PAH is not a curable disease, many patients inevitably progress to WHO FC IV with severe dyspnea. This necessitates additional palliative and supportive treatments. Exercise training, atrial septostomy and opioids are some of the interventions that may have a role in improving symptoms and exercise tolerance in patients with progressive PAH. Exercise in the form of respiratory and physical training appears efficacious as part of the management of PAH. When superimposed on an optimal stable drug regimen, 15 weeks of respiratory and physical training led to an average increase of 111 m in 6MWT distance, in addition to improvements in other measures of exercise tolerance and quality of life (Mereles et al., 2006). This is a major benefit when put in the perspective of expensive PAH medications that may deliver more modest 20-40 m improvements in 6MWT distance.

Atrial septostomy performed by graded balloon dilatation may be suitable for selected patients with severe PAH. In patients with medically treated severe progressive PAH, atrial septostomy improved clinical symptoms and dyspnea, cardiac index, exercise endurance and systemic oxygen transport (Reichenberger et al., 2003; Sandoval et al., 1998).

Transplantation is an important option for selected PAH patients. Up to 25% of patients with PAH fail to improve on disease-specific therapy, and the prognosis of patients who remain in WHO FC III or IV is poor. International guidelines to aid referral and listing have been published by the International Society for Heart and Lung Transplantation (Orens et al., 2006). Both heart-lung and double-lung transplantation have been performed for PAH, and each centre has developed its own strategy for the choice of the type of transplantation in the individual patient. However, due to the shortage of donor organs, most patients are considered for double-lung transplantation. While right ventricular afterload is immediately reduced after double-lung transplantation, right ventricular systolic and left ventricular diastolic function do not improve immediately, and hemodynamic instability is a common problem in the early post-operative period. The overall 5-year survival following transplantation for PAH is 45-50%, with evidence of sustained improvement in dyspnea and quality of life (Trulock et al., 2006).
In parallel with attempts to treat the underlying pathology causing dyspnea, the sensation of breathlessness itself must be ameliorated. For many patients there comes a point when there are no further identifiable reversible components of PAH, and the treatment focus needs to move to reducing the subjective sensation of breathlessness (Davis, 1994). Like pain, dyspnea has a sensory and an affective dimension. Therefore, treatment strategies in dyspnea should be similar to those used in pain. Recent neuroimaging studies suggest that neural pathways involved in pain and dyspnea sensation may be shared and, therefore, similar neurophysiological and psychological approaches used to understand and manage pain can be applied to dyspnea (Nishino, 2011). Previous randomised controlled trials have reported the effectiveness of sustained-release morphine in patients with refractory dyspnea, including those with severe COPD (Abernethy et al., 2003; Poole et al., 1998) and chronic heart failure (Johnson et al., 2002). Currently there are no data on the use of opioids to relieve dyspnea in patients with progressive advanced PAH, and clinical trials are ongoing. Effective and predictable treatment of dyspnea remains elusive, and a better understanding of the pathophysiology and neurophysiology of dyspnea may lead to more effective treatments.

Conclusion

Etiological diagnosis of pulmonary hypertension and assessment of WHO functional class are critical for management. Despite recent advances in the understanding and treatment of PAH, it remains a progressive and incurable disease, with dyspnea and exercise intolerance being the major causes of distress to sufferers. More effective palliation of dyspnea in patients with PAH depends on a better understanding of its mechanisms. More randomised clinical trial evidence on combination and palliative treatments is needed to improve management of dyspnea in patients with advanced PAH.

Fig. 1. Therapeutic Targets for Pulmonary Arterial Hypertension. Three major pathways involved in abnormal proliferation and contraction of the smooth-muscle cells of the pulmonary artery in patients with pulmonary arterial hypertension are shown. These pathways correspond to important therapeutic targets in this condition and play a role in determining which of four classes of drugs -- endothelin-receptor antagonists, nitric oxide, PDE5 inhibitors, and prostacyclin derivatives -- will be used. At the top of the figure, a transverse section of a small pulmonary artery (<500 μm in diameter) from a patient with severe pulmonary arterial hypertension shows intimal proliferation and marked medial hypertrophy. Dysfunctional pulmonary-artery endothelial cells (blue) have decreased production of prostacyclin and endogenous nitric oxide, with an increased production of endothelin-1 -- a condition promoting vasoconstriction and proliferation of smooth-muscle cells in the pulmonary arteries (red). Therapies interfere with specific targets in smooth-muscle cells in the pulmonary arteries. In addition to their actions on smooth-muscle cells, prostacyclin derivatives and nitric oxide have several other properties, including antiplatelet effects. Plus signs denote an increase in the intracellular concentration; minus signs denote blockage of a receptor, inhibition of an enzyme, or a decrease in the intracellular concentration; cGMP denotes cyclic guanosine monophosphate.
2017-08-15T04:58:10.583Z
2011-12-09T00:00:00.000
{ "year": 2011, "sha1": "b34ad19889c9793862a4326c7e24681fa9888f75", "oa_license": "CCBY", "oa_url": "https://www.intechopen.com/citation-pdf-url/24757", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "e0d69825660cba4c513b2accb90def367135ae74", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
232255787
pes2o/s2orc
v3-fos-license
Water drinking test: The second innings scorecard

Introduction

Even though both intraocular pressure (IOP) fluctuation and peak IOP have been demonstrated to be significant risk factors for glaucoma progression, the latter has been shown to be a better predictor of disease progression. Peak IOP is also a more practical tool for guiding management protocols. [1-3] Continuous 24 h IOP monitoring arguably provides the best measure of an individual's IOP, but is logistically not possible in clinical settings. A diurnal variation of IOP over 24 h provides a better understanding of an individual's IOP profile, including mean and peak IOP as well as IOP fluctuation. [4,5] All of the currently available methods of recording circadian IOP variations are resource and time intensive, and usually not feasible in routine glaucoma practice. It is for bridging this lacuna that the water drinking test (WDT) has seen a resurgence in glaucoma assessment and management. The WDT was initially used as a diagnostic test for glaucoma, and its use fell out of favor, understandably, because of its low sensitivity, specificity, and diagnostic value. [6,7] However, given that the WDT measurements correlate well with diurnal tension curves, it may be considered a more cost effective and efficient surrogate for the more time-consuming IOP phasing. The WDT, therefore, has seen a recent revival as a "stress test" to assess the capacitance of the aqueous outflow, an indirect tool to measure aqueous outflow facility, along with peak IOP and IOP fluctuation. [8]

How to Do the Test?

Ideally, the patient should not have any liquids for 2 h before the test is performed, to offset any effect of previous fluid intake on the IOP measurements. [9] After measuring the patient's baseline IOP, the patient is asked to drink water over 5-10 min. Various authors have recommended various measures: some use a fixed volume of water, while others prefer to use a volume titrated to body weight.
Fixed volume

Kerr et al. used 500 ml and 1000 ml of water for the WDT and found that both fluid challenge volumes resulted in a statistically significant rise in IOP. [10] However, they also reported that the mean maximum increase in IOP was less in the 500 ml WDT compared with the 1000 ml WDT. Hence, while the 500 ml fluid challenge can be used in patients who are unable to drink a liter of water, it is not an accurate estimate of the peak diurnal IOP. Susanna et al. have suggested that a fluid volume challenge of 800 ml may be used instead. [8] The chief author (SB) also prefers to use 800 ml of water for routine WDTs.

Volume adjusted to body weight

Kumar et al. used 10 ml/kg body weight of water over 5 min and found that the peak IOP measured during diurnal IOP measurement showed a strong correlation with the peak IOP during the WDT. [11] They, however, also reported that the IOP fluctuation measured by the two tests did not show a good correlation. Proponents of a body weight adjusted fluid volume challenge aim to compensate for the effect of body mass and the expected fluid shift between the intravascular, intracellular, and interstitial compartments. It stands to reason that a fluid challenge of 1000 ml would have a different physiological effect in a subject weighing 50 kg, as compared with one who weighs 100 kg. That said, this difference has not been validated, and most clinician scientists agree that a significant change from baseline IOP may be elicited on fluid challenge. This change is known to correlate with the diurnal IOP peak, but may or may not correlate well with fluctuation in the case of a challenge volume of 500 ml or less. However, in the absence of any consensus about the predictive value of the WDT with the various volumes used, most recent studies and clinics prefer to use either 800 ml or 10 ml/kg of body weight for the fluid challenge. The IOP is measured three to four times at 15 min intervals after drinking water. The maximum measured IOP is the peak IOP (a minimal sketch of this bookkeeping is given further below). This increase in IOP may be sustained or may recover quickly; this may be considered indicative of the outflow facility reserve. [12]

A word of caution before you decide on a WDT

The WDT is contraindicated in patients who are on fluid restriction because of systemic conditions such as cardiac and/or renal issues.

Mechanism of IOP Elevation During WDT

While the mechanism of IOP increase during the WDT is unclear, there are several postulates. An increase in episcleral venous pressure (EVP) and a change in choroidal thickness probably contribute most to the change in IOP from baseline, but the literature suggests that a decrease in outflow resistance and a centrally mediated increase in IOP may also contribute to the WDT response.
• Decrease in outflow resistance. [13]
• Potential centrally mediated mechanism. [14]
• Increased EVP (measured to be more than twice the baseline within 10 min of the water drinking, and maintained for at least 90 min). [15]
• Choroidal expansion (a measured increase of more than 20% in choroidal thickness during the WDT in eyes with open angles; may be more in eyes with angle closure). [16]
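As promised above, here is a minimal sketch of the WDT bookkeeping: the body-weight-adjusted challenge volume and the peak IOP and fluctuation read-outs. Function and variable names are illustrative, not from the source.

```python
# Minimal sketch of the WDT protocol read-outs described above: a 10 ml/kg
# fluid challenge and IOP measured at 15 min intervals after drinking.
def wdt_summary(weight_kg, baseline_iop, post_iops):
    """Return challenge volume, peak IOP, and IOP rise from baseline."""
    volume_ml = 10.0 * weight_kg                  # body-weight-adjusted challenge
    peak = max([baseline_iop] + list(post_iops))  # highest measured IOP
    fluctuation = peak - baseline_iop             # rise from baseline
    return volume_ml, peak, fluctuation

# Example: 65 kg patient, baseline 18 mmHg, IOP at 15/30/45/60 min
vol, peak, fluct = wdt_summary(65.0, 18.0, [22.0, 25.0, 23.0, 21.0])
print(f"drink {vol:.0f} ml; peak IOP {peak:.0f} mmHg; rise {fluct:.0f} mmHg")
```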
Evidence So Far

Medication versus medication

Antiglaucoma medications that increase outflow facility, such as prostaglandin analogues, result in better IOP control during the WDT than those that decrease aqueous humor inflow, namely beta-blockers and carbonic anhydrase inhibitors. [8] Beta-blockers have been shown to have the worst profile in managing IOP changes compared to the rest of the medications tested. [17] Another study showed that combination therapy had the highest percentage of IOP fluctuations and positive WDT results as compared to the other medications. This could possibly be because combination therapy is needed for patients with advanced glaucoma and extensive trabecular meshwork damage. [18]

Medication versus surgery

Some glaucoma patients continue to deteriorate even after IOP reduction with antiglaucoma medications. Although medications lower the IOP and dampen the diurnal IOP fluctuations, they are not able to compensate for decreased outflow facility in glaucoma patients after a challenge of 1000 ml of water ingestion. [19] The WDT can help detect such patients with compromised outflow facility, and surgery can be offered to them, as it has been shown that patients with medically controlled advanced glaucoma show greater IOP elevation and peak IOP after the WDT than eyes that have undergone trabeculectomy. [20]

Surgery versus surgery

Razeghinejad et al. studied the effect of the WDT after trabeculectomy and tube shunt (Ahmed glaucoma valve) surgery. They noted that IOP started to decline 30 min after the WDT in the trabeculectomy group, while it continued to increase up to 60 min in the tube shunt group. [21] This may have implications regarding the efficacy of tubes in some patients with advanced glaucoma.

Progression in unilateral versus bilateral glaucoma

De Moraes, in a recent prospective study, has shown that the WDT peak is an independent predictor of progression, whereas office-based IOP measurements fail to show a significant association with visual field progression. [22] They found that each mmHg higher WDT peak at baseline increased the risk of progression by 11%. In addition, in patients with bilateral glaucoma, eyes with higher IOP peaks during the WDT have worse visual field damage than their fellow eyes.

Angle closure versus open angle

Razeghinejad and Nowroozzadeh have shown that pharmacologic mydriasis and the WDT produced similar IOP elevation before laser peripheral iridotomy (LPI), but after LPI, IOP elevation was much greater in the WDT group in primary angle-closure suspects. No changes in ocular biometric parameters were seen after LPI and/or pharmacologic mydriasis except for increments in anterior chamber volume after LPI. [23] Arora et al. have shown a decrease in anterior chamber depth after the WDT secondary to a significant increase in choroidal thickness in angle-closure eyes, unlike open angle eyes. [24]

Test-Retest Reproducibility

Hatanaka et al. found that the IOP peaks showed excellent reproducibility, while the reproducibility of fluctuation was only fair. [25] Medina et al., on the other hand, found low levels of agreement among WDTs performed at different times of the day, despite good correlation. [26] The use of the WDT, therefore, like diurnal pressure curves, in the serial monitoring of glaucoma patients requires caution. [27]

The Final Verdict

The WDT, using either 800 ml or 10 ml/kg of body weight for the fluid challenge, may be used as a surrogate to evaluate the aqueous outflow facility, predicting the diurnal IOP peak and the efficacy of surgical and medical management in selected cases of both open-angle and angle-closure glaucoma. The second innings scorecard of the WDT is, of course, significantly better than its performance as a diagnostic tool for glaucoma. Like all other measures used in glaucoma practice, its relevance is also subject to clinical correlation and judicious interpretation.
In the immortal words of Salvador Dali, as true for the WDT as for life itself: "Have no fear of perfection - you'll never reach it."
2021-03-17T03:18:29.869Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "619b2b8332954e3a0bff132475f36fc0a200b938", "oa_license": null, "oa_url": "https://doi.org/10.15713/ins.clever.48", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "619b2b8332954e3a0bff132475f36fc0a200b938", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Environmental Science" ] }
11421071
pes2o/s2orc
v3-fos-license
Increase of Universality in Human Brain during Mental Imagery from Visual Perception

Background

Different complex systems behave in a similar way near their critical points of phase transitions, which leads to the emergence of universal scaling behaviour. Universality indirectly implies a long-range correlation between constituent subsystems. As distributed correlated processing is a hallmark of higher complex cognition, I investigated a measure of universality in the human brain during perception and mental imagery of complex real-life visual objects such as visual art.

Methodology/Principal Findings

A new method was presented to estimate the strength of hidden universal structure in a multivariate data set. In this study, I investigated this method in the electrical activities (electroencephalogram signals) of the human brain during complex cognition. Two broad groups - artists and non-artists - were studied during the encoding (perception) and retrieval (mental imagery) phases of actual paintings. Universal structure was found to be stronger in visual imagery than in visual perception, and this difference was stronger in artists than in non-artists. Further, this effect was found to be largest in the theta band oscillations and over the prefrontal regions bilaterally.

Conclusions/Significance

Phase transition like dynamics was observed in the electrical activities of the human brain during complex cognitive processing, and closeness to phase transition was higher in mental imagery than in real perception. Further, the effect of long-term training on the universal scaling was also demonstrated.

Introduction

It is an accepted notion that the human brain is one of the most complex systems. The brain is complex at all organization levels, spanning from the morphology and activity patterns of the individual unit (i.e. the single neuron) to the formation and dynamics of neuronal assemblies, and finally to the circuitry and ensemble activity of large-scale networks, where each node represents the collective dynamics of millions of neuronal assemblies. The involvement of large-scale and distributed cortical networks in higher complex cognition is supported by many studies using diverse imaging modalities. However, it is further proposed that co-activation of this multitude of brain areas is most likely to be associated with functional co-operation between these areas. In essence, brain regions do not act in isolation; rather, they display large-scale coherent patterns of activity in both space and time (see, for reviews, [1,2,3,4]). Even by a cursory look at the noninvasively obtained large scale electrical brain responses (electroencephalogram, EEG, Fig. 1), one could notice an intricate mixture of order (the presence of strong oscillating components) and disorder (the time-varying nature of the amplitude and frequency components of the oscillations). It is known that the oscillatory, yet transient, dynamics of neuronal assemblies emerges from the dense interaction between excitatory and inhibitory sets of neurons, and, when modeled, they could produce chaotic oscillations [5]; the flexible switching between multiple chaotic attractors was earlier demonstrated in the olfactory bulb of rabbits [6]. Friston [7] has suggested that brain dynamics could be characterized by a series of flexible neuronal transients, where transients represent an essential metric of interaction between neuronal assemblies.
In a similar line, Kelso et al. [8] showed, in a now classic experiment involving bimanual coordination, that a phase transition like phenomenon is observed in the human brain, suggesting the brain is a self-organizing system which operates close to critical points of instability, thereby allowing appropriate flexibility in switching between different dynamical states; this is now known as metastability [2,7]. Two important properties of complex systems close to their critical points are [9,10]: (i) scaling - correlation decays as a power law, C(r) ~ r^(-ξ), where ξ is the correlation length of the entire system, and the system becomes scale-invariant; (ii) universality - different complex systems have similar critical exponents, forming a universality class. This latter property stems from the fact that a system near its critical point is not very sensitive to the nature of the detailed properties of its component subsystems or to the details of the microscopic interactions; instead it depends on the more fundamental characteristics (i.e. symmetry, dimension, path of order propagation) of the system [11]. The universal behaviour and scale invariance properties seem to be present in numerous real-life systems including natural [12,13,14], biological [15,16,17,18], sociological [19] and even political [20] ones, most of which approximately belong to the category of a complex system involving large numbers of interacting subsystems that display the phenomenon of self-organization [21]. In this study, I investigated these two features of criticality in the electrical responses of the human brain during higher complex cognition. Towards this, I presented a new approach based on the cumulative variation amplitude analysis [22] to estimate the strength of the universal scaling structure in the time series of multivariate EEG signals. In particular, I put special emphasis on the comparative analysis of universality during the encoding phase (visual perception) and the retrieval phase (mental imagery).

Results

Multivariate EEG signals were recorded from two broad groups - professional artists and non-artists - under three different conditions: (i) visual perception (looking at a painting), (ii) mental imagery (mentally imagining the painting shown before), and (iii) rest. The duration of each condition was not less than 2 min, and after artefact reduction and removal, I analyzed the first 50 sec of spontaneous EEG signals recorded from 19 scalp locations. Fig. 1 shows an EEG signal recorded at scalp location Pz (midline parietal electrode) from an artist (Vp.483) while she was mentally imagining the painting by Holbein which was shown before (see Materials & Methods). The wavelet transformed signals with different scales a = 32, 18, 10, 5, 2 are shown afterwards. The dominant frequency is not identical to the concept of wavelet scale, but they can be closely related: a higher scale value is associated with a lower frequency, and this fact is evident in the power spectra of the wavelet transformed signals. The center frequencies of the wavelet transformed signals for these scale values roughly correspond to the center frequencies of the five standard frequency bands, namely delta (1-4 Hz), theta (4-8 Hz), alpha (8-12 Hz), beta (12-30 Hz) and gamma (>30 Hz).
Although the individual power spectrum of the wavelet transformed signal for a single scale did contain, in addition to a peak at the center frequency, components from neighbouring frequency bands, one could still, for ease of terminology, associate the wavelet transformed signal predominantly with one of the standard frequency bands. For example, W_x(a = 18, t) represents theta band oscillations, whereas W_x(a = 2, t) represents gamma band oscillations. As discussed earlier, the (real) wavelet transform reveals very local properties of the signal by emphasizing the extrema or discontinuities of the oscillations, so the wavelet transformed signal captures the intrinsic local properties of the dynamics masked by nonstationarity, which cannot be revealed by a standard stationary digital filtering technique applied globally to the signal. For each EEG electrode and for each wavelet scale, I calculated the instantaneous amplitudes m(t) of the wavelet transformed signal as described in Eq. (3) and estimated their pdf, P(y). Next I grouped the pdfs either across different scalp locations within one individual or across individuals at each scalp location. The former explored the universality across brain regions, and the latter explored the universality across individuals.

Task Related Differences

Universality Across Brain Regions. Fig. 2(a)-(c) shows a set of pdfs obtained from 19 EEG signals recorded from a non-artist during resting condition, looking at a painting, and mentally imagining the painting shown before, respectively. The wavelet scale was a = 18. Inspection of the pdfs reveals marked differences among different brain regions for each condition. These discrepancies are not utterly surprising given the underlying functionally segregative behavior of individual brain regions. To test the hypothesis that there is a hidden, possibly universal structure to these time series generated by distributed brain regions, I rescaled each pdf and computed the Kullback-Leibler (KL) divergence measure for the set of rescaled pdfs (see Methods for details). If the rescaled pdfs collapse, i.e. they are scale invariant, the KL measure for the set will be minimal. Here, I found the strongest scale invariance or universal structure during the mental imagery condition, followed by visual perception and the resting condition. The entire analysis was repeated with the chosen five wavelet scales, and Fig. 3 shows the profiles of the Mean-Kullback-Leibler (MKL) divergences averaged over all participants and over all possible combinations of each electrode region for both visual perception and mental imagery. The profiles were plotted after subtracting the MKL values for the resting condition.

Figure 2. Universality across brain regions for all participants. Empirical probability density functions P(y) of the envelope of wavelet transformed coefficients at scale a = 2 for multivariate EEG signals recorded from 19 scalp locations from a participant (Vp.483) during (a) resting condition, (b) perception of a visual art object, and (c) mental imagery of the same art object. All pdfs were normalized to unit area. (d-f) Same pdfs as in (a-c) but after rescaling: P(y) by P_max and y by 1/P_max to preserve the normalization to unit area. The values in the inset indicate the degree of data collapsing as measured by the KL divergence (see the text for details). Lower divergence, i.e. higher data collapse, was found during mental imagery.
Scalp topographies of the differential (perception minus imagery) MKL are also shown in Fig. 3. The following noteworthy points were found. (i) The degree of universality was overall higher (i.e. MKL values were lower) in mental imagery than in the visual perception condition (Wilcoxon signed rank test, p = 0.038). (ii) Among frequency bands, this effect was most pronounced in the low frequency theta band (Wilcoxon, p = 0.013; mean difference (perception minus imagery) in MKL, ΔMKL = 0.07, which is 11% if expressed as a percentage change with respect to the perception condition), followed, but less significantly, by the alpha (ΔMKL = 0.03, 5.1%) and beta (ΔMKL = 0.01, 4.0%) bands. (iii) Among scalp locations, frontal brain regions, bilaterally, were associated with the least universality during visual perception. Since the differences between visual perception and mental imagery were most pronounced in the theta band (a = 18), I focussed the subsequent analysis only on this frequency band.

Figure 3. Universality across brain regions at different frequency bands for all participants. Mean Kullback-Leibler (MKL) divergence measure for the five different scales (a = 32, 18, 10, 5, and 2) used in the wavelet transform, which roughly correspond to the five standard frequency bands: delta (<4 Hz), theta (5-8 Hz), alpha (9-12 Hz), beta (13-30 Hz), and gamma (>30 Hz). Results were pooled across groups, participants, and electrode pairs.

Group Related Differences

Universality Across Brain Regions. The earlier results emphasized the differences between perception and imagery after pooling the data from all participants, whereas the group related effects are shown in Fig. 4. It is evident that the scale invariance property during mental imagery with respect to visual perception was more pronounced in artists than in non-artists. Further, artists also showed higher (Wilcoxon p < 0.0039) universality in mental imagery as compared to the resting condition (i.e. the MKL profile for artists was mostly negative, whereas it was more positive for non-artists). On topographic scales, bilateral frontal regions (F7, F8) in artists indicated reduced (paired Wilcoxon p < 0.002) universal structure during visual perception relative to mental imagery. In non-artists, the least degree of universality was found in the right frontal region (F8; Wilcoxon p < 0.037 for perception vs rest and p < 0.013 for imagery vs rest).

Universality Across Participants. Next I grouped the pdfs of each individual electrode region across participants within each group; the results for electrode region O2 are shown in Fig. 5. For this posterior brain region, data collapsing behaviour was significantly enhanced for both the perception and imagery conditions relative to rest. Fig. 6 shows the degree of universality of individual brain regions across participants but within each group. For artists, task-related increases (perception or imagery from rest) in the degree of data collapsing were found primarily in posterior electrode regions, excluding T4. Non-artists also showed a similar effect, but to a much lesser extent, of weaker data collapsing at rest in the posterior electrode regions. For artists, frontal regions bilaterally (F7 and F8) showed the least data collapsing during perception, whereas this effect was mostly right-accentuated for non-artists.
Surrogate Analysis

As a further statistical control, I generated 19 sets of surrogate signals for each set of 19 EEG signals obtained from each individual and for each condition, and compared the data collapse (at scale a = 18) behaviour of the surrogates to the original data. Fig. 7 shows one such comparison for an artist (Vp.483) while looking at a painting; the same comparison is displayed in Fig. 8 for mentally imagining the same painting. If a pdf of any electrode region is found to be significantly different from the set of pdfs of the surrogates, one can reasonably infer the phase correlations in this electrode region to be non-random and different from the phases of other electrode regions. Interestingly, the frontal electrode regions, bilaterally (F7, F8, Fp1, Fp2), showed long-tailed distributions which could not be reproduced by the surrogate signals, thus suggesting that these electrode regions most likely possess phase structure that is not only non-random but also different from that of other electrode regions. However, these effects were not found during mental imagery (Fig. 8), since the pdfs were almost indistinguishable from the surrogate pdfs and also from the pdfs of other electrode regions. These effects were found to be remarkably consistent across artists. Altogether this suggests that, as compared to other electrode regions, frontal electrode regions in artists possess distinctly different phase correlations during visual perception but similar phase correlations during mental imagery.

Discussion

The overall similarities in the topological profile of the mean Kullback-Leibler measure across several frequency bands during visual perception and mental imagery support the hypothesis that the diverse brain regions active during sensory-induced perceptions are reactivated during retrieval of such information [23,24,25]. However, I also found that this similarity was less in artists at the frontal regions for theta band oscillations, where mental imagery was found to induce stronger universal structure than visual perception. Or, conversely, the degree of universality at frontal regions was minimal across all brain regions in artists during visual perception. This could be explained as follows. Visual perception of art broadly consists of three stages [25]: (i) extraction of basic visual features, (ii) organization of these features into coherent and fundamental forms, followed by (iii) addition of meaning onto these forms through associations stored in long-term memory. This last stage can be termed top-down processing, where the brain adds information to raw visual impressions, giving a richness of meaning well beyond the sensory stimuli [26]. The prefrontal cortex plays a crucial role by providing this top-down control [27], and it is reasonable to assume that the involvement of the prefrontal cortex during perception of visual art would vary substantially across artists due to their expertise and training in visual art. However, the top-down control operation was significantly reduced during mental imagery of the painting shown before, thus leading to a substantial increase of universality in frontal regions. Further, low frequency oscillation in the theta band plays a prominent role in mediating access to stored representations in long-term memory [28], and moreover, an increase in theta band oscillation in frontal regions was found during concentrated mental activity requiring higher memory load [29].
During imagery, the extent of retrieved visual-art patterns from long-term memory was assumed to be much higher in artists than in non-artists, which possibly led to an increase of universality. Several studies have earlier shown that ongoing spontaneous brain activity exhibits scale-free behaviour at resting condition [30,31,32], where scale-free dynamics is primarily characterized by a power-law scaling behaviour of amplitude fluctuations in the raw signal or in specific frequency bands. Recent evidence also indicates that scale-free structure can be altered by external stimulus, such as nerve stimulation [33], performance feedback [34], music [35], and imaginary and visual motor-tracking [36]. It is noteworthy that all these studies report a stimulus related modulation, but not a disruption, of the scaling activity. The present study extends this finding by reporting that not only the scale-free properties but also the underlying universality could be modulated by complex cognitive tasks and by task-specific expertise.

But what can one conclude by finding such universal structure in large scale brain responses? Before answering this question, let me mention a few key details about universality. Universality, as discussed here, refers to a phenomenon whereby different systems exhibit very similar characteristic or critical exponents, which determine the correlation and scaling functions [9,37]. Critical exponents offer a complete description of the dynamics of a system near a continuous phase transition, including the emergence of a long-range interaction out of the paradoxical competition between an exponentially decaying correlation function and an exponentially increasing number of connecting paths. Two systems with similar scaling functions and critical exponents belong to the same universality class. Thus, if the critical properties of one system are known, it would theoretically be possible to predict the critical properties of another system belonging to the same universality class [38]. The finding that the artists group showed stronger universality across brain regions during mental imagery offers a surprising conclusion: despite the possibilities of wide individual variations across many hidden degrees of freedom in the group of artists, during mental imagery there exists a remarkable consistency in dynamical scaling and correlation characteristics across different brain regions distributed over the scalp.

Finally, I would like to leave two cautionary remarks. First, Kadanoff showed [10] that scaling and universality of critical exponents are primarily a consequence of the scale invariance of physical systems near critical points, but the converse is not essentially true; in other words, by observing some sort of universality, one cannot prove closeness to criticality and the presence of scale invariance. Second, the adopted experimental paradigm involved a very highly abstract task of complex cognition lasting for minutes, so one has to be careful with interpretations before "overstretching" the findings. The very nature, i.e. ecologically valid and naturally appropriate, of the task/paradigm, and the nature of the recorded signals (i.e.
from large scale EEG signals, it is almost impossible to prove the theoretical concept of universality as produced by two trajectories of two neuronal populations, the local manifolds of which have similar Jacobians close to their critical points, though a novel way was offered to find the closeness to universality) impose constraints; a more pragmatic approach would be to look for consistency and systematic differences across tasks and groups. In summary, a new method has been discussed to find the hidden universal structure in a multivariate data set. The paper also presents evidence that the conceptual framework provided by the theory of statistical mechanics, which characterizes complex systems poised at criticality by the twin pillars of scaling and universality, may be useful in providing new insights into the analysis of brain electrical responses recorded under a complex cognitive task paradigm.

Materials and Methods

Participants and Stimuli

Forty-three female participants were divided into two groups: (i) artists (n = 19, mean age 38.4 yrs) with an M.A. degree in Fine Arts, and (ii) non-artists (n = 24, mean age 36.6 yrs) without any training or prominent interest in visual art. Three conditions were considered: (1) visual perception: looking at slides of four paintings characterizing four different periods in the history of the Fine Arts (Bean-festival by Jordaens, a charcoal etching by Rembrandt, a portrait by Holbein, and an abstract figure by Kandinsky); (2) mental imagery: mental imagination of these paintings shown before; and (3) rest: resting with eyes open. At the end of each condition, the participants read a newspaper article for distraction. Each condition lasted for at least 2 min, and the orders of the tasks as well as the orders of the paintings were randomized. All participants gave informed written consent, and the study was formally approved by the local ethics committee of the Brain Research Institute, University of Vienna, Austria.

Data recording

Multivariate (19 channels) EEG signals were recorded by 19 gold-cup electrodes (Fig. S1), which were equally distributed over the scalp according to the standard 10-20 electrode placement system [39], with the forehead as ground. EEG signals were amplified by a Nihon-Kohden amplifier. The signals were later algebraically re-referenced to the average signals of the two ear-lobes. The electrode impedance was kept below 8 kOhm, the sampling frequency was 128 Hz and the A/D conversion was 12 bit. Custom-made independent component analysis based software was utilized offline to remove eye-blink related components and other artefacts.

Data processing

The data processing was composed of four steps as follows.

1. Wavelet analysis. The wavelet analysis is analogous in nature to the Fourier analysis, by which a signal is decomposed into a set of finite basis functions. The primary advantage of the wavelet is the local property of the chosen wavelet basis function, which may be appropriate to detect transient dynamics in the signal, whereas such transients are often obscured by the fixed trigonometric basis functions with infinite support used in the Fourier analysis. Let x(t) be the signal and Ψ be the mother wavelet. The wavelet coefficients W_x(a,t) are produced through convolution of the scaled mother wavelet function with the analyzed signal as follows:

W_x(a,t) = (1/√a) ∫ x(u) Ψ((u - t)/a) du,   (1)

where a is the scale of the wavelet, which is inversely related to frequency, and t is the local time origin of the analyzed wavelet.
The nature of oscillation within the continuous wavelet spectrum will depend crucially on the nature, complex or real, of the mother wavelet. The complex wavelet usually produces a constant power across the entire time duration of the oscillation, whereas a real wavelet produces power mostly at those times where the oscillation is at an extreme or where a sharp discontinuity occurs. In this study, I used the real Morlet wavelet, which has [-4, 4] as effective support and is defined as an exponentially decaying sinusoidal signal: Ψ(x) = e^(-x²/2) cos(5x). There are numerous other wavelets that could also be adopted [40]; however, the Morlet wavelet is particularly suitable for oscillatory signals generated by dynamical systems. Typically, the complex Morlet wavelet has been routinely used in estimating time-varying spectral content in brain oscillations [41,42], whereas real wavelets are better suited to detect sharp signal transitions. The Matlab function cwt was used to compute the continuous wavelet transformed coefficients.

2. Analytic signal formation. The second step of the analysis was to extract the instantaneous variation amplitude of the wavelet transformed signal by means of an analytic signal approach [43]. For any wavelet transformed signal W_x(a,t) at scale a, an analytic signal, z(t), is defined as

z(t) = W_x(a,t) + i y(t) = A(t) e^(iφ(t)),   (2)

with the instantaneous variation amplitude

m(t) = A(t) = [W_x(a,t)² + y(t)²]^(1/2).   (3)

Theoretically, there are infinitely many ways of defining the imaginary part y(t), but the Hilbert transform provides a unique way of defining the imaginary part so that the result becomes a complex analytic function. Further, the Hilbert transform (H.T.) is particularly attractive because it does not require any information concerning the centre frequency of the signal. For the sequence W_x(a,t), the Hilbert transform is calculated as

y(t) = (1/π) P.V. ∫ W_x(a,t') / (t - t') dt',   (4)

where P.V. means that the integral is taken in the sense of the Cauchy principal value. From the above equation, the Hilbert transform y(t) can be considered as the convolution of the concerned time series with 1/(πt), and it does not produce a change of domain, unlike the Fourier transform, which changes from a time domain to a frequency domain representation. The Hilbert transform can be realized by an ideal filter whose amplitude response is unity and whose phase response is a constant π/2 lag at all frequencies [44]. The Matlab function hilbert was used to calculate the analytic signal. The concept of the analytic signal may also be better understood by comparison with the role of phasors in simplifying manipulations of currents and voltages [45]. A rotating phasor is defined by

z(t) = A e^(i(ωt + φ)) = A cos(ωt + φ) + i A sin(ωt + φ).   (5)

Comparing Eq. (2) to Eq. (5), it is evident that the given real signal W_x(a,t) and its Hilbert transformed version y(t) play roles analogous to the real and imaginary parts of a unit phasor. This fact can be explained as follows. Let us consider a point undergoing uniform circular motion about the centre. The projection of this motion onto axes at right angles to one another yields a pair of sine and cosine waves, respectively. If the time axes are combined, the sine and cosine waves are included, and their instantaneous values are projected into three dimensional space, then the motion of the resulting point traces out the path of the tip of a vector originating at the time axis. The resulting vector is termed the analytic signal, and its length is the amplitude of the analytic signal.

Figure 8. Surrogate analysis during mental imagery. Same as in Fig. 7 but during mental imagery. Note the pdfs for frontal electrode regions being indistinguishable from those of surrogates and of other electrode regions, which is in sharp contrast with visual perception (Fig. 7).
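Steps 1 and 2 can be made concrete with a short sketch. The original analysis used Matlab's cwt and hilbert; the Python version below is an assumed equivalent, not the author's code, with the scale a expressed in samples so that, at a 128 Hz sampling rate, a = 18 gives a pseudo-frequency of about (5/2π) × 128/18 ≈ 5.7 Hz, in the theta range.

```python
# Minimal sketch (assumed Python equivalent of the Matlab cwt/hilbert steps):
# single-scale CWT with the real Morlet wavelet psi(x) = exp(-x^2/2)cos(5x),
# followed by the instantaneous amplitude m(t) via the analytic signal.
import numpy as np
from scipy.signal import hilbert

def real_morlet_kernel(a, support=4.0):
    """Scaled real Morlet sampled on its effective support; a is in samples."""
    n = np.arange(-int(support * a), int(support * a) + 1)
    x = n / a
    return np.exp(-x**2 / 2.0) * np.cos(5.0 * x) / np.sqrt(a)

def wavelet_coeffs(signal, a):
    """W_x(a, t): convolution of the signal with the scaled mother wavelet."""
    return np.convolve(signal, real_morlet_kernel(a), mode="same")

def instantaneous_amplitude(w):
    """m(t) = |z(t)| with z(t) = w(t) + i H[w](t), as in Eqs. (2)-(3)."""
    return np.abs(hilbert(w))  # hilbert() returns the analytic signal

# Example: 50 s of a 128 Hz test signal, theta-range scale a = 18
fs = 128
t = np.arange(0, 50, 1 / fs)
eeg = np.sin(2 * np.pi * 6 * t) * (1 + 0.3 * np.sin(2 * np.pi * 0.2 * t))
m = instantaneous_amplitude(wavelet_coeffs(eeg, a=18))
```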
Ideally, the analytic procedure puts the low-frequency content into the amplitude a(t) and the high-frequency content into the phase φ(t) [43]. Here I investigated only the instantaneous amplitude component, whose empirical probability density function (pdf) was studied next.

3. Empirical probability density function (pdf) analysis. For each electrode, task, and participant, I computed the pdf P(y) of the instantaneous amplitudes of the wavelet-transformed signal and normalized it to unit variance. For each task condition (rest/perception/imagery), these individual pdfs were pooled together according to scalp locations or participants within each group (artist/non-artist). To test the hypothesis that there is a hidden, possibly universal, structure to these different pdfs, the pdfs were rescaled as follows: P(y) by P_max and y by 1/P_max, to preserve the normalization to unit area [22]. If there indeed exists a universal structure among all these pdfs, the rescaled pdfs would collapse onto a single pdf, and the entire pool of pdfs could be described by a single scaling parameter. Such collapsing of density functions is reminiscent of a wide class of well-studied physical and natural systems with universal scaling properties [9]. This stems from the fact that generalized homogeneous functions display this property of scale invariance and data collapsing, and such functions were investigated in the context of the formalism used to treat thermodynamic functions, static correlation functions, dynamic correlation functions, and universality near the critical point [9]. Let us briefly mention the key features of generalized homogeneous functions [46]. A function f(x,y,z,…) of any number of variables is called homogeneous of degree m in these variables if multiplication of each of the variables x, y, z, … by a positive scalar λ results in multiplication of the function by λ^m, i.e.,

$$f(\lambda x, \lambda y, \lambda z, \ldots) = \lambda^m f(x,y,z,\ldots), \qquad (6)$$

where the parameter m is generally called the degree of homogeneity. This function will be called a generalized homogeneous function if one can find a set of numbers a, b, c, …, not all zero, such that

$$f(\lambda^a x, \lambda^b y, \lambda^c z, \ldots) = \lambda f(x,y,z,\ldots).$$

However, it needs to be stressed here that one would rarely obtain strict data-collapsing behaviour of pdfs when dealing with real-life ongoing EEG signals, and even more so when the underlying task is as complex as visual perception and mental imagery of a complex object such as a painting. Therefore, it was more appropriate to perform a comparative study, i.e., to ask whether data collapsing was stronger (or weaker) in visual perception than in mental imagery. In order to quantify the degree of data collapsing, I used the Kullback-Leibler (KL) divergence measure [47], which is also known as relative entropy or cross-entropy [48]. The KL divergence between two pdfs P(x) and Q(x) over the same alphabet X is

$$KL(P\,\|\,Q) = \sum_{x \in X} P(x)\,\log\frac{P(x)}{Q(x)}.$$

KL is always non-negative and is zero iff P = Q, so the degree of similarity between two pdfs is inversely related to the value of KL. Therefore, computing KL for all pairwise pdfs within each pool and averaging gives the mean Kullback-Leibler (MKL) measure, which approximately quantifies the degree of universality or scale invariance of that pool. The lower the value of MKL, the higher the degree of universality and scale invariance.

4. Surrogate analysis. The final analysis was based on the method of surrogate data analysis [49], which is an application of the popular statistical bootstrapping technique [50].
The surrogate series are generated from the original EEG signals on the basis of a certain null hypothesis. The most basic null hypothesis is that the EEG is completely random; a more advanced one is that the EEG signals are generated by filtered Gaussian linear stochastic processes, possibly observed through a static nonlinear but invertible transformation function. Here, the surrogates possess all the linear structure (mean, variance, power spectrum, and circular autocorrelation) but are devoid of any phase correlations present in the original signal. Since multiple EEG signals were measured simultaneously, it was important to also incorporate the phase-correlation properties between the multiple EEG signals [51]. Accordingly, the set of surrogates possesses not only the linear structure of the individual EEG signals mentioned above but also their cross-spectral information. In this study, I generated 19 surrogates for each EEG signal, and their pdfs of the instantaneous amplitude of the wavelet transform at the same scale a were compared to the corresponding pdf of the original signal. If the original pdf is indistinguishable from the set of surrogates, one can accept the underlying null hypothesis of a linear Gaussian process.
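The last two steps can be sketched compactly. The Python fragment below generates univariate phase-randomized surrogates (a common construction that preserves the power spectrum while destroying phase correlations; the multivariate, cross-spectrum-preserving variant of [51] is not reproduced here) and compares amplitude pdfs with the KL divergence. The CWT stage and the P_max rescaling are omitted for brevity, and all names and bin choices are illustrative assumptions.

```python
import numpy as np
from scipy.signal import hilbert

def phase_randomized_surrogate(x, rng):
    """Surrogate with the same power spectrum as x but random Fourier phases."""
    X = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, X.size)
    phases[0] = 0.0                      # keep the DC term real
    if x.size % 2 == 0:
        phases[-1] = 0.0                 # keep the Nyquist term real
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=x.size)

def kl_divergence(p, q, eps=1e-12):
    """Discrete KL(P||Q) between two histogram-estimated pdfs."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def amplitude_pdf(x, bins):
    """Instantaneous-amplitude histogram of a signal (wavelet step omitted)."""
    a_t = np.abs(hilbert(x))
    counts, _ = np.histogram(a_t, bins=bins)
    return counts.astype(float)

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)            # stand-in for one EEG channel
bins = np.linspace(0.0, 5.0, 64)

p = amplitude_pdf(x, bins)
# 19 surrogates per signal, as in the paper; the mean KL quantifies how far
# the original amplitude pdf sits from the surrogate pool.
kls = [kl_divergence(p, amplitude_pdf(phase_randomized_surrogate(x, rng), bins))
       for _ in range(19)]
print(np.mean(kls))
```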
Eosinophilic Colitis in Recurrent Sigmoid Volvulus. ABSTRACT. Eosinophilic colitis (EC) falls along the spectrum of inflammatory gastrointestinal disorders in which eosinophils infiltrate the gut without a known cause of tissue eosinophilia. Eosinophilic gastrointestinal disorders include eosinophilic esophagitis, eosinophilic gastritis, and, least common, EC. The presentation of EC is extremely variable, spanning mucosal, submucosal, and transmural inflammation. We present a case of recurrent volvulus with histologic findings of eosinophilia.

INTRODUCTION. Primary eosinophilic gastrointestinal disorder is defined as the presence of intestinal inflammation and eosinophils in the absence of peripheral eosinophilia. 1,2 Any portion of the gastrointestinal tract can be affected by eosinophilic infiltration, with the least commonly affected portion being the colon. Primary EC is very rare because the presence of colonic eosinophils is most commonly seen in inflammatory bowel disease, immunoglobulin E-mediated food allergy, and parasitic infections. 3,4 For this reason, EC remains a diagnosis of exclusion at this time; the differential diagnosis of colonic eosinophilia includes inflammatory bowel disease, parasitic infections, and allergens. Primary EC typically presents as diarrhea in the mucosal form, whereas the transmural form has been associated with volvulus, intussusception, and even perforation in select cases. 3 In previous literature, cecal volvulus has been associated with EC; however, an association with sigmoid volvulus has not been reported. We present a case of recurrent volvulus with subsequent sigmoidectomy and histologic findings of transmural and subserosal eosinophilic infiltration.

CASE REPORT. A 55-year-old woman with a previous sigmoid volvulus and resultant sigmoidectomy in 2017 presented with 1.5 weeks of abdominal pain, increasing abdominal distention, and constipation. She had not had a bowel movement for 5 days at the time of presentation and was no longer passing flatus. She stated that, other than her abdominal symptoms, she had no other complaints. Per the patient, her symptoms were very similar to those of her previous volvulus in 2017, though slightly less severe. At that time, endoscopic decompression had been successful, with subsequent resection of 34.9 cm of the sigmoid colon approximately 2 weeks later and a stapled anastomosis. On the current presentation, she had no abnormal laboratory values, and her vital signs were within normal limits. Her examination was notable for a distended, tympanic abdomen without any significant tenderness. Initial abdominal and pelvic computed tomography with IV contrast showed an acute sigmoid volvulus causing a large bowel obstruction; this was also initially evident on a kidney, ureter, and bladder radiograph (Figure 1). The patient was initially evaluated by a general surgeon, who recommended evaluation by a gastroenterologist before considering surgical intervention. She was taken for an urgent attempt at endoscopic decompression with flexible sigmoidoscopy. The water immersion technique was used, with normal-appearing mucosa until approximately 25 cm from the rectum at the sigmoid flexure. At this point, the examination showed a swirling appearance of the lumen, consistent with volvulus (Figure 2), and the mucosa was friable, erythematous, and ulcerated, concerning for a subacute-to-chronic process. There was a stricture approximately 2 cm in length at the transition point, which was traversed with a pediatric colonoscope with minimal resistance.
Proximal to the stricture, a moderate amount of gas and stool was seen, with otherwise normal-appearing mucosa. Multiple attempts at endoscopic decompression were unsuccessful. The surgical team was notified and recommended bowel preparation. The patient continued to have symptoms similar to those at presentation during the preparation, and she was subsequently taken for an open sigmoidectomy with end-to-end anastomosis 5 days after the initial presentation. During this sigmoidectomy, she was noted to have a redundant sigmoid; 16.5 cm of colon was resected, and anastomosis was achieved with an 80-mm gastrointestinal anastomosis stapler. Pathology of the surgical resection showed eosinophilic infiltration of the fibrotic submucosa, with extension through the muscularis propria into the subserosa and involving serosal adhesions (Figures 3 and 4). The findings were present both in areas underlying the ulcerated mucosa at the anastomotic site and away from the anastomotic site in areas with normal overlying mucosa. No additional biopsies from other colonic sites were taken. Features to suggest inflammatory bowel disease were not identified in the mucosa, and there was no evidence of parasitic organisms. A review of the previous 2017 resection demonstrated no evidence of eosinophilic infiltrate. Since surgery, the patient has done very well, with no further recurrence of symptoms.

DISCUSSION. EC is rare, with only a handful of cases reported in the literature since 1970. 1 At this time, there are no established diagnostic criteria, making definitive diagnosis challenging. Common presentations of EC include abdominal pain, cramping, diarrhea, and weight loss. Given that colonic eosinophils are present in allergies, inflammatory bowel disease (IBD), and parasitic/helminth infections, primary EC is a diagnosis of exclusion requiring extensive clinical and laboratory correlation. 1,3 Our patient had no history of allergies, no symptoms or pathology consistent with IBD, and no history to support parasitic infection. Primary EC more typically affects neonates and juveniles, with rare adult presentations. In younger patients, the disease is typically mild and self-limited. Adult presentations depend on the layer of colon affected: mucosal infiltration typically presents with malabsorption, diarrhea, and protein-losing enteropathy, whereas transmural infiltration presents with colonic thickening and intestinal obstruction. 1,4 Histologic features demonstrate a dense eosinophilic infiltration of the colon in either a segmental or a diffuse pattern. Diagnosis typically requires multiple biopsies and the elimination of any secondary causes of colonic eosinophilic infiltration. 1,3 Treatment of EC is not standardized because of the rarity of the diagnosis, and current management is based on the limited number of case reports available in the literature. Typical treatment of EC includes dietary modification, given the common association with immunoglobulin E-mediated food allergens, with elimination diets and amino acid-based foods; diet modifications typically involve elimination with slow reintroduction of food groups. In addition to dietary modifications, symptomatic EC can be treated with glucocorticoids and azathioprine, which inhibit eosinophil growth factors. 2,4 Additional medical interventions include leukotriene receptor antagonists as well as antihistamines and mast cell stabilizers. 2,4 In our patient, the main question revolves around EC as the cause of her recurrent sigmoid volvulus.
The initial volvulus in 2017 had no evidence of eosinophilic infiltrates on pathology; however, dense transmural infiltration was seen in the 2020 sample. There have been no previous reports of EC-related sigmoid volvulus. Given that she had no history of IBD, no food allergies, and no symptoms or history to support parasitic infection, the main differentials for her recurrent volvulus are primary EC vs an allergic reaction to the staples used at her previous anastomosis site. Since the second resection, she has had no gastroenterological symptoms, supporting the two remaining differentials rather than a secondary cause of colonic eosinophilia. DISCLOSURES. Author contributions: K. Zucker wrote the article. F. Pradhan, A. Gomez, and R. Nanda edited the article. R. Nanda is the article guarantor. Financial disclosure: None to report. Informed consent was obtained for this case report.
Landau Gauge Fixing supported by Genetic Algorithm. A class of algorithms for Landau gauge fixing is proposed, which makes the steepest ascent (SA) method more efficient through concepts from genetic algorithms. The main concern is how to incorporate random gauge transformations (RGT), i.e., mutation in genetic algorithm (GA) terminology, so as to achieve a higher rate of minimal Landau gauge fixing while keeping time consumption low. One of these algorithms uses a block RGT, another uses an RGT controlled by the local fitness density, and the last uses an RGT determined by an Ising Monte Carlo process. We tested these algorithms on SU(2) lattice gauge theory in 4 dimensions with small β values, 2.0, 1.75 and 1.5, and report improvements in hit rate and/or in time consumption compared to other methods.

INTRODUCTION. Gauge fixing degeneracies (the existence of Gribov copies) are generic phenomena [1][2][3] in nonabelian continuum gauge theories, and thus there exist fundamental problems, e.g., what is the correct gauge-fixed measure in the path integral formalism. Although the foundation of lattice gauge theories does not necessitate gauge fixing, it is often required for field-theoretic transcriptions of their nonperturbative dynamics. The principle of the Landau gauge fixing algorithm is given as an optimization problem for certain functions along the gauge orbit of the link variables [3]. There are in general many extrema of the optimization function, and all these extremal points on the gauge orbit correspond to the Landau gauge, i.e., to Gribov copies. The absolute maximum corresponds to the minimal Landau gauge [4]. In order to fix the Landau gauge uniquely, the minimal Landau gauge is the most favourable goal. Local search algorithms were developed using a chain of gauge transformations [5,6]. Since there are many Gribov copies, the gauge orbit paths are easily captured by these extrema. We call these methods the steepest ascent (SA) method. In situations where SA fails to attain the absolute extremum, one can try a simple random search [7] on the gauge orbit followed by SA. The method of Hetrick and de Forcrand [8] (HdeF) is the only one aiming at minimal Landau gauge fixing that is systematic, in the sense that no random trials are involved. However, it works successfully only for large-β samples, i.e., rather smooth configurations. Thus, finding an efficient algorithm for minimal Landau gauge fixing is still an open problem. We report results of an attempt with some GA-type methods, in comparison with other methods such as the simple RGT method. We work on SU(2) gauge theory on an $8^4$ lattice and define gauge fields $A_\mu(x) = \frac{1}{2i}\big(U_\mu(x) - U_\mu^\dagger(x)\big)$; the optimization function, the fitness, is then given as

$$F_U(G) = \frac{1}{Vd}\sum_{x,\mu}\frac{1}{2}\,\mathrm{Re\,Tr}\!\left[G(x)\,U_\mu(x)\,G^\dagger(x+\hat{\mu})\right],$$

where G(x) denotes the gauge transformation at site x, V is the lattice volume, and d the number of dimensions. The fitness F_U(G) can be viewed as the negative of the energy of an SU(2) spin system G sitting on the sites, and thus the problem is equivalent to finding the lowest-energy state under the randomized interaction U. A straightforward GA strategy was applied to minimal Landau gauge fixing in [9]; those preliminary results show that the straightforward application of GA is not good enough to become a practical method. We tested three types of GA methods, which differ from each other in how the RGT are incorporated into the algorithms, and we compare their performance with the simple RGT method.

Algorithms. Our aim is to develop algorithms that are efficient for small β. The basic building block of the algorithms is the RGT.
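To make the fitness and the RGT building block concrete, here is a minimal Python sketch; the SU(2) parametrization, the normalization of F_U(G), and all names are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

L, D = 8, 4                    # 8^4 lattice in 4 dimensions, as in the paper
ID = np.eye(2, dtype=complex)
SIGMA = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)   # Pauli matrices

def random_su2(rng):
    """Haar-random SU(2) element a0*1 + i*(a . sigma) with |a| = 1."""
    a = rng.standard_normal(4)
    a /= np.linalg.norm(a)
    return a[0] * ID + 1j * np.einsum("i,ijk->jk", a[1:], SIGMA)

def fitness(U, G):
    """F_U(G) = (1/(V*D)) sum_{x,mu} (1/2) Re Tr[G(x) U_mu(x) G^dag(x+mu)]."""
    total = 0.0
    for mu in range(D):
        G_up = np.roll(G, -1, axis=mu)     # G(x + mu_hat), periodic boundaries
        prod = np.einsum("...ab,...bc,...dc->...ad", G, U[mu], G_up.conj())
        total += 0.5 * np.einsum("...aa", prod).real.sum()
    return total / (L**D * D)

rng = np.random.default_rng(1)
shape = (L,) * D
U = np.stack([np.stack([random_su2(rng) for _ in range(L**D)])
              .reshape(*shape, 2, 2) for _ in range(D)])   # random link variables
G = np.stack([random_su2(rng) for _ in range(L**D)]).reshape(*shape, 2, 2)

print(fitness(U, G))           # fitness of a whole-lattice RGT applied to U
```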
The simple RGT (TRGT) method, which applies an RGT on the whole lattice, takes time, while it is difficult for SA paths to escape from a Gribov copy if the RGT is restricted to blocks covering too small an area of the lattice. Thus, the main feature of our algorithms lies in how the blocks to which the RGT is applied are determined. Given a Gribov-copy link configuration U, the following three types of blocking for the RGT are devised. 1. The whole lattice is partitioned into N_div^d chequered blocks, where d = 4 is the number of dimensions. An RGT is then applied on the white blocks and a constant RGT on each black one, and vice versa (sketched in code below). We call this method the blocked RGT (BRGT) method. 2. We set two parameters R1 and R2, and the sites i where the RGT is applied are chosen according to the local fitness density f(i), by f(i) < R1 or R2 < f(i). We call this method the local fitness density RGT (LFRGT). 3. An Ising spin interaction on a randomly chosen coarse lattice is defined from the gauge spin interaction given by U, such that at least one antiferromagnetic interaction is involved. Through Monte Carlo simulation of this Ising system, one obtains up-spin blocks B+ and down-spin blocks B−, and the blocks for the RGT are given by one of these sets. β_Ising is chosen such that the sizes of the two sets B± become comparable. We call this method the Ising RGT (IRGT). We use the local exact algorithm [5,6] for the SA method. Given a Gribov copy U, the SA method, applied after an RGT using one of the three blockings, brings U to an extremum of the fitness by steps of gauge transformations. This new copy in the Landau gauge is used as the initial copy for the next iteration, and this sub-procedure is repeated M_itr times. The original Gribov copy and the maximum-fitness copy among the M_itr new gauge copies are then compared in fitness value. If the fitness of the new one is higher, the sub-procedure is started again with the new copy as the initial one; otherwise the process stops, and the initial copy is taken as the expected configuration with the maximum fitness value. In addition to the above basic algorithm, two types of modification are devised, as follows: 1. As the initial copy in the sub-process, the copy with the better fitness between the current copy and the preceding one is always chosen. We call this the inter-selection-on (IS) scheme. 2. As the initial copy in the sub-process, the product of crossing is adopted, where chequered block crossing is performed between the best-fitness copy and the second-best one among the copies obtained so far. We call this the crossing-on (C) scheme. We tested these algorithms on an SU(2) 8^4 lattice with β = 2.0, 1.75, 1.5 and tuned some parameters, such as N_div, β_Ising and M_itr, with or without the IS and/or C schemes.

Results and Performance. Our three GA-type methods were executed on the same set of 50 randomly produced copies from a suitably chosen copy of the samples at β = 2.0, 1.75, 1.5. The TRGT method and the method of HdeF were also tested on the same set. The performance of these methods, i.e., the hit rate of the minimal Landau gauge and the average time consumption for the gauge fixing, is compared. We fix M_itr = 5, and in parameter-search tests at β = 2.0 we found that BRGT with N_div = 2 shows sufficient global search power, while BRGT with N_div = 4 shows a lower hit rate. The IS scheme can be viewed as a kind of elitism, and it is known that elitism is a suitable strategy when global search power is available. BRGT with the IS scheme and N_div = 2 shows a powerful and efficient search, while IRGT with IS does not.
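The chequered blocking behind BRGT's global search power can be made concrete with the following sketch, which reuses random_su2 from the sketch above; the mapping of sites to blocks and the handling of the per-block constants are illustrative assumptions consistent with the description in item 1.

```python
import numpy as np

def blocked_rgt(G, L, D, n_div, rng):
    """BRGT mutation: a fresh random RGT on every site of a 'white' block and
    one independent constant RGT per 'black' block (n_div^D chequered partition)."""
    coords = np.indices((L,) * D)           # site coordinates, shape (D, L, ..., L)
    block = coords // (L // n_div)          # per-axis block index of each site
    colour = block.sum(axis=0) % 2          # 0 = white block, 1 = black block
    consts = {}                             # one constant SU(2) element per black block
    G_new = np.empty_like(G)
    for idx in np.ndindex(*(L,) * D):
        if colour[idx] == 0:
            g = random_su2(rng)             # site-by-site randomization (white)
        else:
            b = tuple(int(block[(i,) + idx]) for i in range(D))
            if b not in consts:
                consts[b] = random_su2(rng)
            g = consts[b]
        G_new[idx] = g @ G[idx]
    return G_new

# Example: n_div = 2 gives 2^4 = 16 blocks on the 8^4 lattice; swapping the
# roles of the white and black blocks gives the 'vice versa' variant.
```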
For LFRGT, the cut parameters R1 and R2 affect its search power. Without R2, or with too large an R2, even a high R1 does not work well, nor does a low R2 without R1. Since LFRGT with both parameters suitably chosen achieves an efficient search, the IS scheme helps. From Table 1, our algorithms, BRGT with IS and N_div = 2, and LFRGT with IS, R1 = 0.5 and R2 = 0.85, have high hit rates and exhibit good performance comparable with TRGT. The HdeF method with small β is known not to tend to the maximum fitness [8]. The hit-rate performance of the various methods is given in Figure 1, and the average time consumption is shown in Table 1. This work is supported by the Japan Society for the Promotion of Science, Grant-in-Aid for Scientific Research (C) (No. 11640251).
Potential of phage depolymerase for the treatment of bacterial biofilms. ABSTRACT. Resistance of bacteria to antibiotics is a major concern in medicine and veterinary science. Bacterial biofilm structures not only prevent the penetration of drugs into cells within the biofilm's interior but also aid in evasion of the host immune system. Hence, there is an urgent need to develop novel therapeutic approaches against bacterial biofilms. One potential strategy to counter biofilms is to use phage depolymerases, which degrade the matrix structure of the bacteria and enable access to the bacterial cells. This review mainly discusses the methods by which phage depolymerases enhance the efficacy of the human immune system and the therapeutic applications of some phage depolymerases, such as single phage depolymerase application, combined therapy with phage depolymerase and antibiotics, and phage depolymerase cocktails, for treating bacterial biofilms. This review also summarizes the relationship between bacterial biofilms and antibiotic resistance.

Introduction. It is estimated that antimicrobial-resistant bacterial infections will become the leading cause of death by 2050 [1]. The formation of bacterial biofilms is considered to be one of the major causes of antibiotic resistance in bacteria [2]. The bacterial biofilm acts as a virulence factor. Biofilms can prevent the penetration of drugs through their matrix and reduce the ability of antibiotics to reach the surface of bacterial cells. Furthermore, biofilms enable bacteria to evade the host immune system [3]. Some pathogenic bacteria, such as E. coli, K. pneumoniae, P. aeruginosa, S. aureus, and E. faecalis, can form biofilms on the surfaces of medical instruments and of human and animal tissues [4]. In fact, it has been estimated that bacterial biofilms are related to approximately 80% of chronic and recurrent bacterial infections in humans, including cystic fibrosis, endocarditis, meningitis, osteomyelitis, rhinosinusitis, and periodontitis, as well as kidney and prosthesis infections [5]. Bacteria that are susceptible to antibiotics in planktonic form can develop increased antibiotic resistance after biofilm formation on a suitable surface [6]. Possible reasons include biofilms reducing the penetration of antibiotics into the deeper layers, biofilm bacteria growing slowly in the deeper layers, and biofilm bacteria developing molecular mechanisms of antibiotic resistance. There is, thus, an urgent need to develop novel therapeutic approaches, beyond the conventional antibiotics, for use against bacterial biofilms.
The use of phage depolymerases is a promising therapeutic strategy for preventing and controlling bacterial biofilm-associated infections [7]. Some phages have developed the ability to degrade the polysaccharide-based structures produced by biofilm-forming bacteria in order to gain entry into bacterial cells and replicate their genetic information [8,9]. Not all phages encode depolymerases with exopolysaccharide-degrading activity; depolymerases are encoded by some phages that infect encapsulated bacteria. Phage depolymerases are encoded in the same open reading frames as phage structural proteins or in close proximity to these genes, and they are located mainly on the tail fibres, base plates, and neck [10]. Phage depolymerases can be divided into two main groups, hydrolases (EC 3.2.1.-) and lyases (EC 4.2.2.-), based on the degradation mode of the carbohydrate polymers on the surface of the bacteria [11]. These depolymerases can degrade polysaccharides by recognizing specific ligands on the bacterial surface. Their specific binding to capsular polysaccharides (CPSs) and lipopolysaccharides (LPSs) results in the destruction of the repeating units of the polysaccharide [12]. Phage depolymerases do not directly kill the bacteria; instead, they strip the protective polysaccharide layers from the bacterial cells, which exposes and sensitizes them to components of the immune system or to antibacterial agents. Moreover, the degradation of polysaccharides by phage depolymerases can increase the penetration of antibiotics into the biofilm. Thus, phage depolymerases demonstrate a synergistic effect with some antibiotics against biofilm-forming pathogens [13][14][15][16]. In the treatment of bacterial infections, phage depolymerases are compared with phage endolysins and phage holins in Table 1. Conventional antibiotics can kill growing and dividing bacterial cells, such as planktonic bacteria, that is, those in free-living form [18]. However, the minimum inhibitory concentrations (MICs) of conventional antibiotics for biofilm bacteria are 100-1000 times higher than those for planktonic bacteria [19].

Formation of the bacterial biofilm. Biofilm formation can be influenced by numerous factors, such as the condition of the surface on which the biofilm is formed, cellular structures, and chemical and physical growth factors [20]. Hence, the formation of a bacterial biofilm is a complex process requiring quorum sensing and different sets of genes for transcription and translation.
Bacterial biofilm formation can be classified into five stages (Figure 1). The first stage is reversible attachment, wherein the cells newly and loosely attach to the surface via electrostatic forces, van der Waals forces, and hydrophobic interactions [24]. At this stage the attachment is very weak and reversible; the bacteria either commit to the biofilm mode or return to the planktonic lifestyle. Fimbriae, pili, and flagella contribute to their attachment to rough and hydrophobic substances. The second stage of bacterial biofilm formation is monolayer formation, wherein the loosely attached bacteria begin to produce extracellular polymeric substances (EPS) and consolidate the attachment process [21]. After monolayer formation, irreversible adhesion occurs. The third stage is microcolony formation, wherein the various microbial cells continue to accumulate and grow into multilayered cell clusters surrounded by EPS, thus leading to the formation of microcolonies and three-dimensional structures [22,24]. In the fourth stage, the microcolonies develop into mature biofilms, which may be mushroom- or tower-like in shape with fluid-filled channels. These channels ensure the diffusion and circulation of nutrients, oxygen, and essential substances within the microenvironment [23]. According to Marchabas, the mature biofilm structure includes, from the outside to the inside, the bulk of the biofilm, the linking film, the conditioning film, and the substratum [25]. Biofilm dispersal, the last stage of biofilm formation, involves detachment of the bacteria from the mature biofilm and their conversion into the planktonic state. This is a cyclical process, with the detached bacterial cells potentially being able to colonize new surfaces.

Bacterial biofilms reduce the penetration of antibiotics into deeper layers. A bacterial biofilm is a complex three-dimensional structure comprising bacteria, EPSs, metabolites, and nutrients. EPS forms the skeleton of the biofilm [30] and maintains the integrity and persistence of the biofilm architecture. EPS can prevent or retard the penetration of antibiotics into the deeper layers of the biofilm, either independently or in combination with eDNAs [5]. The ability of antibiotics to penetrate a bacterial biofilm varies among the various classes of antibiotics and bacterial genera. Singh et al. [31] found that the average percentage reductions in the penetration of vancomycin, chloramphenicol, amikacin, ciprofloxacin, imipenem, cefotaxime, and tetracycline were 57%, 34%, 22%, 18%, 14%, 11%, and 9%, respectively, in S. epidermidis, S. aureus, K. pneumoniae, and E. coli biofilms. In another study, polysaccharide intercellular adhesin (PIA) was shown to reduce the penetration of many antibiotics, such as oxacillin, cefotaxime, teicoplanin, and vancomycin, through the biofilm [32]. PIA, also known as poly-β-1-6-N-acetylglucosamine (PNAG), is the only EPS produced by staphylococci [33]. PIA determines the cell-surface hydrophobicity of S. epidermidis and mediates the initial adherence of the biofilms to some extent [34]. Different bacterial strains may have different types of EPSs; while some bacteria produce several different types, others produce only one dominant EPS molecule. Anderl et al. [35] found that ampicillin did not penetrate the biofilm of K. pneumoniae, which indicates the resistance of this species to ampicillin when in biofilm form. Hoyle et al. [36] reported a decrease in the diffusion of piperacillin through P. aeruginosa biofilms formed on dialysis membranes. P.
aeruginosa biofilms prevented the entry of antibiotics into bacterial cells, and imipenem or ceftazidime at a concentration of 2560 μg/mL could not eradicate these biofilms. It was also reported that the permeation rates of macrolides, fluoroquinolones, beta-lactams, gentamicin, and amikacin through the alginate of P. aeruginosa strain 214 were 100%, >75%, >75%, 73%, and 59%, respectively, which indicates that biofilms can limit the permeation rates of antibiotics to various degrees [37].

Biofilm bacteria grow slowly in deeper layers. The bacteria in the deeper layers of biofilms lack oxygen, as reported by Wu et al. [38]. In one study, the oxygen concentration within the gel biofilm was measured using microelectrodes; a decrease in the oxygen concentration to < 3% of air saturation at a depth of 500 μm was reported in S. aureus biofilm [39]. Elsewhere [40], it was documented that the strains exhibited low oxidoreductase activities, including reduced expression levels of pyruvate dehydrogenase, ethanol dehydrogenase, glycerol-3-phosphate dehydrogenase, succinate dehydrogenase, and cytochromes bo and aa3 under anaerobic conditions.

[Figure 1. Biofilm formation starts with reversible adhesion of planktonic cells, followed by irreversible adhesion to the surface. The microbial cells continue to multiply and form micro-colonies, eventually developing into the mature biofilm. In the last stage, biofilm bacteria detach from the mature biofilm and disperse in the planktonic state [21][22][23].]

In another study, on a P. aeruginosa clinical isolate, bottom-up proteomic analysis showed that the levels of L-arginine and polyamine metabolism were higher in anoxic regions of biofilms than in oxic ones [41]. Besides oxygen limitations, biofilms also suffer from nutrient limitations that affect bacterial growth. Several nutrients exert multiple effects on the metabolism of the biofilm bacteria, with L-arginine being particularly prominent. Indeed, arginine and aspartic acid were reported to exert opposite effects on biofilm formation in P. putida KT2440 and Rup4959 [42]. Similarly, Mills et al. [43] observed that low concentrations of L-arginine induced an increase in the concentration of c-di-GMP in Salmonella typhimurium. Thus, L-arginine may be used to cope with biofilms via a nutrient-based approach. Anderl et al. [44] found that bacteria in stationary phase could be protected from antibiotics in the absence of carbon and nitrogen in the culture medium. The pH of the biofilm is a key factor determining bacterial metabolism and ranges from 5.6 in the deeper layers of biofilms to 7.0 in the superficial layers [45]. Low pH can directly reduce the activity of oxacillin [46]. A low nutrient level influences the metabolic state of biofilm bacteria. These slow-growing bacteria deep in biofilms are more resistant to antimicrobial agents than those growing at an intermediate rate [47]. For example, Zheng and Stewart [48] noted that rifampin penetrates the biofilm of S.
epidermidis but does not kill these bacteria. The reason for this failure to achieve bacterial killing was not inadequate penetration of rifampin through the biofilm but slow or absent bacterial growth. The results showed that the average growth rate of biofilm bacteria was 0.035 ± 0.004 h⁻¹, whereas the average growth rates of the stationary-phase and exponential-phase bacteria were 0.15 ± 0.06 h⁻¹ and 0.82 ± 0.34 h⁻¹, respectively. Other studies also showed that the slow growth of biofilm bacteria enables them to evade the effect of rifampicin [49,50]. These studies establish that metabolically inactive bacteria can escape the effects of conventional antibiotics.

Biofilm bacteria develop molecular mechanisms of antibiotic resistance. Biofilms provide an ideal environment for horizontal gene transfer and the development of multidrug resistance [51]. Conjugation is the most common mechanism by which the horizontal transfer of resistance genes occurs within biofilms. Indeed, a study reported that the transfer rate of the conjugative plasmid pGO1 in S. aureus biofilms was nearly 16,000 times higher than that observed in the planktonic state [52]. Moreover, Kouzel et al. [53] quantified the acquisition and spread of multidrug resistance and observed that the transfer efficiencies of ermC and aadA were higher during the early stages of N. gonorrhoeae biofilm formation. Resistance genes can also be transferred between different bacterial species in biofilms. For instance, an in vitro biofilm experiment demonstrated the transfer of the blaNDM-1 gene from Enterobacteriaceae into P. aeruginosa and A. baumannii via conjugation [54]. Transduction, which occurs via temperate phages, is another mechanism by which resistance genes can be exchanged between bacteria. ϕ731 is a Shiga-toxin-encoding phage carrying a chloramphenicol resistance gene, which was reported to be transferred to pathogenic E. coli in biofilms at both 20°C and 37°C. It was also found that S. epidermidis temperate phages could spread antimicrobial resistance genes via transduction [55]. Transformation in the biofilm is the third mechanism of horizontal gene transfer. For example, S. pneumoniae can acquire streptomycin or trimethoprim resistance genes by recombination via transformation of DNA from the environment. Two pneumococcal strains, S2^Tet and S4^Str, were incubated together, and the emergence of doubly resistant pneumococci was observed at a recombination frequency of 2.5 × 10⁻⁴ at 4 h post-inoculation [56]. In addition to gene transfer, gene mutations in biofilm bacteria can lead to resistance. The spontaneous mutation rates of several bacteria were reported to range from approximately 10⁻¹⁰ to 10⁻⁹ per nucleotide per generation [57]. Furthermore, in biofilms, the occurrence rates of mutations are higher than the spontaneous mutation rates; this kind of mutation is usually called high mutability or hypermutability. In biofilm bacteria, mutations can confer an evolutionary advantage, especially under the pressure of nonlethal-dose antibiotics and growth restrictions. The presence of bacteria with hypermutability can lead to antibiotic resistance. P.
aeruginosa is one of the bacteria most prone to producing biofilms. Thus, its hypermutation is considered to be the main driver of the development of antimicrobial resistance in this species in patients with chronic infections. One study reported a 105-fold increase in mutation frequency in biofilm bacteria compared with that in planktonic bacteria [58]. Furthermore, several enzymes that protect DNA from oxidative damage were found to be downregulated in P. aeruginosa biofilms. For instance, the major pseudomonal antioxidant catalase encoded by katA was found to be downregulated 7.7-fold. The downregulation of antioxidant enzymes leads to the accumulation of DNA damage, which accelerates the rate of mutagenic events [58]. Sultan et al. [59] observed the coexistence of qnrA, qnrB, qnrS, and gyrA gene mutations and biofilm production in almost 40% of quinolone-resistant uropathogenic E. coli. These findings indicate a relationship between gene mutations and biofilm production. It is worth mentioning that some researchers believe that efflux pumps are not associated with biofilm formation. For example, Türkel et al. [60] explored the relationship between efflux pump-associated gene expression levels and biofilm formation by collecting 100 clinical extended-spectrum β-lactamase-producing K. pneumoniae isolates and examining their biofilm-forming capabilities. The expression levels of AcrA, ketM, kdeA, kpnEF, and kexD, which are related to efflux pumps, were measured; interestingly, no correlations were observed between the expression levels of these efflux pump genes and biofilm formation. In another study, Knight et al. [61] reported that deletion of the major efflux pump gene adeJ (ΔadeJ) led to only a minor decrease in biofilm formation. Moreover, Li et al. [62] documented significant correlations between the efflux pump MexAB-OprM phenotype and biofilm formation in 110 carbapenem-resistant P. aeruginosa strains. Thus, additional studies are needed to determine the relationship between efflux pump genes and biofilm formation.

Applications of phage depolymerase for the treatment of bacterial biofilms. Antibiotic treatment often fails in clinical medicine owing to the impermeability of bacterial biofilms. Thus, other methods, such as the use of phage depolymerases, have been attempted to counter the resistance conferred by such biofilms. Phages are natural predators of bacteria, but not all phages encode depolymerases with exopolysaccharide-degrading activity. Depolymerases are encoded by some phages that infect encapsulated bacteria, such as E. coli K1 [63], E. coli K20 [64], K. pneumoniae K22 [65], K. pneumoniae K23 [66], K. pneumoniae K64 [67], A. baumannii K26 [68], and A. baumannii K92 [69]. Although the use of a single phage depolymerase can combat bacterial biofilms, the complete eradication of bacterial biofilms may require the application of multi-phage depolymerase cocktails or their combined use with antibiotics. An overall classification of the phage depolymerases reported in the treatment of bacterial biofilms and their corresponding bacterial genus targets is summarized in Figure 2.
The structure of phage depolymerases may differ among different depolymerase species; in general, however, phage depolymerases are trimers in the crystal state [70,71]. According to the crystal structures of KP32gp38 and the K1 CPS depolymerase, each monomer has three distinct domains: an N-terminal particle-binding domain, a central receptor-binding domain, and a C-terminal β-sandwich domain. The N-terminal domain consists of β-sheets and α-helices, and the three monomers fold into a barrel-like structure. The central receptor-binding domain mainly features a β-helix and contains at least one distinct carbohydrate-binding site. The C-terminal β-sandwich domain has a lectin-like fold formed by β-strands. The three parallel chains pack tightly together to form a highly stable, screw-like trimeric structure [70,71]. The protein sequences of phage depolymerases are compared in Figure 3. Sialidases (EC 3.2.1.18), also called neuraminidases, are a group of enzymes that hydrolyse the α-linkage of terminal sialic acids in glycans. Endosialidase E (endoE), derived from the E. coli K1 strain phage A192PP, selectively degrades sialic acid. However, endoE cannot kill pathogens or inhibit their growth, so its potential therapeutic efficacy remained unclear. Mushtaq et al. [72,78] conducted two studies to determine whether endoE could improve the outcome of E. coli K1 systemic infections in bacteraemia and meningitis rat models. The findings showed that intraperitoneal administration of 20 µg of endoE could protect 3-day-old rats from systemic infection. The enzyme hydrolysed α-2,8-linked sialic acid and removed the capsular polysaccharides from the E. coli surfaces. Sensitization to the bactericidal effect of the complement system also occurred, and the phagocytic activity of macrophages was enhanced when the capsular polysaccharides were removed. Hyaluronate (HA) lyases (EC 4.2.99.1) and hyaluronidases (EC 4.2.2.1) are classes of enzymes that digest hyaluronate. Baker et al. [80] purified the hylP-derived hyaluronidase from the S. pyogenes phage H4489A and reported that this phage HA lyase cleaved the N-acetylglucosaminidic bonds of hyaluronan and belonged to the category of hyaluronate lyases. The researchers further tested the substrate specificity and found that the phage HA lyase specifically cleaved hyaluronan, but not dermatan sulphate, keratan sulphate, chondroitin 4-sulphate, heparan sulphate, or heparin. This study may provide an alternative reagent for digesting the S. pyogenes hyaluronan capsule. Another hyaluronate lyase, HylP1 from the Streptococcus pyogenes prophage SF370.1, was shown to catalyse a β-elimination reaction of hyaluronan [79]. Although phage HA lyases and hyaluronidases play important roles in removing the streptococcal hyaluronan capsule, they can transform nonvirulent strains into virulent strains [96]. Alginate lyases (EC 4.2.2.-) are a class of enzymes that catalyse the degradation of alginic acids, including mannuronate lyases (EC 4.2.2.3) and guluronate lyases (EC 4.2.2.11) [97]. To date, only P. aeruginosa and A. vinelandii phages are known to encode alginate lyases [97]. Glonti et al. [81] reported that the P. aeruginosa phage PT-6 could rapidly reduce the viscosity of the alginic acid capsule by 62%-66% within 15 min. Furthermore, PT-6 alginase was purified from A.
vinelandii phage suspensions. The enzyme, corresponding to a 37 kDa band, was shown to degrade polysaccharides into a series of oligouronides. By analysing these oligouronides together with kinetic information, the authors concluded that PT-6 alginase exhibited polyuronide-degrading activity. The A. vinelandii phage is another example of a phage with alginate lyase activity [98]. Similarly, Liu et al. [83] reported that the phage IME200 exhibited polysaccharide depolymerase activity against A. baumannii. Based on analysis of the complete genome sequence, open reading frame 48 was predicted to encode a polysaccharide depolymerase with pectate lyase activity (Dpo48), which was subsequently expressed, purified, and characterized. Dpo48 demonstrated high efficacy over a wide range of temperatures (20°C-70°C) and pH values (5.0-9.0), and it exerted a synergistic effect with 50% serum on A. baumannii strains. Interestingly, another tail spike protein (TSP) with pectate lyase activity was derived from the P. mirabilis strain BB2000 phage (vB_PmiS_PM-CJR); it not only degraded biofilms in vitro and reduced the adherence of bacteria to plastic pegs but also improved the survival rates of Galleria mellonella larvae infected with the host cells [84]. Apart from these defined classes of phage depolymerases, several undefined classes of phage depolymerases can inhibit the formation of bacterial biofilms or degrade those already formed. Shahed-Al-Mahmud et al. [7] reported that the TSP from the A. baumannii phage φAB6 inhibited the colonization of host cells on the surface of Foley catheters. The researchers also evaluated the therapeutic effect of the TSP in zebrafish infected with A. baumannii 54149 and reported that the survival rate of zebrafish administered the TSP (80%) was significantly higher than that of zebrafish administered PBS (10%). Dp49 is another capsule depolymerase, identified from the A. baumannii phage vB_AbaM_IME285; depolymerase activity of Dp49 was found against 25 out of 49 A. baumannii clinical isolates [85]. In addition, Dp49 increased serum killing of the strains Ab387 and Ab220 in vitro, and its administration improved the survival rates of mice infected with Ab387 in vivo. Moreover, Oliveira et al. [86] identified a 604-amino-acid virion protein, gp52, with depolymerase activity; the tail spike gp52, purified from the P. stuartii phage vB_PstP_Stuart, made the host bacteria susceptible to serum killing by degrading the exopolysaccharide. In another study, the K. pneumoniae phage P560 depolymerase P560dep was shown to inhibit biofilm formation; intraperitoneal administration of a 50 μg dose of P560dep protected 90%-100% of mice infected with KL47 carbapenem-resistant K. pneumoniae from mortality. Although the depolymerase itself was not responsible for the bacterial killing, the authors considered it an attractive and promising agent to combat infectious diseases [8]. The mechanisms by which some phage depolymerases degrade polysaccharides have been illustrated in studies focusing on protein structures. A recent study by Squeglia et al. [70] revealed the crystal structure of the Klebsiella phage capsule depolymerase KP32gp38, which presented as a trimer in solution and in the crystal state. The monomer comprised four protein domains: a flexible N-terminal domain, a catalytic domain, a carbohydrate-binding domain, and a lectin-like-fold C-terminal domain. Phage depolymerases can also be used to treat Shiga toxin-producing E.
coli (STEC) infections. The STEC strain HB10 O91 phage PHB19 was isolated, and its whole genome was sequenced and annotated by a phage study group. The authors [99] identified a novel phage depolymerase, Dep6, from the PHB19 TSP. In vitro, Dep6 effectively removed STEC biofilms and enhanced the susceptibility of the host bacteria to serum killing. Furthermore, no toxic effects of Dep6 were observed in human red blood cells, lung carcinoma cells, or embryonic kidney cells in vitro and in vivo. In an STEC infection mouse model, pretreatment with Dep6 resulted in 100% survival, compared with the control group. Delayed treatment (3 h post infection) resulted in only 33% survival, whereas mice treated simultaneously with infection presented a survival rate of 83%. In addition, the levels of proinflammatory cytokines, such as tumour necrosis factor-alpha, gamma interferon, and IL-1β, were reduced 24 h post infection in the Dep6-treated mice; however, the levels of IL-6 were not reduced. Thus, Dep6 appeared to be safe for mice based on the in vivo and in vitro assays. P. aeruginosa LPS can also be degraded by phage depolymerases: LKA1gp49, from the P. aeruginosa phage LKA1, cleaves the B-band of LPS [100]. The K. pneumoniae K63 capsule can likewise be degraded by the phage depolymerase KP36gp50 [101]. These phage depolymerases possess exopolysaccharide-degrading activity and glycoside hydrolase domains. One possible mechanism by which a single phage depolymerase inhibits bacterial biofilms is hydrolysis of the glycosidic bonds of the polysaccharide. There are four types of glycosidic bonds: the α-C-, α-O-, β-C-, and β-O-glycosidic bonds, and each phage depolymerase may hydrolyse at least one type. However, phage depolymerases are specific, targeting only one or a few types of capsular polysaccharide. We suspect that another possible inhibitory mechanism may be the quenching of quorum sensing; further studies will explore whether phage depolymerases can disturb quorum sensing, and more phage depolymerases that hydrolyse the glycosidic bonds of polysaccharides are likely to be found.

Phage depolymerase cocktail application. The narrow specificity of phage depolymerases is one of the major obstacles to their use for removing bacterial biofilms and has considerably restricted their application. Several studies have attempted to broaden the specificity via protein engineering and by combining several different phage depolymerases to form cocktails (Table 2). Few phage depolymerases have been tested for their ability to degrade different types of bacterial capsules in vivo, and whether phage depolymerases can exhibit generalized therapeutic efficacy towards different kinds of capsules has remained unclear. For example, more than 80 different types of capsules have been identified based on the serological, biochemical, and genetic properties of E.
coli [102]. Lin et al. [63] used five phage depolymerases to combat three capsule types in mouse infection models. The phage depolymerases K1E, K1F, K1H, K5, and K30 (gp41 and gp42) were cloned and purified in vitro. In a mouse thigh model, the majority of the mice were rescued by treatment with all of the phage depolymerases except K1E, when the enzyme (20 µg per mouse) was injected within 0.5 h after the bacterial injection. Preliminary trials showed that the effective doses of K1F and K1H ranged between 2 and 5 µg per mouse; the effective dose of K5 ranged between 2 and 20 µg per mouse, whereas the effective dose of K30 gp41 was 20 µg per mouse. K30 gp42 did not appear to improve the survival rate, and K30 gp41 was found to be less effective than K1F, K1H, and K5. At a dose of 20 µg per mouse, the mixture of K30 gp41 and K30 gp42 rescued all three mice (3/3), whereas K30 gp41 rescued three out of eleven mice (3/11), and K30 gp42 did not rescue any of the mice (0/3). Although the survival outcome appeared to be somewhat better in the mixture group than in the two individually treated groups, the sample size was too small to draw any definitive conclusions. The potential acute toxicity of the five phage depolymerases (K1E, K1F, K1H, K5, and K30 gp41) was evaluated by injecting 100 µg of the different phage depolymerases into the right thigh and monitoring the survival, body weight gain, and behaviour of the mice. All of the mice appeared healthy, without any change in behaviour, over an observation period of 5 days. Statistical analysis indicated no significant differences in body weight gain between the phage depolymerase- and PBS-treated mouse groups. Nonetheless, further studies are required to confirm the safety and efficacy of applying phage depolymerase cocktails. Although a cocktail can broaden the antibiofilm spectrum, no studies have focused on combining different phage depolymerases in cocktails to combat bacterial capsules. Phage cocktail therapy has been used to treat M. abscessus infection [104], P. aeruginosa respiratory infection [105], and E. coli urinary tract infection in a mouse model [106]. In addition, some phage cocktails have been used in clinical trials, such as a P. aeruginosa phage cocktail to treat burn wound infection (phase 1/2 trial) [107], an oral E. coli phage cocktail to treat acute bacterial diarrhoea [108], and a topical C. acnes phage cocktail to treat skin acne (phase 1 trial) [109]. Additional studies are required to explore the effect and safety of phage depolymerase cocktails. Phage depolymerase cocktail application builds mainly on the mechanism and effect of single depolymerases. Owing to their high specificity, a single phage depolymerase can degrade only one or a few types of capsular polysaccharide, which limits the antibiofilm spectrum of single-depolymerase application. Phage depolymerase cocktails are one solution for enlarging the antibiofilm spectrum: two or more different phage depolymerases may hydrolyse different kinds of glycosidic bonds of the polysaccharide, and such cocktails can improve the antibiofilm effect. In addition, phage depolymerase cocktails can also reduce the resistance issue; although the probability of polysaccharide mutation is low, resistant mutants may arise when phage and host (pathogenic bacteria) have coevolved over time.
Combination therapy with phage depolymerase and antibiotics. Although a single phage depolymerase tends to be somewhat effective in controlling bacterial biofilms, the high specificity of depolymerases limits the complete removal of the biofilm owing to variations in the EPS. In some cases, phage depolymerases exert anti-polysaccharide activity against only a small set of bacterial strains. To address the limitation of the narrow host spectrum, combination therapy with phage depolymerases and antibiotics, an approach widely adopted in humans, may prove useful in combating infections caused by biofilm-forming pathogens. The degradation of the biofilm matrix by phage depolymerases can increase the penetration of antibiotics and thereby exert synergistic effects (Table 3). Ciprofloxacin is one of the most commonly used antibiotics in the clinic [113]. Some phage capsule depolymerases exhibit synergy with ciprofloxacin, whereas others demonstrate no synergistic effects with it. Verma et al. [110] discovered a depolymerase derived from the K. pneumoniae phage KPO1K2; the depolymerase and ciprofloxacin could reduce the bacterial numbers in mature 3-day (72 h) biofilms, and the antibiofilm effect was reported to be significantly greater in the combination treatment group than in the group treated with ciprofloxacin alone (P < 0.05). Interestingly, the timing of the application of depolymerase and ciprofloxacin produced different effects on the biofilm. When the two were applied concomitantly for 6 h, an insignificant reduction of 1.21 ± 0.62 logs was observed in the biofilm bacterial count; however, when the biofilm was treated with depolymerase for 60 min followed by 6 h of treatment with ciprofloxacin, a significant reduction of 3.72 ± 1.2 logs was observed. These findings indicate that combined treatment with phage depolymerase and ciprofloxacin is effective against mature biofilms. However, some studies have reported contradictory findings in vitro [112]. For example, Latka and Drulis-Kawa [111] identified a K. pneumoniae phage depolymerase called KP34p57 that had no impact on ciprofloxacin activity: the combination of KP34p57 and ciprofloxacin did not improve the antibiotic activity. Moreover, the phage KP36 capsule depolymerase DepoKP36 did not affect the susceptibility of biofilm-forming K. pneumoniae strains to antibiotics such as ciprofloxacin, oxytetracycline, and chloramphenicol [13]. Colistin is a last-resort antibiotic for clinical multidrug-resistant Gram-negative bacterial infections, and the combination of phage depolymerase and colistin may produce a synergistic effect against A. baumannii infections. Chen et al. [16] expressed a novel depolymerase, Dpo71, from the A. baumannii phage IME-AB2 in vitro, which inhibited biofilm formation and interfered with the preformed biofilm. In addition, Dpo71 enhanced the bactericidal effect of colistin. The addition of 5% serum did not influence the antibacterial effect of single-dose colistin (1 µg/mL, 1/2 MIC) against the host cells. However, the application of 10 µg/mL Dpo71 plus 1 µg/mL colistin, along with 5% serum, resulted in a significant reduction in the counts of A.
baumannii. This combination treatment resulted in a 7-log reduction in the bacterial count, boosting the antibacterial effect to nearly complete eradication; in comparison, treatment with 1 µg/mL colistin alone or 10 µg/mL Dpo71 alone produced 1-log and 0-log reductions, respectively. The authors also revealed the mechanisms underlying the superior action of the combination therapy compared with the individual treatments. Scanning electron microscopy showed that the bacterial capsule was stripped by Dpo71 and that the host cell surface no longer contained any pilus-shaped protrusions; in contrast, the surfaces of the bacterial cells in the untreated group presented both capsules and pilus-shaped protrusions. The removal of the capsule by the phage depolymerase Dpo71 significantly enhanced the outer-membrane-destabilizing ability of colistin, which promoted the interaction between the antibiotic and the bacteria and facilitated the entry of the drug into the bacterial host. In infection models, the combination of Dpo71 and colistin improved the survival rate of A. baumannii-infected Galleria mellonella. Although Dpo71 itself had no bactericidal efficacy, treatment with Dpo71 alone rescued 40% of the infected Galleria mellonella over a period of 72 h, whereas the combined treatment with Dpo71 and colistin rescued 80% of the infected worms during the same observation period. Approximately 70% of the infected worms died within 18 h, and the mortality rate increased to 90% after 48 h. These results indicate that phage depolymerases can act as adjuvants to some antibiotics to enhance antibacterial activity. Nevertheless, the combination of phage depolymerase and colistin does not always produce a synergistic effect. Luo et al. [112] identified another A. baumannii phage depolymerase, called TF, which did not exhibit any additive or synergistic effects with colistin on the host bacteria. Surprisingly, a temporary increase in the resistance of A. baumannii to colistin was observed after the EPS was peeled from the bacteria by the phage depolymerase TF; it was thus considered that the loss of EPS may have reduced colistin attachment, causing a temporary increase in resistance. Polymyxin has also been shown to exert synergistic effects with phage depolymerases. The K. pneumoniae phage SH-KP152226 depolymerase Dep42 increased the antibacterial activity of polymyxin when the two were used together [15]. The average bacterial count in the Dep42 + polymyxin treatment group was 5.260 ± 0.05 log, whereas those in the groups treated with Dep42 or polymyxin alone were 6.317 ± 0.01 and 6.013 ± 0.125 log, respectively, demonstrating a significant reduction of 0.743 ± 0.05 log compared with the polymyxin group. Collectively, the findings of these studies strongly indicate that the phage depolymerase Dep42 and polymyxin have a synergistic effect on multidrug-resistant K. pneumoniae. Some antibiotics are neither synergistic nor antagonistic in combination with phage depolymerases. The A. baumannii MK34 phage vB_AbaP_PMK34 capsule depolymerase DpoMK34 was neither synergistic nor antagonistic in combination with different antibiotics, such as colistin, imipenem, and amikacin [14]. Indeed, the addition of 500 µg/mL of DpoMK34 did not change the MIC values of the three tested antibiotics upon pretreatment or cotreatment of MK34 with DpoMK34. Thus, the authors concluded that DpoMK34 did not produce synergistic or antagonistic effects in combination with colistin, imipenem, and amikacin.
In summary, some antibiotics, such as ciprofloxacin, colistin, and polymyxin, have a synergistic effect with certain phage depolymerases, whereas others, such as imipenem and amikacin, show no synergistic effect with phage depolymerases. The most likely reason for the synergistic effects of phage depolymerases and some antibiotics is that the removal of CPS by phage depolymerases helps some antibiotics to access the bacterial surface. This changes the arrangement of bacteria in the biofilm to a dispersed state, which enhances the ability of antibiotics to penetrate into the biofilm. Eventually, the intensity of the antimicrobial attack is enhanced with the help of the phage depolymerase.

Phage depolymerase enhances the effects of the human and mouse immune systems

CPSs are important virulence factors that help bacteria to escape the human immune system. Surface polysaccharides provide shields against components of the host immune system, such as the complement system and phagocytosis. Loss or alteration of CPSs can make bacteria more susceptible to clearance by the human immune system. Table 4 lists some studies on how phage depolymerases enhance bacterial susceptibility to human serum and help kill bacteria.

Phage-derived capsule depolymerases can remove bacterial CPSs, and some phage depolymerases are reported to make the pathogens susceptible to serum killing. In recent years, A. baumannii has been considered an important nosocomial pathogen in intensive care units [116]. Most clinical isolates are resistant to all available antibiotics, and the current treatment methods are becoming less effective. One reason for this resistance is that the CPSs protect the bacteria from antibiotics via the polysaccharide structure and aid in their evasion of the host immune system. In other words, CPSs are a major virulence factor. Oliveira et al. [87] purified a K2 capsule-specific depolymerase from the tail spike C-terminus of the A. baumannii phage vB_AbaP_B3 and assessed its activity in vivo. It was found to protect caterpillar larvae and mice against bacterial infections. In a mouse sepsis model, the intraperitoneal injection of K2 depolymerase (dose, 50 μg) protected 60% of the mice from fatal infection. Additionally, significant reductions in the expression levels of tumour necrosis factor-alpha and interleukin-6 were observed. K2 depolymerase enhanced the ability of human serum to kill the host cells and reduced the number of bacteria to < 10 colony-forming units (CFU)/mL. The NIPH 2061 strain was not susceptible to serum killing following the inactivation of K2 depolymerase. It was thus concluded that the human complement system was activated to control the infection via the action of K2 depolymerase, which degraded the CPS and, consequently, affected the bacterial virulence.

DpoMK34 is another A. baumannii phage depolymerase, which was reported to increase the ability of the serum to kill A.
baumannii MK34 in a concentration-dependent manner [14]. Cells treated with 100 µg/mL DpoMK34 presented with a 1.8 ± 0.34 log reduction in 25% (v/v) human serum and a 5.05 log unit reduction in both 50% (v/v) and 75% (v/v) human sera when compared with PBS-treated cells. Moreover, heat-inactivated DpoMK34 did not cause any reduction in bacterial cell numbers, even in 75% (v/v) human serum. Similar findings were reported in another study, wherein a 5 log reduction in Dpo48-treated bacteria was observed following treatment with 50% (v/v) serum, while inactivation of the complement in the serum abolished the serum-dependent reduction in the bacterial count [83]. Oliveira et al. [88] identified another A. baumannii phage depolymerase called B9gp69, which rendered K45 strains susceptible to serum killing in vitro. The number of depolymerase-pretreated bacteria incubated with serum was reduced to below the detection limit (< 10 CFU/mL). Notably, B9gp69 digested the capsule polysaccharides of both K30 and K45 strains. Furthermore, the optimal activity of the enzyme could be maintained at temperatures ranging from 20°C to 80°C and pH values ranging from 5 to 9. These are clear indications of the effect of phage depolymerases on bacterial susceptibility to human serum killing. The use of other phage depolymerases, such as Dep-ORF8 from P. multocida phage PHB02, which specifically degrades the serogroup A capsule, has also been reported. The purified Dep-ORF8 significantly increased the survival of mice infected with P. multocida. Additionally, it did not increase the eosinophil and basophil counts or cause any other pathological changes when compared with the control group. Human serum, mouse serum, and mouse whole blood alone exhibited minor bactericidal effects of 1.2-1.7 log CFU reductions in P. multocida strain HB03 cell counts. Treatment with Dep-ORF8 + serum further reduced the cell counts (by 3.5-4.5 log CFU). However, no significant difference in viable cell counts was observed between the Dep-ORF8 + whole blood and the Dep-ORF8 + serum treatment groups. Heat inactivation of the serum resulted in an increase in the survival counts of the bacterial cells to levels equal to those in the PBS control group [114].

K. pneumoniae phage depolymerases can sensitize cells to serum complement-mediated killing. KP32gp37 and KP32gp38, obtained from Klebsiella phage KP32, increased the sensitivity of serum-resistant cells to complement-mediated killing of the host bacteria. Decapsulation of the strains by the depolymerases resulted in the exposure of the ligands for phagocytic cell attachment [103]. Thus, phage depolymerases could combat the resistance of biofilm bacteria and increase the phagocytic activity of the cells, thereby killing the biofilm bacteria. KP32gp37 increased the phagocytic cell uptake of strain 271 approximately two-fold. Similarly, the geometric mean fluorescence intensity (gMFI) value of the KP32gp38-treated strain 358 (146.7 ± 13.6 gMFI) was higher than that of the untreated bacteria (83.9 ± 5.7 gMFI). In the case of strain 968, however, the gMFI value in the KP32gp38-treated group (102.3 ± 6.8) was lower than that in the untreated group (293.6 ± 24.4). Evaluation of the time-dependent bactericidal effect of depolymerases + serum against K.
pneumoniae strains 271, 45, and 358 revealed that 0.1 µg/mL of KP32gp37 could lead to a 1.6 log decrease in the bacterial number after 3 h of exposure and a 4 log decrease after 7 h of exposure in the case of strain 271. For the K21-type strains, including strains 45 and 358, 100 µg/mL of KP32gp38 caused only a 1.7-4.6 log reduction in the bacterial number compared with that in the initial inoculum after 3 h of exposure. After 7 h of exposure, only minor further reductions were observed in the depolymerase-treated Klebsiella strains 358 and 45 [103]. Similarly, Dep6, derived from the tail spike protein of the STEC phage PHB19, enhanced the serum sensitivity of the host strain. Dep6 at 30 µg/mL resulted in a 4.2 log reduction in the number of host bacteria, whereas the count in the PBS-treated group remained at 8 × 10^8 CFU/mL [99].

The P. aeruginosa phage depolymerase DP is another example of a depolymerase enhancing serum-mediated bactericidal activity in vitro [115]. The authors isolated a lytic P. aeruginosa phage named IME180 from the sewage of a hospital. Genomic sequence analysis of the IME180 phage showed that DP has two catalytic regions, belonging to the pectate lyase_3 and glycosyl hydrolase_28 superfamilies. The phage depolymerase DP can degrade the exopolysaccharide of P. aeruginosa and enhance serum bactericidal activity in vitro. In the bactericidal assay, the bacterial count was reduced by two orders of magnitude in the serum + DP group, whereas neither serum nor DP applied individually produced any obvious decrease. In other words, the P. aeruginosa phage depolymerase DP has the potential to be an antimicrobial agent targeting P. aeruginosa.

The human immune system cannot recognize pathogens when bacteria generate CPSs and encase themselves in them; CPSs help bacteria to escape the human immune system, so that the bacteria fail to induce a human immune response. When phage depolymerases strip the protective polysaccharide layers from the pathogen cells, the bacteria are exposed to the immune system. The decapsulated bacteria expose the lipoteichoic acids of the Gram-positive bacterial cell wall and the lipopolysaccharide and outer membrane proteins of the Gram-negative bacterial outer membrane. These structural components of the bacterial cell envelope induce and activate the immune response, especially the complement system and macrophages.

Conclusions

The resistance of bacterial biofilms to antibiotics and the human immune system has prompted scientists to search for alternative methods to counter antibiotic-resistant strains. Studies have characterized several phage depolymerases that act against biofilm-forming bacteria, such as K. pneumoniae, E. coli, A. baumannii, P. mirabilis, P. aeruginosa, S. pyogenes, B. subtilis, and P.
stuartii. As natural antimicrobial agents, phage depolymerases clearly have a potential role in preventing and treating bacterial biofilm-associated infections. Although phage depolymerases have been demonstrated to possess potential antibiofilm activity in vitro, no clinical trials on this approach have been conducted thus far. There is, therefore, a need for detailed exploration of the use of phage depolymerases as potential therapeutic drugs. Fortunately, some phage endolysins have entered clinical trials. Considering the current biosafety standards and regulations, the clinical application of phage depolymerases is much easier than that of the phage itself. Nonetheless, further studies confirming the therapeutic use of phage depolymerases in humans are needed. In addition, the use of phage depolymerases in combination with innovative treatments, such as antibacterial peptides, nanoparticles, silver, copper, and zinc, should be explored in the future. Because phage depolymerases are enzymes, they can be combined with nanomaterials to form depolymerase-incorporated nanomaterials, or liposome-coupled phage depolymerases can be prepared. The potential therapeutic effects of such innovative formulations should then be evaluated in future studies.

Disclosure statement

No potential conflict of interest was reported by the authors.

Figure 1. Schematic overview of bacterial biofilm formation and development stages. Biofilm formation starts with reversible adhesion of planktonic cells, followed by irreversible adhesion to the surface. The microbial cells continue to multiply, form micro-colonies, and eventually develop into the mature biofilm. In the last stage, biofilm bacteria detach from the mature biofilm and disperse in a planktonic state [21-23].

Figure 2. Wheel diagram summarizing the classification of phage depolymerases reported in the treatment of bacterial biofilm and their corresponding bacterial genus targets.

Table 1. Comparison of depolymerases, endolysins, and holins derived from phages in the treatment of bacterial infections.

Table 2. Phage depolymerase therapy in the treatment of bacterial biofilm.

Table 3. Combination therapy of phage depolymerase and antibiotics in the treatment of bacterial biofilm.

Table 4. Phage depolymerases enhance the effect of the human immune system in the treatment of bacterial biofilm.
2023-10-25T06:17:33.436Z
2023-10-23T00:00:00.000
{ "year": 2023, "sha1": "489e94a45fa3cba85c05c78cc46f343c9f1885c5", "oa_license": "CCBY", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/21505594.2023.2273567?needAccess=true", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "dc3bdb95c017e71fcaa4fb0caf10d292d5f7fd56", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
235227986
pes2o/s2orc
v3-fos-license
High Incidence of Thrombotic Thrombocytopenic Purpura Exacerbation Rate Among Patients With Morbid Obesity and Drug Abuse

This study aims to identify the baseline patient characteristics, clinical presentation, and response to treatment of 11 patients who were diagnosed with thrombotic thrombocytopenic purpura (TTP) between 2014 and 2020 at Brookdale University Hospital Medical Center, Brooklyn, NY. Laboratory and clinical parameters were recorded for 29 patients who received plasmapheresis in this time period. Of the 29 patients, 11 had confirmed TTP, one of whom was diagnosed with hereditary TTP. Young, black, and female patients made up the majority of our patient population. A high prevalence of obesity and drug abuse was seen among our patients. Five out of 11 were obese, and four of them were morbidly obese; six out of 11 patients had a positive drug screen, including cannabinoids (3), opiates (2), benzodiazepines (1), PCP (1), and methadone (1). Four patients with a positive drug screen had acute kidney injury (AKI), and plasmapheresis helped them recover their kidney function. We observed a high incidence of AKI and high TTP exacerbation rates in patients who were drug abusers and those who were morbidly obese. There is a paucity of data on the relationship of TTP with obesity or drug abuse, and this needs further study.

Introduction

Thrombotic thrombocytopenic purpura (TTP) is a rare disease with high mortality, affecting two people per million per year. It is characterized by thrombotic microangiopathy secondary to thrombocyte aggregation caused by a deficiency of a disintegrin and metalloprotease with thrombospondin type 1 repeats, member 13 (ADAMTS13) [1,2]. ADAMTS13 deficiency can be acquired or congenital. Acquired TTP is primarily caused by autoantibodies leading to the accumulation of large von Willebrand factor (vWF) multimers, causing platelet aggregation and microangiopathic hemolytic anemia (MAHA). TTP classically presents as a pentad: thrombocytopenia, MAHA, kidney injury, neurological involvement, and fever. Even though the "classic pentad" is described to help with the diagnosis of the disease, it is neither sensitive nor specific, as our study shows. Therefore, the lack of the full pentad should not be used to exclude the diagnosis. Replenishing ADAMTS13 and removing the associated antibody by plasma exchange is now the mainstay treatment for TTP patients and can achieve remission in approximately 80% of cases [3].

Materials And Methods

This study was approved by the Research and Clinical Projects Committee (IRB) of Brookdale University Hospital and Medical Center (BUHMC) on April 28, 2020. A chart review was performed on 29 patients who received plasmapheresis at Brookdale University Hospital and Medical Center in Brooklyn, New York, between 2014 and 2020. Clinical data of our cases were collected from the HYPERSPACE® Epic (August 2019) electronic medical record system by reviewing patient demographics, medical history, home medications, presenting symptoms, comorbidities, laboratory parameters, ADAMTS13 levels, and treatments received. Clinical features included thrombocytopenia with microangiopathic hemolytic anemia, the presence of acute kidney injury, fever, and neurological symptoms.
Laboratory parameters included initial hemoglobin, lowest hemoglobin, reticulocyte count, initial platelet count, lowest platelet count, lactate, lactate dehydrogenase (LDH), serum creatinine on admission, baseline creatinine, alanine aminotransferase (ALT), aspartate transaminase (AST), initial and peak troponin, presence of antinuclear antibody (ANA), ADAMTS13 level, and ADAMTS13 inhibitor level.

Measurements of ADAMTS13 activity and antibodies to ADAMTS13

Blood samples to measure ADAMTS13 activity and inhibitor levels were collected from the patients before plasma exchange. ADAMTS13 activity was measured by Quest Diagnostics Nichols Institute, Chantilly, VA, using a chromogenic enzyme-linked immunosorbent assay (ELISA). Inhibitor testing was performed when ADAMTS13 activity levels were equal to or less than 30%. The patient's sample is heat-inactivated and mixed with an equal volume of pooled normal plasma (PNP) before testing. After testing, the residual activity is calculated, and the inhibitor concentration is expressed in Bethesda equivalent units (BEU).

Assessment of therapeutic effects

Complete remission was defined as full clinical recovery and recovery of a normal platelet count (> 150 × 10^9/L) for at least two days. Exacerbation was defined as recurrent disease within 30 days after reaching a treatment response. Relapse was defined as recurrent disease 30 days or longer after reaching a treatment response.

Statistical analysis

Mortality, exacerbation, and relapse rates were described using descriptive statistics. We used the median and interquartile range (IQR) as the data were not normally distributed. Categorical variables were presented as frequencies and percentages. The means of continuous variables were compared using the Mann-Whitney U test, while for categorical variables Fisher's exact test was performed owing to the small sample size.
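To illustrate the inhibitor quantification and the two hypothesis tests described above, here is a minimal Python sketch. It assumes the common Bethesda convention that one unit of inhibitor leaves 50% residual activity in the 1:1 patient/normal-plasma mix; the function name and the example numbers are hypothetical and are not taken from this study or the reference laboratory's protocol.

```python
# Minimal sketch, assuming the standard Bethesda convention (1 BEU leaves
# 50% residual activity in the heat-inactivated patient/PNP mix); names
# and example values are illustrative, not the laboratory's procedure.
import math
from scipy.stats import mannwhitneyu, fisher_exact

def bethesda_equivalent_units(residual_activity_pct: float) -> float:
    """Inhibitor titre implied by the residual activity of the 1:1 mix.

    50% residual -> 1 BEU, 25% -> 2 BEU (log-linear assumption).
    """
    if not 0 < residual_activity_pct <= 100:
        raise ValueError("residual activity must be in (0, 100] percent")
    return math.log2(100.0 / residual_activity_pct)

print(bethesda_equivalent_units(25.0))  # -> 2.0 BEU

# Continuous variable compared between two groups (hypothetical values):
hb_group_a = [6.1, 7.4, 5.9, 6.8]
hb_group_b = [8.2, 7.9, 9.1, 8.5, 7.7]
print(mannwhitneyu(hb_group_a, hb_group_b))

# Categorical variable on a 2x2 table via Fisher's exact test
# (rows: exacerbation yes/no; columns: drug screen positive/negative):
print(fisher_exact([[4, 3], [1, 3]]))
```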
Results

Eleven patients were diagnosed with TTP at Brookdale University Hospital and Medical Center (BUHMC) between 2014 and 2020. One of the 11 patients had a hereditary form of TTP, whereas the rest had acquired TTP. Nine of the 10 patients with acquired TTP were newly diagnosed, and one had relapsed TTP. The median age was 44 years, and there was a slight female predominance (64%). The majority of our patients were African Americans (72%), reflective of our general patient population at BUHMC. Five of 11 patients were obese (BMI > 30 kg/m2), and four of them were morbidly obese (BMI > 40 kg/m2). Three patients had autoimmune disorders at the time of diagnosis (two patients with Hashimoto's thyroiditis and one with antiphospholipid syndrome). Six patients had positive drug screens, which included cannabinoids (3), opiates (2), benzodiazepines (1), phencyclidine (PCP, 1), and methadone (1). Only one patient had a history of cancer (cervical cancer in remission). Five of the 11 patients had a history of prior surgeries. The main characteristics of the patients are summarized in Table 1.

The presenting symptoms on admission were gastrointestinal (63%), neurological (54%), and cardiopulmonary (18%), all summarized in Table 2. The classical pentad of TTP was present in only 9% of our patients. A combination of microangiopathic hemolytic anemia and thrombocytopenia was observed in 100% of the patients. A total of 63% of patients had acute kidney injury, 54.5% had neurological symptoms, and only 9% had fever on admission. Subdural hematoma and acute ischemic stroke were diagnosed in two of the patients who presented with neurological symptoms (Table 2). AKI was observed in four out of six patients with a positive drug screen, and one of those was morbidly obese. All patients with drug abuse and AKI had a complete recovery of their renal function after TTP treatment, except one patient who had only partial improvement due to chronic kidney disease at baseline. All patients had low ADAMTS13 activity of < 10%. Ten of the 11 patients had high ADAMTS13 inhibitor levels, and one had an undetectable inhibitor level, as that patient had hereditary TTP. The average PLASMIC score was calculated as six [4]. Mean admission hemoglobin was 9.17 g/dl (standard deviation {SD} 2.57) and the lowest hemoglobin was 6.46 g/dl (SD

Discussion

TTP can be divided into two categories: (1) the more commonly seen acquired or autoimmune form (94.5% of cases) and (2) the very rare hereditary form (4.5% of cases) caused by ADAMTS13 bi-allelic gene mutations, also known as Upshaw-Schulman syndrome [5,6]. Severely decreased activity of ADAMTS13, which is primarily synthesized by stellate cells in the liver, is the cause of TTP [7,8]. ADAMTS13 activity levels of less than 10%, along with clinical features of the disease and laboratory findings, are used to make a definitive diagnosis. The main cause of ADAMTS13 deficiency in TTP is acquired autoantibodies, but there may be additional causes that lower ADAMTS13 levels, such as sepsis, cardiac surgery, pancreatitis, and liver disease [9-13]. An inhibitory antibody is detected in the majority of cases [7,14]. TTP typically affects adult populations in their fourth decade of life and is reported to be more common in young females, African Americans, and patients with a history of autoimmune disease [15]. Our patient population had 63% females, 72% African Americans, and 27% with previously diagnosed autoimmune conditions. The median age in our patient population was 44 years. The proportion of African Americans in our patient population (73%) was higher than in the Oklahoma (36%) and Harvard (20%) registries [2,16]. This can be explained by the fact that most of the patients in the community we serve are African Americans. Although active cancer and cancer-related chemotherapy have been associated with TTP, only one of our patients had a history of cervical cancer, which was in remission. A comparison of the Harvard and Oklahoma registries with our study is summarized in Table 5.

TTP: thrombotic thrombocytopenic purpura, ADAMTS13: a disintegrin and metalloproteinase with a thrombospondin type 1 motif, member 13, PEX: plasma exchange

TTP causes widespread thrombosis affecting different organ systems, some of which are more commonly affected than others [17]. GI symptoms tend to be commonly seen in patients with TTP, ranging from 35% to 40% [18]. The range of GI symptoms in our population was very broad, from the more commonly seen abdominal pain, nausea, vomiting, and diarrhea to less commonly seen symptoms like dysphagia or hematemesis. GI symptoms in our study population were observed in 64% of the patients, which is slightly higher than in previously reported studies. The nervous system has been reported to be the most commonly affected visceral organ at presentation, involved in 67% of cases [2]. The symptoms might be minor, such as headache or transient altered mental status (26%), or severe, such as focal neurological deficits, seizures, or coma (41%) [2].
In our study, 54% of the patients presented with neurological symptoms ranging from mild to severe (Table 2). Moreover, two patients who presented with severe neurological symptoms, such as syncope, focal neurological deficits, and seizures, were diagnosed with a subdural hematoma and an acute ischemic stroke. The incidence of neurological manifestations was noted to be 39.7% in the Harvard registry and 67% in the Oklahoma registry [2,16].

Renal dysfunction is frequently seen in patients with TTP and is more commonly reported in older patients [2,19]. Acute kidney injury was seen in 63% of our patients, which is slightly higher than in the Oklahoma registry (52.5%) [2]. Our patient population had a higher incidence of AKI than of neurological symptoms, in contrast with other registries, which reported a higher incidence of neurological manifestations than of AKI. A detailed analysis of the subgroup of patients with AKI revealed that 57% of them had a positive drug screen and 28% of them had morbid obesity. We observed that renal function in most drug abusers (five of six) completely reverted to normal within one to two weeks after treatment of TTP. Even though baseline renal function values were not available for many of these patients, the observed improvement of renal function with TTP treatment suggests that substance abuse did not play a confounding role in the renal impairment.

Fever was noted to be common in the Harvard registry, occurring in 35% of their patients [16]. Despite fever being one of the criteria in the classic pentad of TTP, it was rarely seen in our patient population (one of 11 patients). Even though cardiopulmonary symptoms were relatively uncommon, we had two patients (18%) who presented with chest pain and shortness of breath. In addition, we found a significantly elevated mean troponin I of 0.282 ng/ml on admission and a mean peak troponin I of 2.43 ng/ml (Table 3). An elevated troponin level is associated with a worse prognosis, as reported by Alwan F. et al. in 68% of their patients [20]. The mean hemoglobin, hematocrit, and platelet count in our study group were similar to those reported in other registries, except for the mean creatinine and LDH levels, which were higher in our study population, as illustrated in Table 5.

Complete remission of TTP was observed in 80% of our patients, suggesting a good response to initial therapy (days to remission: 10.5, SD 3.93). However, we found a very high rate of exacerbations (70%) compared to previous TTP studies, including the Oklahoma registry, which reported an exacerbation rate of 55% [2,21-26]. The relapse rate was 20% in our patient population, compared with 44% in the Oklahoma registry [2]. This could be secondary to the limited number of patients who presented to our center.

Our study group included 36% morbidly obese (BMI > 40 kg/m2) patients, who had a significantly lower initial platelet count, higher AST, and higher initial troponin levels compared to the non-morbidly obese patients (Table 6). Our patients with obesity had high exacerbation (75%) and relapse (50%) rates. There is a paucity of data on the relationship of TTP with obesity [27,28]. Therefore, this association needs to be further evaluated. The other subgroup of patients with a high exacerbation rate were patients with a positive drug screen. Four of the five patients (80%) with a history of drug abuse had TTP exacerbations, in addition to a high incidence of AKI (66%), as previously mentioned (Table 7).
It is possible, but difficult to prove, that these drugs induce the autoimmune reaction of TTP or cause the increased exacerbation rate in these patients. Further studies are needed to clarify the relation between drug use and TTP presentation and exacerbation. This may prove difficult owing to the rarity of the disease.

TTP is one of the few hematological emergencies that need to be addressed immediately with accurate diagnosis and initiation of therapy. Therapeutic plasma exchange (TPE), glucocorticoids, and rituximab are the most commonly used efficacious treatments available for TTP. Additional treatments and strategies are being investigated to more effectively treat TTP and prevent exacerbation or relapse. Caplacizumab, a humanized single-domain antibody fragment (nanobody), showed promising results in the phase 2 (TITAN) and phase 3 (HERCULES) studies and was approved by the Food and Drug Administration (FDA) in February 2019 for the initial treatment of TTP along with plasma exchange and immunosuppressive therapy [29]. Its mechanism of action is the inhibition of the interaction between uncleaved vWF multimers and platelets. Additionally, recombinant ADAMTS13 now offers a treatment for people with congenital TTP and may also offer improvement in autoimmune TTP. Interestingly, N-acetylcysteine was found to disrupt the disulfide bonds in vWF [30]. These new mechanisms of treatment offer great hope to patients diagnosed with TTP, yet further studies are needed to ascertain their efficacy and safety.

Conclusions

We observed a high incidence of acute kidney injury and high TTP exacerbation rates among patients with morbid obesity and a history of substance abuse. TTP is a rare disease, and there is a paucity of data on its relationship with obesity and drug abuse. This relationship should be further investigated in retrospective and prospective studies with larger patient populations. In addition, patients diagnosed with TTP should be counseled for weight loss if they have obesity (BMI > 30 kg/m2) and should also be screened and counseled for substance abuse. These interventions might decrease the chance of TTP exacerbations.

Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
2021-05-29T05:18:31.333Z
2021-04-01T00:00:00.000
{ "year": 2021, "sha1": "bca4d5a4f1e307cd5a81c12f388d05eb3f3ce94e", "oa_license": "CCBY", "oa_url": "https://www.cureus.com/articles/56644-high-incidence-of-thrombotic-thrombocytopenic-purpura-exacerbation-rate-among-patients-with-morbid-obesity-and-drug-abuse.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "bca4d5a4f1e307cd5a81c12f388d05eb3f3ce94e", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
268554844
pes2o/s2orc
v3-fos-license
Insurance barriers and inequalities in health care access: evidence from dual practice

Background

We investigate access disparities in pharmaceutical care among German patients with type 2 diabetes, focusing on differences between public and private health insurance schemes. The primary objectives include investigating whether patients with private health insurance experience enhanced access to antidiabetic care and analyzing whether the treatment received by public and private patients is influenced by the practice composition, particularly the proportion of private patients.

Methods

We estimate fixed effect regression models to isolate the effect of insurance schemes on treatment choices. We utilize data from a prescriber panel comprising 681 physicians collectively serving 68,362 patients undergoing antidiabetic treatments.

Results

The analysis reveals a significant effect of the patient's insurance status on access to antidiabetic care. Patients covered by private insurance show a 10-percentage-point higher likelihood of receiving less complex treatments compared to those with public insurance. Furthermore, the composition of physicians' practices plays a crucial role in determining the likelihood of patients receiving less complex treatments. Notably, the most pronounced disparities in access are observed in practices mirroring the regional average composition.

Conclusions

Our findings underscore strategic physician navigation across diverse health insurance schemes in ambulatory care settings, impacting patient access to innovative treatments.

Supplementary Information

The online version contains supplementary material available at 10.1186/s13561-024-00500-y.

Introduction

Use of the latest technology is linked to productivity growth and improvement in patient health. At the stage when health technologies are established, persistent adoption of innovation is driven more by physicians and providers than by knowledge diffusion [1,2]. Such unwarranted variation in physician treatment styles may be driven by differences in health insurance schemes and may vary with the composition of patients per health insurance scheme in a practice. This can result in inequalities in access to cost-effective innovation and can have severe consequences for patient health and the efficiency of health care systems. We analyze the impact of health insurance schemes on choices that determine access to novel treatments with less complex medication regimens when physicians work in dual practice. Treatment choices can have important consequences for patient health and the efficiency of health care delivery. In health care settings with mutually exclusive insurance schemes, where physicians can freely choose between treatments that may vary in quality and cost-effectiveness, and patients can freely choose physicians, policymakers would like to know whether mechanisms are needed to regulate treatment styles if access to more novel but effective treatments is not equal across insurance schemes. Miraldo et al.
[1] suggest that, to understand the role of the physician in technology adoption, any investigation of differences in access beyond the physician's office door needs to account for differences in physicians' treatment styles and patient-level factors. Access to prescription medicines is determined by the physician's decision and can only be obtained by seeing a physician who diagnoses medical conditions and subsequently decides on treatment. Our approach complements previous studies that have investigated the effects of insurance schemes on access measured by waiting times to see a provider, by studying the effects on obtaining prescription medicines by health insurance status and the within-practice variation in prescriptions [3-5].

Delaying access to effective prescription medicines because of a patient's health insurance scheme may lead to the loss of potential health benefits, ultimately lower life expectancy, as well as preventable costs arising from non-adherence [6,7]. This is of particular interest for chronic conditions with a progressive nature like type 2 diabetes (T2D), as most patients at some point need more than one active ingredient to ensure long-term glycemic control. Patients have significantly higher adherence to medication regimens with few tablets (i.e., monotherapy) [8,9], and patients changing from a two-pill to a single-pill regimen show higher adherence (71% vs. 87%) [10]. Our study expands the literature by capturing unwarranted treatment variation when choosing between technologies of different complexity within the same practice.

This study focuses on physicians working in dual practice, offering services to two types of mutually exclusive health insurance schemes in Germany, public and private. Importantly, there are no direct incentives for physicians to prescribe differently for patients with public and private health insurance. We analyze physicians' prescription decisions between treatments for patients with T2D receiving oral antidiabetic therapy. First, we study whether patients with private health insurance are more likely to receive a less complex single-pill treatment compared to a two-pill treatment. Second, we analyze whether the treatment of publicly and privately insured patients with T2D depends on practice composition, hence the share of private patients in the practice.

We find evidence of unequal access to prescription medicines depending on the patient's insurance scheme. Private patients are about 10 percentage points more likely to receive the less complex single-pill treatment compared to a two-pill treatment. We document significant inequalities in treatment choices by practice composition. The beneficial effect of private insurance status on access to the single-pill is, at about 12 percentage points, larger in a practice with a high relative share of private patients. The inequalities are highest when the share of private patients is most similar to the regional average of the respective federal state.
Private and public health insurance in Germany

We compare differences in access between the two mutually exclusive health insurance schemes of non-profit public health insurance and private health insurance [11]. In Germany, health insurance is compulsory for all citizens. 87.7% of the population are insured in a public insurance and about 10.5% in a private insurance [12]. The eligibility criteria and financing of the insurance schemes differ. Employees whose income does not exceed a compulsory insurance threshold (€5,212.50 per month in 2020) obtain insurance through one of the 103 sickness funds of the public insurance scheme [13,14]. If the monthly income exceeds the threshold, citizens can voluntarily opt into one of the plans provided by the 44 companies that offer private insurance [15]. Civil servants and self-employed citizens are insured with a private insurance, even below the compulsory income threshold.

With respect to financing, public insurance is based on a solidarity-based health system. Contributions are calculated based on income, with family members of insured individuals and the retired having access to the same benefits [11]. This contrasts with private insurance, where premiums are determined according to health status, age, and individual risk assessment when entering the system [4,13,16]. Switching from public insurance to private insurance is possible given the eligibility criteria [17].

The provision of ambulatory care services in both health insurance schemes is characterized by fee-for-service reimbursement. Physicians are largely autonomous when making prescription decisions compared to health systems with managed care or a national health service, which often restrict prescription choice [18]. Regarding prescription medicines, the scope covered by public and private insurance is almost identical. All active ingredients that have received marketing authorization can generally be prescribed in public insurance unless listed on a rather narrow negative list [19]. What differs are the incentives to prescribe novel treatments.

Access to technology per insurance scheme

We study two channels through which health insurance schemes influence treatment decisions in dual practice settings: financial incentives and practice composition in dual practice. Both channels are indirect, as upfront there are no restrictions on prescribing medicines at the level of a single prescription in either the public or the private health insurance scheme, as long as the medicines are not listed on a negative list. Dual practice refers to a physician treating patients of different health insurance schemes simultaneously in the same practice [20]. In Germany, about 97% of ambulatory physicians work in dual practice [21], and working in this setting has been associated with prioritization of patients due to financial interests, differential waiting times, patient selection according to profitability, and over- or under-provision of healthcare, ultimately impacting health outcomes [5,20,22-25].

Financial incentives to unequal access per insurance scheme

We examine financial incentives as a channel for choosing a more novel and potentially costlier treatment for one group of patients per health insurance scheme, but not the other. Such financial incentives originate from differing physician compensation depending on the patient types visiting the practice and from the management of prescription medicine spending by insurance scheme.
The major difference between the public and private insurance schemes is that public insurers monitor prescription expenditures using cost-control measures, which may create different but indirect financial incentives. These measures include a quarterly budget and prescription quotas based on the physician's specialization and the number of patients in the previous year. The quarterly budget and prescription quotas are monitored at the practice level [21,26,27]. When the physician reaches the quarterly budget, additional services and prescriptions are only reimbursed partially [21]. If physicians do not comply with the control measures, they face the risk of a recourse claim for expenditures beyond the budget. Prescribing expensive medicines will lead a physician to overspend her budget more quickly. To stay within the budget, and thus to optimize her compensation, the physician is incentivized to limit the ambulatory services provided and the prescription of expensive medicines if cheaper alternatives are available. For private patients, no such cost-control measures exist. Private insurance is based on fee-for-service; patients pay for their services up-front by entering into a contract with the physician and paying the physician directly after receiving an invoice. The set of reimbursable services is listed with a uniform baseline price across different private plans, though this price can be multiplied by leverage factors depending on the complexity of treatment and the time spent with the patient. Ultimately, there is no cap on service or prescription volume in private insurance, and comparing similar services across these two reimbursement schemes, physicians' charges were, on average, 2.28 times higher for privately insured than for publicly insured patients [21,28]. The difference in reimbursement rates is relevant to the compensation of an ambulatory care physician, of which about 22% stems from the roughly 10% of privately insured or self-paying patients in the practice [29].

Given the financial incentives and the institutional differences regarding cost-control measures that apply to public but not to private insurance in Germany, we empirically test the hypothesis that patients with private insurance are more likely to obtain access to more novel treatments compared to patients with public insurance. If we do not find such differences, it may be that physicians put value on treating patients according to the same criteria, which may be an intrinsic value to physicians [30]. Such notions of fairness would constrain the profit-maximizing choices of physicians, so that we would not find differences in technology use across health insurance schemes. If the opposite is the case, we would conclude that the differences in financial incentives and the cost-control measures of the German public insurance scheme are factors that lead to different decision-making structures of physicians [31].
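The cost-control mechanism just described can be illustrated with a toy reimbursement function: up to the quarterly budget, spending is fully reimbursed, and spending beyond the cap is only partially reimbursed. The budget level and the partial reimbursement rate below are invented parameters for illustration, not the actual values used by the sickness funds.

```python
# Toy illustration of the public-scheme incentive described above: beyond
# the quarterly budget, additional spending is only partially reimbursed.
# The cap value and the partial rate are made-up parameters.
def public_reimbursement(spending: float, budget: float = 100_000.0,
                         partial_rate: float = 0.5) -> float:
    within = min(spending, budget)
    beyond = max(spending - budget, 0.0)
    return within + partial_rate * beyond

# Prescribing expensive medicines exhausts the budget sooner, so the
# marginal reimbursement drops once the cap is reached:
print(public_reimbursement(90_000.0))   # fully reimbursed: 90,000
print(public_reimbursement(120_000.0))  # 100,000 + 0.5 * 20,000 = 110,000
```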
Practice composition and treatment choices

Practice composition captures the proportion of patients of different health insurance schemes treated in a practice, which indicates how much a physician is exposed to a certain regulatory framework. Practice composition may also reveal how much a physician may need to customize treatment for a given patient population. In settings with mutually exclusive insurance schemes, physicians typically see patients from a variety of arrangements under public or private insurance schemes that vary in payments and patient management. We test whether a higher share of private patients in the practice increases the publicly insured patient's likelihood of receiving the less complex treatment, in a setting where the vast majority of patients is insured in the public insurance scheme.

Previous literature has almost exclusively analyzed the effect of practice composition on practice intensity and efficiency in the context of traditional fee-for-service settings compared to managed care in the United States [32-34]. The setting we study does not differ by payment type, but by the management and monitoring of prescription medicine spending at the physician level, and most importantly by delivery arrangements where there is no direct financial gain, loss, or additional cost from prescribing a medicine.

For the German setting, we consider practice composition, i.e. the extent to which a physician relies on either insurance scheme, as an indicator of how costs, differentiated into fixed and variable costs, are allocated to the patients by insurance scheme. Physicians tend to shift their resources to private patients to compensate for the lower expected reimbursement from a larger share of public patients [35,36]. We consider physicians trying to limit variable costs to be less likely to adopt a new technology, and practice composition to be indicative of the strength of the physician's profit-maximizing motive.

An important aspect of practice composition concerns how it differs from the practice composition of physicians practicing in the same region. That way, we can analyze whether physicians stick to a norm-following behavior in which they optimize their decisions at the aggregate level for a representative patient instead of taking decisions individually per patient [37]. If physicians stick to a certain norm, this means that they choose treatments for the median or mean patient in their practice and do not customize treatments. Such a strategy may be sensible to reduce the costs of customization, which have been related to behavioral factors and heuristics [31]. If the share of privately insured patients is relatively high compared to the regional average, physicians may adapt their treatment choices such that patients with public insurance also become more likely to receive the treatment that represents the standard for the privately insured. We assume that physicians in practices with a higher share of privately insured patients than the regional average have a different practice style than physicians with a relatively low share of privately insured patients, and that, due to the costs of customization, the treatment of the publicly insured will resemble the treatment of the privately insured, such that more novel therapies are also prescribed for the publicly insured.
Oral antidiabetic care of T2D patients

We consider patients suffering from T2D who are treated with a novel combination therapy of oral antidiabetics (i.e. dual therapy). In Germany, around 7 million people were diagnosed with diabetes in 2015, with an estimated 140,000 annual deaths in 2018 [38-40]. T2D prevalence is continuously rising, and forecasts predict an increase of 54-77% by 2040 [40]. Global healthcare spending for the treatment of diabetes was estimated at 673 billion US dollars in 2015 [41], indicating a high financial burden. In Germany, expenditures for oral antidiabetics were growing by 7-11% annually, with a total value of 1.1 billion euros in 2018 [42]. Moreover, diabetes is a significant risk factor for several cardiovascular diseases [43].

We concentrate on oral antidiabetic treatments comprising approved active ingredients for managing hyperglycemia, aimed at adjusting a patient's blood glucose level. The standard therapy after diagnosis involves a change in the patient's lifestyle (e.g. weight loss, smoking cessation). If such measures fail to achieve the desired outcomes, and if there is no contraindication, oral antidiabetic therapy is initiated [44]. There are eight prescription medicine classes available: biguanides (most importantly metformin), sulfonylureas, glinides, glitazones, glucagon-like peptide-1 receptor agonists (GLP-1-agonists), sodium-glucose cotransporter 2 inhibitors (SGLT2-inhibitors), dipeptidyl peptidase IV inhibitors (DPP-IV-inhibitors), and alpha-glucosidase inhibitors [45]. The different ingredients all aim to alter blood glucose levels, but work through different mechanisms [46]. The variety of active ingredients is meaningful both for patients with comorbidities or intolerances towards certain active ingredients, and in dual therapy, as the different mechanisms have potentially complementary and additive effects [47]. According to clinical treatment guidelines [44], metformin is the ingredient of first choice. In patients where monotherapy using solely metformin does not achieve the desired treatment outcomes, as well as with the progression of the disease, metformin can be combined with one or more active ingredients of the eight oral antidiabetic classes.

Based on the clinical evidence of its effectiveness in T2D therapy, we assume dual therapy to be more effective than monotherapy [48,49]. Within dual therapy, we assume that a single-pill is superior to a two-pill treatment [48,49]. A commonly used reference point for evaluating the effectiveness of medicines in T2D therapy is the change in blood glycated hemoglobin levels (HbA1c). A randomized clinical trial found significantly lower HbA1c levels in patients receiving dual therapy of metformin and DPP-IV-inhibitors compared to patients with metformin monotherapy [47]. A recent randomized clinical trial by Rosenstock et al. [50] indicates significantly lower HbA1c levels in patients receiving dual therapy of metformin and GLP-1-agonists compared to patients receiving dual therapy of metformin and DPP-IV-inhibitors.
Data sources and study sample

We combine data from two sources. To capture physicians' prescribing decisions, we rely on the CEGEDIM MED-IMED prescriber panel. The dataset contains prescription data of 3,026 office-based physicians in Germany over the years 2011 to 2014. It is a representative sample covering about two percent of all practicing physicians registered in Germany. The panel includes details about the prescription, selected patient characteristics including insurance scheme, and selected physician characteristics. Compared to administrative claims data, the panel allows comparing a physician's treatment behavior across patients from both public and private insurance in the same practice. To classify products by ingredient, we rely on the EphMRA/PBIRG Anatomical Classification for information on active ingredients [45].

Study sample

We narrow our sample to patients who use 'Drugs used in Diabetes' (A10H-A10S according to EphMRA) to be able to assume similar prescription behavior [45]. We focus our analysis on oral antidiabetics, which account for about 2.69% of prescriptions in ambulatory care and belong to the more frequently prescribed medicines. We exclude insulin products, both animal and human, and medical devices used for antidiabetic care.

We included patients of age 18 and older who have received an oral antidiabetic treatment based on dual therapy (i.e. metformin and DPP-IV-inhibitors, glitazones, or sulfonylurea), excluding patients who receive only one active ingredient. No patient changed insurance scheme during the observation period. With respect to physicians, we included general practitioners and internal medicine specialists working in dual practice (i.e. treating both publicly and privately insured patients) and treating a minimum of 60 patients with oral antidiabetics across the whole observation period. Our final analysis sample includes 979,949 prescriptions for 68,362 patients prescribed by 681 physicians.

We calculate the share of private patients in a practice and the average share of private patients per practice at the level of the region the practice is located in. The regional level is divided into 17 physician association ("Kassenärztliche Vereinigungen") regions, represented by the 16 federal states with the state of North Rhine-Westphalia split into two regions. We calculate the relative share of private patients within a practice compared to the average share in the physician association region, defined as the difference between the absolute proportion in a practice and the regional average, with positive values indicating that a practice has more private patients than the regional average. We classified patients' comorbidities using a prescription-based risk score built on the 46 comorbidity classes defined in Pratt et al. [51], corresponding to the codes of the WHO's Anatomical Therapeutic Chemical (ATC) classification.
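As a sketch of the practice-composition variables just described, the following pandas snippet computes each practice's private-patient share and its deviation from the regional (physician association) average; the column names and the toy data are hypothetical stand-ins, not the CEGEDIM panel's actual coding.

```python
# Minimal sketch, assuming hypothetical column names; a positive
# relative_phi_share means more private patients than the regional average.
import pandas as pd

patients = pd.DataFrame({
    "practice_id": [1, 1, 1, 2, 2, 3, 3, 3],
    "region":      ["BY", "BY", "BY", "BY", "BY", "BE", "BE", "BE"],
    "private":     [1, 0, 0, 0, 0, 1, 1, 0],  # 1 = private insurance
})

# Absolute share of private patients per practice
practice = (patients.groupby(["region", "practice_id"])["private"]
            .mean().rename("phi_share").reset_index())

# Regional average of the practice-level shares
practice["regional_avg"] = practice.groupby("region")["phi_share"].transform("mean")

# Relative share: practice share minus regional average
practice["relative_phi_share"] = practice["phi_share"] - practice["regional_avg"]

# Binary indicator used in the interaction model further below
practice["high_rel_share"] = (practice["relative_phi_share"] > 0).astype(int)
print(practice)
```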
Summary statistics

Table 1 shows summary statistics of the study sample stratified by insurance scheme and treatment choice. We focus on initial prescription decisions in T2D dual therapy. Compared to publicly insured patients, privately insured patients have a ten percentage point higher share of receiving the single-pill. Patient characteristics suggest that T2D patients do not differ strongly by insurance scheme. Private patients are more often treated by specialist physicians (42% vs. 33%); this could be due to regional variation in both the share of private patients and the share of specialists. Urban regions have larger proportions of private patients and specialists than more rural regions or regions in the former East German area [21] (Table 1).

Figure 1 shows the density of the mean risk score of the patients included in our sample, stratified by health insurance scheme. There is a large overlap in comorbidity profiles across the distribution of risk scores, with a larger proportion of publicly insured patients having higher risk scores.

Fig. 1. Mean patient comorbidity risk score by health insurance scheme.

Figure 2 shows the regional variation in the share of private patients receiving oral antidiabetics. This share varies substantially, between 2 and 8%, and is lower in the Eastern states of Germany compared to the Western parts. The North-Eastern parts correspond to the former territory of the German Democratic Republic, where the share of privately insured is historically lower. The regional variation suggests that the profit-maximization motive from treating private patients in dual practices could vary across regions [22]. As providing services to private patients is generally more profitable for physicians [28,29], we assume that profit maximization is easier to achieve in areas where the share of private patients is higher.

Figure 3 shows the density of the difference in the share of private patients in the practices relative to the regional average. Positive values indicate that a practice has more private patients than the regional average. The distribution of the regional relative share suggests that the majority of practices have fewer private patients than the regional average, that is, values below zero. The relative share of private patients shows an even distribution over the 17 regions.

Measuring novel antidiabetic use

We capture treatment choices for novel antidiabetic dual therapy as an outcome to determine whether patients have access to the conventional two-pill or the less complex single-pill treatment. We combined all products available as single- and two-pill treatments into one category each, irrespective of their active ingredient and brand. We denote the physician's prescription options as the pair of outcomes for each patient j of the study population. y_j = 0 denotes that patient j receives metformin in combination with one or more additional active ingredients as a two-pill treatment with one ingredient per pill ('two-pill'). y_j = 1 denotes the novel mixture of metformin and an additional ingredient in a single pill ('single-pill'). The single-pill is available in the following combinations (Table 2): (1) metformin and DPP-IV-inhibitors, (2) metformin and sulfonylurea, or (3) metformin and glitazones. The descriptive results in Table 2 indicate differences in the prescription prevalence of the single-pill. Single-pills account for about 35% of the oral antidiabetic prescriptions of private patients, compared to about 25% for public patients.
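The outcome coding just described can be illustrated with a short pandas sketch that flags, for each patient's initial dual-therapy prescription, whether the metformin-plus-second-agent combination came as a fixed-dose single pill; the product flag and column names are hypothetical placeholders for the panel's actual coding.

```python
# Minimal sketch, assuming hypothetical columns: one row per patient's
# initial dual-therapy episode; `fixed_dose_combination` marks products
# that contain metformin plus a second ingredient in one pill.
import pandas as pd

rx = pd.DataFrame({
    "patient_id": [101, 102, 103, 104],
    "phi": [1, 0, 0, 1],                          # 1 = private insurance
    "fixed_dose_combination": [True, False, True, False],
})

# y_j = 1 for the single-pill, y_j = 0 for the two-pill regimen
rx["single_pill"] = rx["fixed_dose_combination"].astype(int)

# Share of single-pill prescriptions by insurance scheme (cf. Table 2)
print(rx.groupby("phi")["single_pill"].mean())
```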
Empirical strategy

We aim to identify the effect of the health insurance scheme on the probability of obtaining a single-pill compared to a two-pill treatment. We would ideally want to analyze whether identical patients with different health insurance schemes would receive the same prescription medicine from the same physician. Since the prescription of a medicine happens in a face-to-face consultation, the risk of bias arises from the possibility that the physician remembers the patient. This bias makes a blinded, randomized experiment, as is done in studies on inequalities in patient waiting times, close to impossible [5,52]. As the second-best possible identification strategy, we account for any potential bias by considering confounders of the effect of the health insurance scheme on the prescription of dual therapy treatments. We estimate fixed effect regression models by physician and region [53], clustering standard errors at the physician level. Specifically, we estimate the following linear probability model (LPM):

Single-Pill_ij = β_0 + β_1 PHI_j + x_ij'γ + α_i + δ_r + ε_ij,   (1)

where 'Single-Pill' denotes the binary outcome variable equal to 1 if the single-pill is prescribed to patient j by physician i. 'PHI' denotes the private health insurance scheme of patient j in comparison to public health insurance. β_1 reflects the degree to which the private health insurance scheme increases the probability of receiving the less complex treatment; x_ij is a vector of control variables, including patient characteristics, unobserved physician heterogeneity, and the timing of the treatment decision. 'Patient Age', 'Patient Sex', and 'Patient Risk Score' reflect patient-related factors at level j. Diabetes prevalence is higher in men compared to women, and in older age groups [38]. The prescription-based 'Patient Risk Score' reflects the comorbidity of the patient. 'Quarter' is a variable that controls for the quarter in which patient j receives her first prescription from physician i. As we consider patients receiving access to antidiabetic treatments across time, we account for any time effects that might arise from early compared to late adopting physicians by including an indicator of the quarter in which the physician first prescribes the single-pill to the individual patient.

To assess the robustness of our estimates of β_1, we estimate separate variants of Eq. (1), most importantly to account for unobserved variation. Depending on the model, α_i is a fixed effect for physician i to account for unobserved heterogeneity in practice style, and δ_r is a fixed effect for region r to account for unobserved heterogeneity in the distribution of patients with public or private insurance at the regional level. We consider the region in which the physician practices to account for unobserved differences in cost-control measures across the 17 physician associations, differences in the composition of the population by health insurance scheme, and differences in diabetes prevalence. Moreover, there are considerable regional differences in diabetes prevalence in Germany, with higher rates (> 13%) in the counties of Eastern Germany compared to urban areas (for example, 7.3% in Hamburg) [38].
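A minimal estimation sketch of Eq. (1) in Python is shown below, using the statsmodels formula interface with physician fixed effects as dummies and standard errors clustered at the physician level; the synthetic data frame and its column names are hypothetical placeholders for the prescriber-panel variables, and the simulated effect size is arbitrary.

```python
# Minimal sketch of Eq. (1): a linear probability model with physician
# fixed effects and physician-clustered standard errors, on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "phi": rng.integers(0, 2, n),            # 1 = private insurance
    "age": rng.integers(40, 80, n),
    "male": rng.integers(0, 2, n),
    "risk_score": rng.normal(2.0, 1.0, n),
    "quarter": rng.integers(1, 17, n),
    "physician_id": rng.integers(1, 41, n),
})
# Simulated outcome with an arbitrary 10-point PHI effect
df["single_pill"] = (
    0.25 + 0.10 * df["phi"] + rng.normal(0, 0.4, n) > 0.5
).astype(int)

fit = smf.ols(
    "single_pill ~ phi + age + male + risk_score"
    " + C(quarter) + C(physician_id)",        # alpha_i as physician dummies
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["physician_id"]})
print(fit.params["phi"])  # estimate of beta_1, the PHI effect
```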
In addition, we estimate Eq. (1) and account for unobserved heterogeneity in the practice style of physician i by including a physician-level fixed effect α_i [54]. Although we do not quantify treatment style along dimensions like aggressiveness or persistence, the physician fixed effect controls for any bias from time-invariant unobserved preferences of the physician to prescribe single-pill compared to two-pill treatments [53, 55]. Finally, we estimate variants of the model that account for practice composition instead of physician fixed effects. The variable 'PHI Share' captures the absolute share of private patients in a practice i (Table 3, Model 4). ε_ij is the error term capturing unobserved environmental characteristics.

To empirically test how practice composition relative to the regional average affects the likelihood of receiving the single-pill, we estimate separate regression models that examine how the relative share of patients with private insurance in a practice moderates the effect of the patient's insurance scheme, with standard errors clustered on the physician level:

Single-Pill_ij = β_0 + β_1 PHI_j + β_2 Relative PHI Share_i + β_3 PHI_j × Relative PHI Share_i + β_4 Patient Age_j + β_5 Patient Sex_j + β_6 Patient Risk Score_j + β_7 Quarter_ij + δ_r + ε_ij (2)

with δ_r as a region fixed effect. Compared to Eq. (1), we include an interaction term of insurance scheme and patient composition relative to the regional average ('Relative PHI Share'). 'PHI' captures patient j's insurance scheme. 'Relative PHI Share' is a binary variable that captures whether the difference between the share of private patients in a practice i and the regional average of region r is larger or smaller than zero. The interaction of both binary variables results in coefficients for four possible scenarios: (1) a patient with private insurance in a practice with a low relative share of private patients, (2) a patient with private insurance in a practice with a high relative share of private patients, (3) a patient with public insurance in a practice with a high relative share of private patients, and (4) a patient with public insurance in a practice with a low relative share of private patients.

To assess whether an effect might be driven by a specific combination of active ingredients, we estimate separate linear regression models for three groups of active ingredients that have a single-pill available. These are metformin plus DPP-IV-inhibitors, sulfonylurea, and glitazones. According to clinical guidelines, DPP-IV-inhibitors and sulfonylurea are equally suitable as a dual therapy if metformin monotherapy does not achieve the set therapeutic objective, yet both have disadvantages [44]. The combination of metformin and sulfonylurea has been associated with increased cardiovascular mortality, while DPP-IV-inhibitors have been associated with pancreatitis and pancreatic tumors [44, 56].
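The interaction specification of Eq. (2) can be sketched the same way; again the data are simulated and the names illustrative, with region fixed effects in place of physician fixed effects.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "single_pill":    rng.integers(0, 2, n),
    "phi":            rng.integers(0, 2, n),   # 1 = private insurance
    "high_rel_share": rng.integers(0, 2, n),   # 1 = above regional average
    "region":         rng.integers(1, 18, n),  # 17 regions
    "physician":      rng.integers(1, 201, n),
})

# phi * high_rel_share expands to both main effects plus their interaction,
# giving the four patient/practice scenarios described in the text.
fit = smf.ols(
    "single_pill ~ phi * high_rel_share + C(region)", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["physician"]})
print(fit.params[["phi", "high_rel_share", "phi:high_rel_share"]])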
Insurance scheme and treatment choice
We analyzed the channel of financial incentives to prescribe novel treatments and find that the insurance scheme of a patient has a significant effect on the likelihood of receiving the less complex single-pill treatment (Table 3). The LPM (Table 3, Model 1) reports the baseline effect of private insurance, excluding all potential confounders. The estimated marginal effect is 0.1267 (p < 0.001). The low R-squared in Model 1 and Model 2 highlights the importance of accounting for physician-specific characteristics. When accounting for patient characteristics like age, sex and comorbidity risk score, for the timing at which the patient received the prescription, and for practice style variation in choosing oral antidiabetics across physicians (Model 3), the effect estimate of the probability of receiving the novel single-pill treatment decreases to 0.1033 (p < 0.001), yet remains positive and significant. This means that patients with private insurance are about 10 percentage points more likely to receive the single-pill treatment than public patients. Adding physician-level fixed effects increases the variance explained (Pseudo R² of 0.206 compared to 0.0718 in Model 2). Yet not accounting for unobserved physician-specific heterogeneity does not strongly bias our estimates of the effect of insurance scheme on the use of single-pill treatments. Accordingly, we cannot reject the hypothesis that patients with private insurance are more likely to obtain access to less complex treatments than patients with public insurance. In Model 3, we find a slight reduction in the standard errors compared to the other model specifications, as we compare the variation in access to the single-pill within one physician's practice rather than between all patients.

Practice composition and treatment choices
The second channel we analyzed was the influence of practice composition in dual practice. We find the practice composition to have a significant influence on the likelihood of receiving the less complex treatment. When we account for the absolute share of private patients in the practice to capture the physician's exposure to private patients (Model 4), the overall insurance effect slightly decreases to 0.0973 (p < 0.001). The estimates that account for absolute practice composition (Model 4) suggest that when the absolute share of private patients is around 10 to 15%, the probability of receiving the single-pill treatment increases by 6.58 percentage points (p < 0.01).
Table 4 shows the possible outcomes for a patient by public and private insurance scheme and by practice composition relative to the regional average. We tested the hypothesis that public patients in practices with relatively more private patients than the regional average have a higher likelihood of receiving the single-pill treatment. The idea is that physicians provide "ready-to-wear" treatments that are suitable for the representative private patient. Public patients in practice settings with a low relative share of private patients represent the reference category. In line with Models 1-4 (Table 3), private patients have a higher probability of receiving the single-pill treatment in both practice settings, with either a low relative share (0.1059, p < 0.001) or a high relative share (0.1241, p < 0.001). The effect for public patients visiting practices with a high relative share of patients with private health insurance is 0.0253, indicating that public patients in a practice setting with a high relative share are more likely to receive the single-pill treatment. Yet this effect is not significant; thus, we cannot conclude that the treatment of public patients resembles the treatment of the privately insured such that more single-pill treatments are prescribed for the publicly insured.

Patients with private insurance are more likely to receive the novel single-pill treatment irrespective of the practice composition. However, the differences in access between privately and publicly insured are larger in practices with a low relative share of privately insured (10.59 percentage points) compared to a high relative share (9.88 percentage points) (Table 4).

Figure 4 presents the predicted probabilities of receiving the single-pill treatment by the relative share of privately insured and by insurance scheme. When the practice composition is equivalent to the regional average share of private patients, the probability of receiving the single-pill treatment is 57.4% for patients with public insurance and 67.8% for private patients. The differences between public patients and patients with private insurance are largest (10.4 percentage points) in practice settings with a relative share around the regional average (at zero). We find that the probability of public patients receiving the single-pill increases monotonically with the relative share of private patients in a practice. Especially in settings where the relative share is very high (values above 20), the predicted probabilities of receiving the single-pill treatment almost align, independent of the respective insurance scheme. On the contrary, if the relative share is very low, for example -10 percentage points, the difference in the probability of receiving the single-pill increases, with 54.4% for public patients compared to 67.1% for private patients. The probability for private patients remains constant at around 67%, independent of the relative share of private patients in a practice. (Abbreviation: PHI, private health insurance. The relative share is the difference between the absolute proportion of private patients in a practice and the regional average share; a high relative share indicates that a practice has more private patients than the regional average, and a low relative share vice versa.)
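The Figure-4-style curves follow mechanically from the fitted LPM: with a continuous relative share, predicted probabilities are linear in the coefficients. The sketch below uses placeholder coefficients chosen only so the baseline matches the reported 57.4% and 67.8% at a relative share of zero; the slopes are invented for illustration and are not the paper's estimates.

import numpy as np

# p = b0 + b1*PHI + b2*rel_share + b3*(PHI * rel_share); placeholder values.
b0, b1, b2, b3 = 0.574, 0.104, 0.003, -0.003
rel_share = np.linspace(-10, 25, 8)            # pp relative to regional mean

p_public = b0 + b2 * rel_share                 # PHI = 0
p_private = b0 + b1 + (b2 + b3) * rel_share    # PHI = 1
for r, pu, pr in zip(rel_share, p_public, p_private):
    print(f"rel_share={r:6.1f}  public={pu:.3f}  private={pr:.3f}")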
Heterogeneity analyses
We assess whether the results are driven by the choice of the second ingredient besides metformin. The subgroup analyses for metformin plus DPP-IV-inhibitors and metformin plus sulfonylurea suggest homogeneous effects compared to our pooled estimates (see Tables A1 and A2 in the appendix). For the case that metformin and a DPP-IV-inhibitor are combined in the single-pill, the effect of the health insurance scheme on the probability of receiving the single-pill is 0.0157 (p < 0.05, Model 3, Table A1). When we consider metformin plus sulfonylurea, the effect estimate is 0.0644 (p < 0.001, Model 3, Table A1). This means that private patients, independent of the second active ingredient in the single-pill, are more likely to receive the single-pill. When we test for the effect of the share of private patients in a practice (Model 5, Table A2), the insurance effect for private patients increases to a 9.06 percentage point (p < 0.001) higher probability of receiving the single-pill when they visit a practice with a high relative share of private patients. We do not report effect estimates for the third group, metformin plus glitazones. P-values of the F-test indicated that none of the independent variables were statistically significant. This subgroup consisted of 2,596 patients and 477 physicians.

Discussion
This study examined two channels through which different health insurance schemes impact treatment decisions in dual practice environments: first, the financial incentives promoting the prescription of innovative treatments, and second, the composition of practices in dual practice scenarios. The findings reveal that, within the same practice, individuals with private insurance enjoy enhanced access to less complex antidiabetic care compared to patients covered by public insurance. The estimated treatment effect of having private insurance indicates an about 10 percentage points higher probability of receiving the single-pill treatment as oral antidiabetic therapy in Germany. This effect does not change much when controlling for confounders including patient characteristics, unobserved regional or physician fixed effects, and the timing of the prescription decision.

We find the practice composition in dual practice settings to have a significant influence on the likelihood of being prescribed the less complex treatment. The beneficial effect of private insurance status on access to the single-pill is even larger in practices with a high relative share of private patients (about 12 percentage points). The differences between publicly and privately insured patients are largest in practices with a relative share lower than or around the regional average of private patients, which is the most common practice setting in Germany. In this case, the probabilities for publicly and privately insured patients to receive the single-pill differ by 10 percentage points.
While previous evidence comparing access to medical treatment by health insurance scheme has emphasized waiting times and utilization measured by the number of physician visits and hospitalizations [3, 5, 16], our results suggest that private patients have better access to novel treatments once they have entered the physician's practice, in a setting where private insurance imposes no cost-control on prescribing medicines. The average 10 percentage point difference that we identify poses an additional lever of access to receiving the best possible treatment, even when a technology has reached substantial uptake levels [57]. The difference that we estimate is considerable, as there are no direct financial incentives that would suggest this inequality should exist. As less complex therapy may generally enhance adherence to medication [58], our findings might explain why the privately insured utilize fewer physician visits [16].

Our results are in line with previous observations that private patients have better access to more novel therapy. Krobot et al. [59] assessed health insurance related barriers in accessing new migraine medicines in Germany, developing a three-dimensional person-time-related hurdle model, albeit without controlling for the practice style of the physician and the regional distribution. In that study, patients with public insurance had a 2.4 times lower hazard of receiving initial migraine therapy compared to their privately insured counterparts. Additionally, we expand the descriptive evidence that patients with private insurance in Germany proportionally receive more innovative as well as more expensive medicines, in particular medicines with proven added value [60].

Although the oral antidiabetic treatments that we study are very similar in terms of their effectiveness in controlling T2D, the single-pill has the potential to improve the adherence behavior of patients, as it entails a less complex regimen. Evidence from retrospective cohort studies investigating secondary adherence suggests that a single-pill shows higher adherence, in particular in patients who receive dual therapy. This applies both to patients switching from monotherapy to dual therapy and to patients switching from a two-pill treatment to a single-pill [8, 10]. The literature finds that more complex medication regimens reduce medication adherence [61] and, in particular for patients with T2D, that a high medication count and a complex regimen are associated with poor glycemic control, with medication adherence being the mediator between the two [62]. Studies have assessed the consequences of non-adherence in different chronic conditions. It was found that T2D patients who are non-adherent are more than twice as likely to be hospitalized [63] and that, despite the increase in prescription costs through better adherence, overall healthcare costs decrease as hospitalization and emergency department usage go down [64]. Even though we do not have ex ante adherence rates of the patients, adherence is multidimensional, such that a patient's insurance status alone would not be sufficient to extract differences [65]. However, we cannot assess long-term health outcomes like mortality due to differential treatments, as we are using data from ambulatory care that does not cover the death of a patient.
The findings of this study are still applicable in 2023, as the German health insurance system has not undergone substantial changes since the end of our observation period in 2014. Metformin is still the first-line choice in T2D treatment, along with a number of combination therapies [66]. Moreover, our empirical approach can be adopted for new treatment options like semaglutide or exenatide, which have received raised attention due to their potential long-term weight loss benefits and the ease of adhering to antidiabetic therapy through once-weekly injections. The progressive nature of T2D therapy often involves step-up regimens combining different active ingredients [67, 68]. More complex treatments demand even closer attention in ensuring equal access, highlighting the applicability of this study's research aim.

We find a small effect of the patient's health insurance scheme on receiving metformin plus a DPP-IV-inhibitor as a single-pill. This might be explained by the lack of added benefit of some medicines with DPP-IV-inhibitors, according to the prescription guidelines for DPP-IV-inhibitors of the Federal Joint Committee [56]. Clinical guidelines state that DPP-IV-inhibitors and sulfonylurea are interchangeable, yet both have stated clinical limitations [44]. The combination of metformin and sulfonylurea has been associated with increased cardiovascular mortality, while DPP-IV-inhibitors have been associated with pancreatitis and pancreatic tumors [44, 56]. Sulfonylurea is usually the second ingredient when metformin monotherapy is insufficient, yet this is not in line with clinical guidelines [69].

Publicly insured patients do not have advantageous access to the single-pill in settings with a practice composition with a high relative share of private patients compared to practices with a low share. This shows that physicians customize their treatments, or choose treatments, for the mean patient within one insurance category. This finding is in line with Glied and Zivin [32], suggesting that physicians vary fixed and variable costs by practice composition. The fixed costs of physicians include investments in equipment or practice capacity. Fixed components do not vary across patients with different insurance schemes and are set by the physician in advance. Such choices can be based on practice composition, as physicians with a high share of private patients will likely make different investment decisions. Variable costs of physicians may include waiting times until an appointment, time spent with the patient, and the scope of services provided or medicines prescribed. In a practice setting with a high share of privately insured, the publicly insured are likely to receive less variable effort to compensate for fixed costs set for a practice composition with a high share of patients with private insurance. Vice versa, privately insured patients in practice settings with relatively few privately insured and lower fixed costs receive more variable effort in compensation [32]. For the United States, Glied and Zivin [32] find that practices with a higher share of managed care patients (e.g., publicly insured) treated fee-for-service (e.g., privately insured) and managed care patients about the same. In contrast to this, when health insurance schemes differ in the management and monitoring of prescription behavior, private patients are more likely to receive the single-pill irrespective of the practice composition. This shows that, despite the practice composition, physicians are optimizing variable costs as well.
The data used in this study do not allow us to completely rule out that patients with different socioeconomic backgrounds beyond age, sex and comorbidity demand the single-pill. Cutler et al. [2] find the influence of patients on physician treatment variation to be small. One can argue that this is the case in the German setting as well. The drug advertising law ("Heilmittelwerbegesetz") prohibits the advertisement of prescription drugs to patients; thus it is unlikely that a layperson knows of pharmaceutical alternatives to their treatment and specifically demands such an alternative [70]. The existing information asymmetry between physician and patient demands that the physician use her information advantage to provide the most suitable and beneficial treatment in the best interest of the patient [71, 72]. Hence, if our measured supply effect were biased by patient demand, it would mean that physicians are not moderating patient demand adequately.

One threat to our empirical identification strategy is potential selection by patients opting into treatment to receive the single-pill instead of the two-pill treatment. This would mean that patients change their insurance scheme from public to private insurance to receive the less complex treatment. Considering the possibilities to switch health insurance scheme, this, however, is an unlikely event in our context. Patients face a compulsory income threshold to opt into private insurance, including age and health assessments to define the individual monthly premium [16]. This leads to patients switching, if at all, at a healthier stage. Second, as T2D is a chronic condition, it is unlikely that patients opt into private insurance once diagnosed, as it would come along with much higher monthly premiums. Therefore, we consider this potential bias to be small. In our sample, there are no T2D patients switching insurance, supporting that patient incentives to switch insurance are low once they receive antidiabetic prescriptions.

Another limitation is that, although oral antidiabetics are equally reimbursed in private and public insurance, prices and corresponding rebates might influence physician choices. The data we analyze only allow us to observe gross prices excluding rebates. Physicians are price sensitive in their prescription behavior and account for costs when information on medicine prices is transparent [73]. In the German context, to write prescriptions, manage medicine budgets and comply with cost-control measures, physicians observe gross prices and cannot infer the negotiated rebates of the public insurance schemes. Additionally, we cannot observe when physicians choose the two-pill treatment to flexibly adjust dosages of ingredients. Clinical guidelines suggest that dual therapy is advised if metformin monotherapy is not achieving the set treatment goals in Germany [44].
Conclusion
This study analyzed the effects of health insurance schemes on medical treatment decisions that determine access to novel therapeutic pathways for T2D patients. We find unequal access to antidiabetic care for patients based on insurance scheme within the same physician practice. Patients who hold private insurance are about 10 percentage points more likely to receive a treatment with a less complex medication regimen compared to patients with public insurance. The inequality in access that we document is largest in settings with a practice composition with a share of private patients around the regional average, and thus in most German ambulatory practices. Our results show that even when there is a predominant health insurance scheme (i.e., public health insurance) that covers 90% of the population, there is unequal access to novel therapies. Our findings contribute to the literature that considers treatment inequality due to indirect financial incentives that shape physician treatment behavior.

Table 1 Descriptive statistics of patients using oral antidiabetics, total and stratified by insurance scheme. Data source: CEGEDIM MEDIMED prescription data 2011-2014. Abbreviation: PHI, private health insurance
Table 2 Oral antidiabetic prescriptions for dual therapy by treatment pathway, stratified by insurance scheme and type of therapy. Two-pill therapy indicates patients receiving metformin and an additional active ingredient in two different pills; patients receiving only metformin are excluded from the sample
Table 3 Linear probability model of the likelihood to receive the single-pill treatment
Table 4 Percentage point increase in receiving the single-pill treatment: interaction of patient insurance scheme and practice composition. * p < 0.05, ** p < 0.01, *** p < 0.001. Results are based on Model 5
Fig. 4 Linear probability to receive the single-pill treatment by health insurance scheme and relative share of private patients in a practice
2024-03-22T13:18:34.329Z
2024-03-21T00:00:00.000
{ "year": 2024, "sha1": "df057978e6ff7399de466f486f6c0fdd8010cd9a", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "506f4072dde2f489e687522cd5508fae7aefcb6a", "s2fieldsofstudy": [ "Medicine", "Economics" ], "extfieldsofstudy": [ "Medicine" ] }
237397532
pes2o/s2orc
v3-fos-license
Study on the Effect of Unilateral Sand Deposition on the Spatial Distribution and Temporal Evolution Pattern of Temperature beneath the Embankment
State Key Laboratory of Frozen Soils Engineering, Northwest Institute of Eco-Environment and Resources, Chinese Academy of Sciences, Lanzhou Gansu 730000, China
University of Chinese Academy of Sciences, Beijing 100049, China
School of Civil Engineering, Lanzhou University of Technology, Lanzhou Gansu 730050, China
College of Civil Engineering and Architecture, Jiaxing University, Jiaxing 314001, China

Introduction
The Qinghai-Tibet Engineering Corridor (QTEC) is the most important link between the Tibet Autonomous Region and inner China; its width is several kilometers at wider sections but only hundreds of meters at narrow sections [1]. Within the corridor, there are several major linear infrastructures including the Qinghai-Tibet Highway, Qinghai-Tibet Railway, Qinghai-Tibet Power Transmission Line, and embankments, particularly in some river valleys and basins that the highway and railway traverse (Figure 1). In permafrost regions, the construction of an embankment will disturb the heat balance of the original ground surface. The strong heat absorption of asphalt pavement will lead to more rapid permafrost degradation beneath the highway [7-9]. In addition, the change of the thermal-mechanical properties of permafrost will cause embankment and pavement damage, including longitudinal cracks and uneven settlement [10-12]. Hence, in order to reduce the impact of human activities, a number of effective engineering structural measures have been considered and used in the construction of highway and railway embankments, including crushed-rock and duct-ventilated embankments. Owing to their good performance, crushed-rock and duct-ventilated embankments have been widely used to ensure the thermal stability of highways and railways in permafrost regions of the QTEC [13-19]. However, in the context of climate warming and the expanding scope of human activities, the wind-sand hazards around embankments in the QTEC have become more and more serious in the last few years [20-23]. The deposition of sand particles around an embankment will not only change the surface boundary but also affect the cooling effect of crushed-rock or duct-ventilated embankments [24-26]. Research studies have shown that, with increasing thickness of sand in the rock layer, the critical temperature difference increases relative to the sand-free case, the Ra number decreases, and the natural convection intensity gradually weakens [27-30]. However, these studies rest on several assumptions about the sand deposition around the embankment. In particular, the distribution of deposited sand particles is generally assumed to be the same on the two slopes of the embankment. The prevailing wind direction within the corridor is west and northwest, and the corridor runs from northeast to southwest [1]. Then, the two slopes of the embankments of both the railway and the highway are generally windward and leeward slopes, respectively. Thus, there should be a considerable difference in the sand deposition on the two slopes. However, this difference was not considered in these previous studies, which undermines the accuracy of evaluations of the thermal impacts of sand particle deposition on embankments installed with air-cooled structures.
In this study, a two-dimensional numerical model of heat transfer for a highway embankment is established. In the numerical model, three different mean annual ground temperatures (MAGTs) of the permafrost in which the embankment is located are considered, as well as unilateral sand particle deposition at the toe of the embankment. Field observed data of soil temperatures around an experimental zone are used to validate the numerical model. Through numerical simulations, the effects of unilateral sand deposition on the spatial distribution and temporal evolution pattern of temperature beneath the embankment are investigated. The results of this study could provide informative references for highways constructed in permafrost zones in which wind-sand hazards frequently occur around the embankment.

Governing Equations. According to experiments, in freeze-thaw soil layers the contribution of heat conduction is far larger than that of heat convection [31]. In this article, by ignoring heat convection, the heat transfer process considering heat conduction and phase change in freeze-thaw soil layers can be described as follows [32-34]:

C*_e ∂T/∂t_h = ∂/∂x (λ*_e ∂T/∂x) + ∂/∂y (λ*_e ∂T/∂y), (1)

where C*_e is the equivalent volume heat capacity and λ*_e is the equivalent thermal conductivity. According to the method of sensible heat capacity, the equivalent volume heat capacity C*_e and the equivalent thermal conductivity λ*_e in freeze-thaw soils can be written as follows [32-34]:

C*_e = C_f for T < T_m − ΔT; (C_f + C_u)/2 + L/(2ΔT) for T_m − ΔT ≤ T ≤ T_m + ΔT; C_u for T > T_m + ΔT, (2)

λ*_e = λ_f for T < T_m − ΔT; λ_f + (λ_u − λ_f)(T − T_m + ΔT)/(2ΔT) for T_m − ΔT ≤ T ≤ T_m + ΔT; λ_u for T > T_m + ΔT, (3)

where T_m ± ΔT is the temperature range of phase change; C_u and λ_u are the volume heat capacity and thermal conductivity of unfrozen soil; C_f and λ_f are the volume heat capacity and thermal conductivity of frozen soil; L is the latent heat of phase change per unit volume.

Physical Model. A physical model of a highway without sand particle deposition near the toe of the embankment (NSE) is shown in Figure 2(a), and a highway with sand particle deposition near the toe of the embankment (SE) in Figure 2(b). Beneath the natural ground surface, part II (0∼3 m) is a gravel and clayey layer, part III (3∼8 m) is silty clay, and part IV (8∼30 m) is weathered mudstone. The height of the embankment is 4 m and the width of its paving is 10 m. The gradient of the slope is 1:1.5. The widths of the sand deposition are set to 7.1, 10, and 16.7 m. The thicknesses of the sand particle deposition are set to 0.3, 0.5, and 0.7 m. The computational domain extends 30 m from the outside slope toe of the embankment in the horizontal direction and 30 m beneath the natural surface in the vertical direction. The thermal parameters of the soil layers are given in Table 1 [33].

Boundary. According to the IPCC report, the air temperature on the Qinghai-Tibetan Plateau (QTP) will warm by 2.6°C in the next 50 years because of climate change [35]. Based on the adhered layer theory [12, 36], the thermal boundary conditions of the computational domain are expressed as follows. The temperature at the natural surfaces of the NSE model (AB and EF) and the SE model (AM and EF) is prescribed as a sinusoidal function of time t_h around the mean annual air temperature, superimposed with the linear warming trend of 2.6°C over 50 years (6), where t_h is the time and T_a is the mean annual air temperature at which the embankment is located, determined as −3.0, −3.5, and −4.0°C, respectively. A geothermal heat flux of 0.03 W/m² is applied to the bottom boundary (IJ) in both models. The lateral boundaries (ALKI and FGHI) are assumed to be adiabatic. With the governing equations and boundary conditions above, the problem is solved numerically using the commercial software Fluent 14.0.
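A minimal one-dimensional sketch of how the sensible heat capacity formulation in Eqs. (1)-(3) can be stepped explicitly in time; the material values are illustrative placeholders, not the Table 1 parameters, and the boundary is a fixed warm surface rather than the periodic condition (6).

import numpy as np

# Phase change smeared over T_m ± ΔT (sensible heat capacity method).
Tm, dT = 0.0, 0.5                      # °C
L = 6.0e7                              # latent heat per unit volume, J/m^3
Cf, Cu = 1.8e6, 2.4e6                  # frozen/unfrozen capacity, J/(m^3*K)
lf, lu = 2.0, 1.5                      # frozen/unfrozen conductivity, W/(m*K)

def C_eff(T):
    return np.where(T < Tm - dT, Cf,
           np.where(T > Tm + dT, Cu, (Cf + Cu) / 2 + L / (2 * dT)))

def k_eff(T):
    return np.where(T < Tm - dT, lf,
           np.where(T > Tm + dT, lu,
                    lf + (lu - lf) * (T - (Tm - dT)) / (2 * dT)))

# Explicit finite differences on a 30 m soil column, hourly steps for 1 year;
# the top node is held warm and the bottom node is held at the initial MAGT.
nz, dz, dt = 300, 0.1, 3600.0
T = np.full(nz, -1.0)                  # column initially at MAGT = -1.0 °C
for _ in range(24 * 365):
    T[0] = 2.0                         # fixed warm surface, for illustration
    q = k_eff(0.5 * (T[1:] + T[:-1])) * np.diff(T) / dz   # heat flux
    T[1:-1] += dt * np.diff(q) / (dz * C_eff(T[1:-1]))    # energy balance
print(f"thaw depth after one year ~ {dz * np.argmax(T < 0):.1f} m")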
The temperature boundary of the natural ground surface without consideration of climate warming is used to calculate the initial temperature fields beneath the embankment (parts II, III, and IV). The obtained stable temperature fields on July 15 are taken as the initial temperature condition of these parts, as shown in Figure 3.

Model Validation. Simulated soil temperatures beneath the sand deposition surface are compared with the field measured data on April 15 and July 15 [37], as shown in Figure 4. It can be seen that the agreement between the field measured data and the numerically simulated results is good. However, there are some slight discrepancies between the simulated results and the field measured data, especially on July 15. The simulated soil temperatures are lower than the measured data within the active layer. The discrepancies may be attributed to the simplifications of the boundary and soil strata in the computational model. Overall, the comparison shows that the computational model and the parameters can be used for simulating the spatial distribution and temporal evolution pattern of temperature beneath the embankment in permafrost zones.

Variation of Temperature beneath the Embankment without Sand Deposition. For the NSE, without consideration of the sunny-shady slope effect, Figure 5 shows the soil temperature distribution beneath the NSE, located in permafrost with different MAGTs, on October 15 in the 5th, 20th, and 50th year after the embankment construction. The soil temperature distribution is given from the centerline of the embankment to 20 m away from the slope toe (Parts 1-3 in Figure 2(a)). In the 5th year after the embankment construction (Figure 5(a)), the distribution pattern of the ground temperature field under the NSE in the different MAGT permafrost zones is basically the same. The permafrost table beneath the centerline of the embankment (CE) is higher than that beneath the natural surface (NS). For the permafrost zones with MAGTs of −0.5, −1.0, and −1.5°C, the rise of the permafrost table is 0.3, 1.6, and 2.0 m, respectively. In addition, due to the increase in the thermal area of the asphalt pavement of the embankment and the structure, the internal heat absorption of the embankment will increase sharply. The trend of the geothermal lines differs among the permafrost zones. In the permafrost zone with the MAGT of −0.5°C, the depth of the −0.4°C geothermal line beneath the NS is the same as that beneath the CE. In contrast, in the permafrost zones with the two MAGTs of −1.0 and −1.5°C, the depth of the −0.4°C geothermal line beneath the NS is lower than that beneath the CE (Figure 5(a)). This reveals that the thermal disturbance to permafrost zones with different MAGTs differs in the early stage of embankment construction. With increasing operational time, the permafrost table declines and the permafrost warms beneath both the NS and the embankment, especially beneath the CE. The internal heat absorption of the embankment will lead to a significant downward trend of the geothermal line beneath the CE. In the 20th year after construction of the embankment, the permafrost table beneath the NS in the permafrost zones with MAGTs of −0.5, −1.0, and −1.5°C is 2.7, 2.2, and 2.0 m, respectively. Meanwhile, the permafrost table beneath the CE in the permafrost zones with MAGTs of −0.5, −1.0, and −1.5°C is 3.8, 2.6, and 1.2 m, respectively (Figure 5(b)).
In the 50th year after the construction of the embankment, due to the "heat gathering" effect of the asphalt pavement and the structure of the embankment, the permafrost table beneath the CE in the permafrost zones with the three different MAGTs is lower than that beneath the NS. The difference in the permafrost table is 5.5, 2.1, and 1.7 m, respectively (Figure 5(c)). Hence, considering the warming of the permafrost beneath the NS and the CE in the context of climate warming and human engineering activities, the change rate of the permafrost table can be used to evaluate the degradation of the permafrost. The variation of the permafrost table beneath the CE and the NS is listed in Table 2. For the three different MAGT permafrost zones, it can be seen that the higher the MAGT, the greater the degradation rate of the permafrost beneath the CE and the NS. In addition, during the operation of the embankment, the degradation of permafrost beneath the CE is greater than that beneath the NS, which shows the thermal influence of the embankment.

Variation of Temperature beneath the Embankment with Unilateral Sand Deposition. As an obstacle, the embankment will change the initial flow field, including the wind flow and wind-sand flow. The redistribution of the wind-sand flow around the embankment will influence the transport capacity of the flow field, resulting in sand particle deposition around the embankment. Depending on the ambient wind speed, the sand particle deposition can differ considerably between the two sides of the embankment, which may affect the temperature beneath the embankment. Hence, the influence of unilateral sand particle deposition (USPD) at the toe of the 4 m high embankment is investigated in this part. The width and thickness of the sand layer are 10 m and 0.5 m, respectively. The variations of the temperature beneath the embankment over the simulation period of 50 years for the three initial MAGTs of −0.5, −1.0, and −1.5°C are shown in Figures 6-8. In the 5th year after the sand particle deposition at the toe of the embankment, the USPD induces an asymmetrical distribution of the temperature beneath the embankment. For the three different MAGTs, the lower the MAGT of the permafrost zone, the more significant the asymmetry along the depth direction. From Figure 6, it can be seen that, due to the USPD at the toe of the embankment, the location of the maximum depth of zero annual amplitude of ground temperature is offset from the CE. The offsets are 5, 7.5, and 13.0 m for the three different MAGTs, respectively (Figure 6). Moreover, the permafrost table beneath the centerline of the sand particle deposition (CSD) has moved 1.0, 1.3, and 1.4 m above that beneath the NS for the three different MAGTs, respectively. This reveals that the sand particle deposition can delay the degradation of the permafrost in the early stage of deposition. With increasing deposition time at the toe of the embankment, the influence of the USPD on the temperature beneath the embankment differs among the three MAGT permafrost zones. In the 20th year after the sand particle deposition at the toe of the embankment, in the permafrost zone with the MAGT of −0.5°C, the temperature field beneath the embankment is symmetrically distributed, with the maximum depth of zero annual amplitude of ground temperature located at the CE. The influence of the USPD on the temperature beneath the embankment gradually weakens.
In contrast, for the permafrost zones with the lower MAGTs of −1.0 and −1.5°C, with increasing time of USPD, the sensitivity of the permafrost to the sand particle deposition gradually weakens and the protective effect of the sand particle deposition on the permafrost gradually disappears (Figure 8). In addition, in the 50th year after the sand particle deposition at the toe of the embankment, the influence of the USPD at the toe of the embankment can be ignored for the three different MAGT permafrost zones. The distribution of the temperature beneath the embankment is symmetrical. However, with the sand particle deposition lasting for 50 years, the permafrost table beneath the CSD is lower than that beneath the NS in the permafrost zone with the MAGT of −0.5°C. Meanwhile, the permafrost table beneath the centerline of the sand particle deposition is basically the same as that beneath the NS for the two MAGTs of −1.0 and −1.5°C, respectively (Figure 9). Thus, by comparing the variation of the permafrost beneath the sand particle deposition and the NS for the three different MAGTs, it can be found that, with increasing time of sand particle deposition at the toe of the embankment, the asymmetrical distribution of the temperature beneath the embankment gradually weakens. What is more, the sand particle deposition can mitigate the degradation of the permafrost beneath it, especially for the permafrost zones with the MAGTs of −1.0 and −1.5°C. Within 50 years of the sand particle deposition, the permafrost table beneath the CSD is higher than that beneath the NS (Table 3).

Effect of Unilateral Sand Particle Deposition on Temperature at Different Locations around the Embankment. The variations of the temperature beneath the CE and the toe of the embankment at which the sand particles are deposited (TES) are shown in Figures 9 and 10. It can be seen that, with increasing time, the temperature beneath the different surface boundaries in the three different MAGT permafrost zones varies significantly in the 5th, 20th, and 50th years. Comparing the permafrost table beneath the CE for the different MAGTs and times, it can be found that the USPD at the toe of the embankment has little influence on the variation of the permafrost table beneath the CE (Table 4). However, for the toe of the embankment (Table 5), 5 years after the sand particle deposition, the permafrost table beneath the TES for the three different MAGTs is 1.52, 1.04, and 0.64 m, which is 0.67, 0.80, and 0.91 m higher than that beneath the toe of the embankment without sand deposition (TE), respectively. With the sand particle deposition time increasing to 20 years, the permafrost table beneath the TES is 0.29, 0.68, and 0.81 m higher than that beneath the TE, respectively. Moreover, after 50 years of the sand particle deposition, the corresponding values are −0.21, 0.00, and −0.15 m, respectively. Hence, from the comparison between the permafrost tables beneath the TES and the TE, it can be found that, for the three different MAGT permafrost zones, the USPD at the toe of the embankment has a significant influence on the permafrost table beneath the toe of the embankment. From the 5th to the 20th year of sand particle deposition, the deposition can raise the permafrost table beneath the toe of the embankment, especially in the permafrost zone with the MAGT of −1.5°C.
However, with the time of sand particle deposition increasing further, the deposition may induce the temperature of the permafrost to increase and accelerate its degradation. Hence, for permafrost zones with different MAGTs, the adoption of sand control measures is beneficial for protecting the permafrost, taking into account the duration of sand particle deposition around the embankment.

Effect of Different Forms of Sand Particle Deposition on the Temperature of Permafrost. Based on the study in the previous section, it can be concluded that, within 20 years of deposition, the sand particles have a significant protective effect on the permafrost beneath them. Hence, in order to study the thermal effects of different forms of sand particle deposition (thickness and width of the sand layer) on the permafrost, under the condition of an MAGT of −1.0°C, the thermal effects of sand particle deposition on the permafrost ground temperature for different thicknesses of the sand layer are given in Figures 11 and 12. In Figure 11, with the thickness of the sand layer being 0.3 m, the isotherms beneath the toe of the embankment with sand particle deposition vary significantly, inducing an asymmetrical distribution of the geothermal field beneath the embankment. The depths of the −1.0°C isotherm beneath the NS, 5 m away from the two toes of the embankment, are compared. The depth of the −1.0°C isotherm beneath the toe without sand particle deposition has moved 6.5, 4.0, 3.3, and 1.7 m upward compared with the toe with the 0.3 m thick sand layer after sand particle deposition of 5, 10, 15, and 20 years, respectively. In addition, in Figure 12, with the thickness of the sand layer being 0.7 m, it can be found that the distribution of the ground temperature field is basically the same as that for the 0.3 m condition. As for the −1.0°C isotherm, its depth beneath the toe without sand particle deposition has moved 4.8, 2.8, 1.4, and 1.3 m upward compared with the toe with the 0.7 m thick sand layer after sand particle deposition of 5, 10, 15, and 20 years, respectively. This means that, with increasing time of sand particle deposition, the "wide and thin" type of sand particle deposition has a greater influence on the depth of the underlying ground temperature field than the "narrow and thick" type. To investigate permafrost warming beneath sand layers of different thicknesses in the context of climate warming, the variations of the permafrost table beneath the CE, TE, TES, and CSD within 20 years of sand particle deposition are shown in Tables 6 and 7. It can be found that the two different forms of sand particle deposition have little effect on the variation of the permafrost table beneath the CE and the TE. However, there is a significant effect on the permafrost table beneath the CSD and the TES. In Table 6, with the thickness of the sand layer being 0.3 m, the degradation rate of the permafrost beneath the CSD, TES, and TE is 0.03, 0.05, and 0.04 m·a⁻¹, respectively. Additionally, in Table 7, with the thickness of the sand layer being 0.7 m, the degradation rate of the permafrost beneath the CSD, TES, and TE is 0.03, 0.04, and 0.04 m·a⁻¹, respectively. This means that, within 20 years of sand particle deposition, no matter which form the sand layer takes, the deposition of the sand particles can weaken the degradation rate of the permafrost.
However, for the TES, the thicker the sand layer, the smaller the degradation rate of the permafrost. For an embankment in a permafrost zone, the main distress is thaw collapse, which is closely related to the variation of the permafrost table. After 5 years of the two forms of sand particle deposition ("wide and thin" and "narrow and thick"), the difference in the permafrost table between the two toes of the embankment is 0.49 and 0.88 m, respectively (Tables 6 and 7). However, the degradation rate of the permafrost table with the 0.3 m thick sand layer is greater than that with the 0.7 m thick sand layer. With increasing time of sand particle deposition and in the context of climate warming, the difference in the permafrost table between the two toes of the embankment will induce significant uneven settlement, especially for an embankment with the "narrow and thick" form of sand particle deposition at its toe. Hence, in order to reduce its impact on the long-term thermal condition beneath the embankment, it is necessary to clear thick sand deposits at the toe of the embankment.

Conclusion
In permafrost areas, engineering activities have a considerable influence on the project itself and on the underlying permafrost. As an engineering structure, the construction of a road project disturbs the originally relatively stable wind-sand flow along the route, leading to the redistribution of the flow field around the embankment and the emergence of sand particle deposition. The deposition and coverage of sand particles around the embankment can change the thermal conditions of the permafrost embankment. In this article, a mathematical model of heat transfer for freeze-thaw soil is constructed to investigate the long-term thermal effects of sand particle deposition at the toe of the embankment in permafrost with different MAGTs. Based on the above numerical analyses and comparisons, the following conclusions can be drawn:

(1) The thermal disturbance of the construction of the embankment in permafrost zones with different MAGTs varies significantly. With time increasing, the degradation rate of the permafrost largely differs in different parts around the embankment. After 50 years of the construction of the embankment, the permafrost table beneath the CE in the permafrost zones with the three different MAGTs is 11.5, 6.5, and 5.0 m. In contrast, the value beneath the NS is 6.0, 6.5, and 5.0 m. Therefore, the construction of the highway embankment will increase the heat absorption within the embankment and accelerate permafrost degradation.

(2) The USPD induces a significantly asymmetrical distribution of the temperature beneath the embankment, especially for embankments in permafrost with different MAGTs. The lower the MAGT of the permafrost, the more significant the asymmetry of the temperature distribution. With increasing time of sand particle deposition, the asymmetry of the ground temperature field in the three different MAGT zones weakens gradually. Moreover, the sand particle deposition can mitigate the degradation of the permafrost beneath it, especially for the permafrost zones with the MAGTs of −1.0 and −1.5°C. Within 50 years of the sand particle deposition, the permafrost table beneath the CSD is higher than that beneath the NS.
(3) From the variation of the temperature distribution at different parts of the embankment, the influence of the USPD on the permafrost table beneath the CE can be ignored. In contrast, for the TES, with increasing time of sand particle deposition, the degradation rate of the permafrost table at the TES is greater than that at the TE, especially for permafrost with the MAGT of −0.5°C. After 50 years of sand particle deposition at the toe of the embankment, the permafrost table beneath the TE has moved 0.21 m upward compared with that beneath the TES.

(4) Under different sand thickness and width conditions, the effect of the "narrow and thick" form of sand particle deposition on the temperature field beneath the embankment is greater than that of the "wide and thin" form. Hence, in order to reduce its impact on the long-term thermal condition beneath the embankment, it is necessary to clear thick sand deposits at the toe of the embankment.

Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest
The authors declare that they have no conflicts of interest related to this manuscript.
2021-09-01T15:13:12.053Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "9511b0e8567d20a1ffe83137c847b5addc856e58", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/amse/2021/5403567.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "ee6a1d9d1417b9de47f1c1f65b74631829be81a9", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [] }
2287587
pes2o/s2orc
v3-fos-license
Confounding by linkage disequilibrium

Linkage disequilibrium (LD) and confounding are two widely discussed concepts in Genetics and in Epidemiology, yet their relationship has received only intuitive or no consideration. Taken in the narrow sense, LD refers to the nonrandom association of alleles at two or more linked genetic loci. The degree to which these alleles are associated is the basis behind genetic association studies, whereby the genetic variation across target chromosomal intervals or over the entire genome is captured by a set of representative genetic markers (tagging markers). The denomination of 'linkage' is somewhat misleading because LD is a special case of the more general gametic phase disequilibrium (GPD), 1 which also occurs between variants at mitochondrial and nuclear loci in the form of cytonuclear disequilibrium. 2 Confounding occurs when an extraneous factor, or a set of factors, can at least partially explain an apparent association or a lack of an apparent association between a risk factor and the outcome. In the former case, the confounding variable, or confounder, causes the association to appear, whereas in the second case the confounder masks a real association. 3 The classical example is the apparent increase of risk for heart disease with alcohol consumption, an association that might in fact be due to cigarette smoking, a covariate highly correlated with alcohol consumption. 4 With the flood of genetic association data published over the last few years, and the many others to come, it is the expectation that an increased number of the reported associations will be false findings or considered as such. As a false finding is the common explanation for unexpected (irrelevant) but statistically significant associations, in most cases these findings are not followed up. Nevertheless, associations appealing to common sense (association between marital status and cutaneous lymphoma) or as intriguing as they may appear (a validated association between height and uterine fibroids) have been reported; these associations could be explained by mere LD, more precisely, they might result from confounding by LD or GPD between loci influencing height and uterine fibroids.
CONFOUNDING BY LD
To illustrate the notion of confounding by LD, let us consider the simple scenario of a Mendelian disease with complete or quasicomplete penetrance and a typed marker at gene locus 1 in LD with an untyped causal gene variant at gene locus 2. Let us further assume that locus 1 affects a rare Mendelian form of obesity and locus 2 affects cancer, but that neither gene function is known. An epidemiological study that has collected measurements of potential confounders such as body mass index (BMI) may find this trait to be an important predictor of cancer in the studied population, whereas the positive association may actually be explained by LD between the typed marker and the cancer-causing allele. Thus, LD can be a source of confounding, as BMI predicts the cancer outcome in the absence of an apparent causal relationship. Fortunately, most of the genetic determinants of the common form of obesity identified and replicated so far explain only small fractions of the risk, and therefore the impact of confounding by LD on association studies of common conditions may be limited but not negligible. By contrast, if the linked marker tags a gene with a sizeable effect, for example, a major gene, then confounding by LD can significantly affect the results of association studies, be they genetic or not. In our example above, if BMI is controlled for (that is, adjusted, stratified or restricted), the genetic effect of locus 2 on the cancer outcome cannot be accurately estimated, and the observed association would typically be biased toward the null depending on the variance explained by locus 1 and the degree of LD between the two loci. In this case, the fitted model for the test of association may become overspecified, more precisely overadjusted, because of adjustment for a covariate (BMI) influenced by a genetic locus (locus 1) in LD with the causal locus (locus 2). Most importantly, the covariate need not be on the causal pathway; bias toward the null can still be observed if gene locus 1 contributes significantly to the variance of the obesity trait in the study sample. Furthermore, because of the population-specific pattern of LD, confounding by LD is expected to vary across populations and thus could explain failures to replicate genetic association findings in comparable studies (those that controlled for the same confounders) in different populations.
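A toy simulation of the scenario above, under arbitrary parameter values: locus 1 raises BMI, a correlated (linked) locus 2 raises disease risk, and BMI ends up "predicting" disease although it has no causal effect on it.

import numpy as np

rng = np.random.default_rng(42)
n, r = 100_000, 0.8                    # sample size; locus-locus correlation

# Correlated latent scores stand in for genotypes at two linked loci (LD).
z = rng.multivariate_normal([0.0, 0.0], [[1, r], [r, 1]], size=n)
g1 = (z[:, 0] > 0).astype(int)         # risk genotype at locus 1 -> obesity
g2 = (z[:, 1] > 0).astype(int)         # risk genotype at locus 2 -> cancer

bmi = 25 + 4 * g1 + rng.normal(0, 2, n)            # locus 1 shifts BMI
p = 1 / (1 + np.exp(-(-3 + 1.5 * g2)))             # only locus 2 is causal
disease = rng.binomial(1, p)

# BMI is associated with disease purely through LD between the two loci.
print("P(disease | BMI > 27) =", round(disease[bmi > 27].mean(), 3))
print("P(disease | BMI <= 27) =", round(disease[bmi <= 27].mean(), 3))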
UTERINE FIBROIDS AND OBESITY
One illustration of this phenomenon is the complex association between obesity and uterine leiomyomas (UL, also referred to as fibroids). Uterine fibroids are the most common tumors in women of reproductive age (about 75% of women will develop this condition by the time they reach menopause and 25% will have symptomatic fibroids). Cumulative exposure to estrogen is believed to be a major etiological factor 5 and factors that may influence the hormonal milieu, such as obesity, are believed to be associated with risk. 6 However, the clearly established risk factors are age (increasing risk with increasing premenopausal age), menopause (risk decreases with menopause) and African-American ethnicity (higher risk compared with that of non-Hispanic Whites). Epidemiological studies have shown mixed results with respect to the association between BMI and UL. Some studies reported positive or no associations, whereas other more adequately designed or larger studies reported an inverse J-shaped association, 7,8 with the association reaching a peak in the overweight category. The independent replication of this atypical association of BMI with UL in two cohort studies of UL 7,8 ruled out possible detection bias. In the absence of a dose effect, plausible detection bias and homogeneity of the association across populations, a possible explanation for this atypical association is confounding by LD, that is, the presence of a major gene for obesity in the vicinity of a UL locus. In a recent study, we evaluated the role of chromosome 1q43 in the development of UL and identified two closely linked loci differentially influencing the risk of UL in European and African-American women. 9 The peak of association with UL overlapped a 100 kb-long genomic region separating two head-to-tail genes encoding RGS7 (regulator of G-protein signaling 7) and FH (tricarboxylic acid cycle fumarate hydratase enzyme; Figure 1). Mutations in FH have previously been associated with two rare syndromic forms of UL, hereditary leiomyomatosis and renal cell cancer and multiple cutaneous and uterine leiomyomatosis. 10,11 The current paradigm holds FH to be a tumor suppressor gene 10-12 but several observations argue against it, and models for an alternative 1q43 fibroid gene linked to FH and/or a pleiotropic FH have been postulated. 9 In an early study in the Quebec Family Study, we mapped a quantitative trait locus for body fat to a chromosomal interval overlapping the FH locus. 13 In the National Institute of Environmental Health Sciences Uterine Fibroid Study, we observed several signals for association with BMI, with the most significant ones peaking in the RGS7-FH genomic interval in European Americans and in RGS7 and PLD5 (phospholipase D member 5) in African Americans (unpublished observation). However, it is still not clear which of FH, RGS7, PLD5 or another linked gene is the actual obesity gene, because the association pattern across the FH-linked region varied by UL affection status and race stratum. The emerging model is not in disagreement with our postulate of linked obesity and fibroid genes. Interestingly, the variance explained by the candidate 1q43 obesity gene is 20-30 times larger (r² = 2-3%) than those reported for typical candidate obesity genes identified in meta-analyses of genome-wide association studies. 14 Nevertheless, the role of FH or the linked gene in non-syndromic UL is yet to be proven.
As a proof-of-concept study for the phenomenon of confounding by LD, we analyzed the LD pattern among Chr.1q43 single nucleotide polymorphisms (SNPs) associated with risk of UL and/or obesity in the European group of the National Institute of Environmental Health Sciences Uterine Fibroid Study. Table 1 shows the list of 1q43 SNPs of interest, their relative positions and the significance of their association with the UL and BMI outcomes. To assess the independent effect of SNPs on each of the UL and BMI outcomes, we evaluated the association with UL in models with and without adjustment for BMI, and the association with BMI in models adjusted or not for UL affection status. Confounding by LD is suspected if the LD pattern between SNPs associated with a given trait and SNPs associated with another trait differs between cases and controls. In our example, vanishing associations with UL at given 'UL' SNPs after adjustment for BMI would be an indication of confounding due to LD with 'obesity' SNPs. A direct demonstration of this phenomenon is suggested by the data in Figures 2a and b, which show a differential LD pattern at the tested 'obesity' and 'UL' SNPs in UL cases and controls, respectively. As can be seen, SNP375 (rs4391653) in RGS7 and SNP1320 (rs7531009) in PLD5, the two SNPs that remained significantly associated with BMI after controlling for UL affection status, show different levels of LD with the 'UL' SNPs in cases and controls. Specifically, the SNPs at which the association was lost after adjustment for BMI (RGS7 SNPs 416-418 and RGS7-FH intergenic SNPs 644 and 651) exhibit opposite levels of LD with the two 'obesity' SNPs, 375 and 1320, in the UL cases compared with the controls.

Although the current data support the presence of colocalized susceptibility loci for obesity 13,15,16 and reproduction-related traits, [9][10][11]17,18 it remains to be seen whether the present proof-of-concept study will be strengthened or weakened following gene identification and mutational analyses. In our example, the use of tag SNPs as proxies for the actual causal variants, and of BMI as a proxy for body fat, to assess genetic confounding by LD may mask the true impact of this phenomenon. Consequently, confirmation and validation of this hypothesis in a genomic context with known and validated genotype-phenotype relationships should be an important undertaking. In a meta-analysis study of age at natural menopause in populations of European descent, EXO1, a DNA repair gene that maps close to FH, was highlighted as one of 13 loci influencing this outcome. 18 Mapping of age at natural menopause, one of the established risk factors for UL, to a region implicated in UL and obesity makes the potential for confounding by LD more plausible. Thus, correlated traits may provide clues to the location of disease genes but can also confound the association. Actually, the UL example is not a simple illustration of genetic confounding by LD, because UL is a hormonally dependent cancer believed to be influenced by steroid bioavailability, the level of which is in turn influenced by body fat through downregulation of circulating sex hormone-binding globulin. 19 Coincidentally, linkage for the circulating level of sex hormone-binding globulin in the HERITAGE study peaked at D1S321, a marker within 150-200 kb of FH. 20
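For readers who wish to try the same kind of check on their own data, the sketch below tabulates pairwise LD separately in cases and controls. It is a simplified reading of the procedure described above, not the study's actual pipeline: it assumes unphased genotypes coded 0/1/2 in a pandas DataFrame, uses the squared Pearson correlation of allele counts (composite LD) in place of haplotype-based r², and the 'UL' SNP identifiers in the usage comment are placeholders.

import numpy as np
import pandas as pd

def ld_r2(geno: pd.DataFrame, snp_a: str, snp_b: str) -> float:
    """Squared Pearson correlation of 0/1/2 allele counts (composite LD)."""
    sub = geno[[snp_a, snp_b]].dropna()
    return np.corrcoef(sub[snp_a], sub[snp_b])[0, 1] ** 2

def ld_by_status(geno, status, obesity_snps, ul_snps):
    """Tabulate r2 between 'obesity' and 'UL' SNPs in cases vs controls."""
    rows = [(grp, a, b, ld_r2(geno[mask], a, b))
            for grp, mask in [("cases", status == 1), ("controls", status == 0)]
            for a in obesity_snps
            for b in ul_snps]
    return pd.DataFrame(rows, columns=["group", "obesity_snp", "ul_snp", "r2"])

# Hypothetical usage, with rs4391653 (SNP375) and rs7531009 (SNP1320) as the
# 'obesity' SNPs named above and placeholder identifiers for the 'UL' SNPs:
# table = ld_by_status(geno, ul_status, ["rs4391653", "rs7531009"],
#                      ["ul_snp_1", "ul_snp_2"])
# print(table.pivot_table(index=["obesity_snp", "ul_snp"],
#                         columns="group", values="r2"))

Opposite case/control r² levels in such a table are the signature of the confounding pattern shown in Figures 2a and b.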
Thus, further complications arise, as the linked candidate obesity gene may affect the association with fibroids in several ways: through the causal pathway, by decreasing the serum level of sex hormone-binding globulin with a consequent increase in the bioavailability of free steroids; through confounding by LD; or through a combination of both.

LD AND SELECTION
On the other hand, confounding by LD can be useful for association studies (gain of statistical power in bivariate models), because genes that have similar and/or coordinated functions tend to be clustered in the genome of eukaryotes. 21 In the obesity and UL example, it is tempting to think that the atypical association between obesity and fibroids may actually reflect evolutionary selective constraints on 'thrifty' obesity- and reproduction-associated genes that have evolved different mutation patterns in the history of human populations. The selective advantages of the overweight-to-obese traits have amply been discussed in the past; 22 here I propose a thrifty phenotype model for the high incidence of UL in women. Reproductive suppression as an adaptive response to low energy availability 23 would be an attractive evolutionary model for UL. With reproduction and metabolism (fat storage) representing the most selectively constrained traits and functions, the occurrence and conservation of LD between their genetic determinants in populations are likely. In this line of thought, a large-scale study (CARe, the National Heart, Lung, and Blood Institute Candidate Gene Association Resource) of ultraconserved polymorphisms in the human genome, which are believed to affect reproductive (age at natural menopause, number of children, age at first child and age at last child) and overall (longevity, BMI and height) fitness, 24,25 has shown an excess of associations with BMI. 26 Interestingly, the most strongly associated SNP, rs10818872, occurred in DENND1A, a locus previously shown to be associated with polycystic ovary syndrome and detected by a SNP in high LD (r² = 0.826) with rs10818872. 27 However, independently of whether the atypical association of BMI and UL is real or artifactual, and of whether it is related to the candidate 1q43 region, confounding by LD should be real and a valid notion potentially explaining the growing number of reports on shared genetic polymorphisms between obesity- and reproduction-related traits [28][29][30][31][32][33] including those implicating chromosome 1q43. 9,18 Similarly, follow-up of genome-wide linkage and admixture signals for UL in the NIEHS cohort pointed to several known obesity-related genes (Aissani et al., unpublished observation).

CONFOUNDING BY GPD
The occurrence of GPD may reflect demographic patterns or can be the result of evolutionary processes that favored interactions between genes (epistasis) contributing to the expression of specific traits in populations under specific environments.
Similarly, GPD can confound association tests and may account for a number of the claimed associations with common diseases (for example, 349 hits with a genome-wide distribution were found in a query of the NCBI Map Viewer database with the keyword 'obesity').

(Figure 2: pairwise LD among the 1q43 SNPs of interest, organized in four haplotype blocks, shown separately for UL cases and controls.)

Studies exploring the genetic basis of correlated traits and clinical phenotypes in relationship to coordinated epistatic gene expression are still in their infancy, and new integrated approaches to tackle this complexity are needed. The existence of trans-regulation of expression phenotypes at the genome level 34 further suggests that GPD may underlie the correlation among human traits through coordinated gene expression. In this context, the emerging field of phenomics and its combination with genome-wide association studies in the so-called phenome-wide association studies is an important advance in the design of the next generation of genetic association studies of common diseases. 35 Furthermore, the development of resources such as Population Architecture using Genomics and Epidemiology (National Human Genome Research Institute) to characterize well-replicated genome-wide association study variants in relationship to many traits and disease phenotypes 36 will provide opportunities to test new hypotheses on the genetic basis of correlated traits and comorbidities.

CONFOUNDING BY PLEIOTROPY
The other source of genetic confounding is obviously pleiotropy (one gene or one mutation affecting more than one trait or phenotype), a concept known for exactly a century, 37 but the interference of this phenomenon with genetic association studies has not been fully investigated. Empirical data, essentially from model organisms, indicate that pleiotropy is a common phenomenon. 38 At least in yeast, pleiotropy is believed to be mostly of type 2, that is, a single molecular function resulting in multiple effects (as opposed to type 1 pleiotropy, which is one gene, multiple functions). 39 As outlined in the UL example, it is not yet clear whether the UL gene on 1q43 is pleiotropic (an obesity gene affecting UL through a change in hormonal milieu), a hypothesis supported by the linkage of the FH region to the level of circulating sex hormone-binding globulin in the HERITAGE study, or a UL gene confounded by a linked obesity gene. In either scenario, FH remains a strong candidate pleiotropic gene. Indeed, a model has been proposed whereby a single inactivating mutation can affect distinct functions encoded by FH, 9 with cytosolic FH possibly acting in DNA repair activity 40 and mitochondrial FH in metabolism. Models that analyze the effects of genetic variation on combined outcomes (bivariate or multivariate analyses) theoretically should increase the power to capture the underlying pleiotropic loci.
CONFOUNDING BY CYTONUCLEAR LD
Confounding by yet another type of LD, cytonuclear LD (LD between DNA variants present in organelles and in the nucleus), is more challenging and remains largely unexplored. A priori, any positive associations with nuclear variants may actually be proxies for true causal mitochondrial variants, and vice versa. Confounding by cytonuclear LD can be more subtle, given that as many as 966 or more nuclear-encoded factors are involved in the maintenance and function of the mitochondrion. Thus, mitochondrial diseases can have diverse modes of inheritance (maternal, Mendelian and a combination of the two) with variable expression of their phenotypes, owing to the stochastic and quantitative nature of inheritance of heteroplasmic mutations. 41 As a result, current knowledge and methodological approaches to evaluate the joint effects of co-inherited mitochondrial and nuclear variants on human diseases are limited. For conditions such as cancer, infection and immune-inflammatory diseases, the interaction effects of specific nuclear and apoptosis-related mitochondrial variants are likely to be important determinants of disease risk or progression. One possible way to control for the effects of mitochondrial variants in genome-wide association studies is to type diagnostic mitochondrial polymorphisms so that mitochondrial haplogroup membership can be inferred and used as a covariate. 42 The interplay between the nuclear and the mitochondrial genomes is likely to be central in the regulation of energy homeostasis, and perturbations in the coordinated expression of the underlying nuclear and mitochondrial genes may lead to poor cellular performance and disease. Conceivably, recent population admixtures may increase the risk for certain conditions that arise from selectively constrained human traits, such as obesity, reproduction and resistance to infection, that had reached genetic fitness in the parental populations of recently admixed populations.

CONCLUSION
The correlations among traits, and consequently among clinical phenotypes, have undoubtedly clouded the interpretation of much genetic association data. Beyond the initial quest to highlight aspects of LD that may affect the interpretation of association studies, I extended the reasoning to convey new thoughts on the possible existence of thrifty phenotypes to account for the high incidence of uterine fibroids, and to some extent of reproductive cancer in general, in human populations. Confounding by LD or GPD is a novel concept that may in part explain failures to replicate genetic association findings in studies that incorporate covariates differentially correlated in different populations. Finally, while perplexing but legitimate questions have been raised in this paper about the meaning of genome-phenome associations in the context of dynamic genomes, coordinated gene expression and correlated traits and diseases, a framework of thought for genetically and evolutionarily determined 'thrifty correlated phenotypes' is suggested to account for the high incidence of uterine fibroids in the contemporary obesogenic environment.

CONFLICT OF INTEREST
The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Some Principles of Formal Variation in the Kolintang Music of the Maranao

The two principal groups of Moslems living on the island of Mindanao in the Southern Philippines are the Maranao and their neighbors, the Maguindanao. Although each of these groups regards itself as distinct from its neighbors, their languages are mutually intelligible and they share many common cultural traits. Islam has strongly imbued the daily life of both groups, and among them the numerous Hadji (persons who have made the long pilgrimage to Mecca) are highly honored. The Maranao live in the northwestern part of Mindanao around the large Lake Lanao, from which they take their name, Maranao, "people of the lake." They number slightly more than the Maguindanao, 190,000 according to the 1943 census, and are very proud of the large Mindanao State University in their capital city, Marawi, which is designed to serve Mindanao, the Sulus, and Palawan.

but also employing the end-blown flute, the Insi, the Kobing, and the Sirongaganding, a bamboo gong; the processional music (Tagongko) played by a pair of large cymbals, Pandaopan, two large European-type street drums, Tambor, and two small gongs, Pong (Fig. 3); Kagandang, the singing of heroic epics to the accompaniment of a pair of Gandangan, double-headed drums; and several forms of religious as well as secular solo vocal music.

The most frequently heard music among both the Maranao and Magindanaon peoples is that of the Kolintang ensemble. This group of instruments takes its name from the melody-playing instrument of the ensemble, a row of usually bronze gongs over a simple trough resonator in a frame that is often elaborately decorated. Ensembles of this type are found throughout the Moslem Philippines, that is, not only among the Maranao and Magindanaon but also among the various ethnic groups living in the Sulu Archipelago. Beyond the Philippines, this type of ensemble is found in northern Borneo, where the name Kolintang is also employed, and further east in many of the small islands of the Moluccas of Indonesia. All of these ensembles have in common a row of melody-playing gongs whose number varies greatly and which is usually supported in performance by drums and larger gongs which supply rhythmic variation and structural emphasis. The structure of this ensemble reaches great subtlety and complexity among the Maranao.

The Kolintang kettles themselves are frequently cast by the cire perdue method at the village of Togaya on the coast of Lake Lanao, only a few miles from Marawi City. The alloy used in these gongs as well as the tuning may vary considerably from one set to another. Although the tuning of the Kolintang row varies greatly throughout the Moslem Philippines, there is often a similar preferred pattern of large and small intervals that results in some uniformity of contour when the same melody is heard on differently tuned sets. Unfortunately, not enough source material on the Kolintang tunings is available to allow any definitive statement to be made concerning the Maranao tuning at the present time. José Maceda points out, however, that several Magindanaon musicians were fascinated by a toy piano that he had in his house, and by trial and error one of them came up with a scale that corresponded to a Pelog type and proceeded to play several Magindanaon tunes in this tuning.2 Similarly, Mr.
Usopay Cadar has a tape recording of a Maranao street musician who plays Kolintang music on the harmonica and who, likewise, has hit upon a Pelog type of scale as most satisfactory.

The Maranao people use Kolintang music frequently, and any sizable gathering of people can become a Kalilang, an occasion for merrymaking, and consequently an occasion for music. The Maranao recognize two distinct kinds of gathering at which Kolintang music is employed. A formal affair is called Kapmasa-ala Ko Lima-Ka-Daradiat (Masa-ala = formal puzzle/gathering, Lima-Ka-Daradiat = a set of five proposals/instruments or players), and includes the recitation of lyric poetry, singing, and dancing (Fig. 4). A more informal gathering is called Kap'pakaradia-an (Pakaradia-an = merry-making). The formal affairs occur, first, in connection with marriage ceremonies: at the formal marriage proposal and at the occasions during which transactions are made concerning the exchange of gifts by both parties before and after the wedding; second, at the elaborate parties held in honor of the pilgrims returning from Mecca; and third, during the celebration which follows the transferral of the Sultanate from one family to another.

The number of Kolintang sets in the Maranao region is great. In the village of Taraka, which may have about 20,000 people, perhaps one family in three owns a Kolintang set. Since the occasions requiring the use of Kolintang are frequent, many people endeavor to obtain their own instruments rather than being forced to borrow them from neighbors.

In orthodox Kolintang performances, two large bossed gongs called Agong must be included in the ensemble. These gongs measure about 22 inches in diameter with a flange of about 10 inches in width. These large gongs are often cast in Borneo, and a particularly fine one may be quite expensive. The Agong are struck on the boss with large padded mallets and are usually dampened with the player's other hand or with the mallets themselves. The two Agong are suspended by ropes from a tree limb, the ceiling, or a wooden frame, and the two gongs always play in interlocking pairs. One part plays on the main beats and is called P'nanggisa-an (simple rhythm); the second gong, which is usually higher in pitch, plays off the beat and is called P'malsan (from P'mals, meaning "to pronounce"). In performance both players exhibit a spirit of friendly rivalry, trying to improvise variations without destroying the basic structure of the pattern. (Musical example: the interlocking Pemalsan and Penanggisa-an patterns.)

A small hanging gong, called Babndir (probably taking its name from the Arabic drum Bendir), about 10 inches in diameter, is beaten with one or two short unpadded sticks either on the rim or on the face of the gong. The Babndir plays a steady stream of rhythmic variations which is free to follow the contours of the Kolintang part, the drum pattern, or what is being played on the two Agong. A large single-headed drum, called Dadabuan or Dbakan, and played with two long rattan sticks, has perhaps the greatest scope for improvisation in the ensemble. The name of the instrument is based on the word Dbak, which may in turn derive from one of the Arabic names for the vase-shaped drum, Tombak or Dombak. This instrument, almost always played by a man, most often begins the performance with a series of dramatic strokes followed by the steady ostinato of the player's own preferred pattern. The number of possible variations is great, yet once the ostinato pattern begins, the Agong and Babndir players are expected to join in quickly.
These instruments may play along together for some time before the Kolintang itself joins in. Among the Maranao, the Kolintang is almost always played by a female. It may require a certain amount of persistent persuasion to get her to agree to play. Once she agrees, she may walk gracefully to the Kolintang, seat herself before the instrument, and play a short pattern in free rhythm called Ka-anon, designed to allow the player to be certain that the gongs have not been rearranged for some other composition. Then she will casually arrange the folds of the Malong, the Maranao skirt-like garment equivalent to a sarong. Only after all this will she take general notice of the fact that the other instruments have been playing a kind of music-minus-one in expectation of the Kolintang. The Kolintang then begins to play a series of single strokes synchronizing with the rhythm of the other instruments. These strokes are played with the right hand, usually on the third kettle or, less frequently, on the sixth kettle. These single strokes are continued until the Kolintang player is satisfied that the rhythm is well established, she being free to speed up or slow down the tempo according to her personal taste and to suit the composition which she intends to play. During this portion of the performance only the Kolintang player has any idea which piece will be performed. After all the instruments of the ensemble have been stabilized, the soloist may begin the first pattern of the composition proper. In certain compositions this may lead directly to the first part of the melody, yet in other pieces, notably the more complex ones like Kapagonor or Kapromayas, there is a kind of introductory pattern which is part of the composition but which can be repeated until the player feels ready to go on. The performance then continues with the Dadabuan, Agong, and Babndir players continuing to improvise on their basic patterns, frequently attempting to work some element of the Kolintang part into their own variations. The Kolintang also improvises, but within somewhat differently imposed limits.

In order to look more closely at the structure of the Kolintang part it will be necessary first to consider something of the nature of the compositions in the repertoire. Among the Magindanaon people, recreational performances of Kolintang music must consist of the performance of three compositions, or rhythmic/melodic patterns which are the basis of improvisation. These three compositions are Duyog, Sinulog, and Tidtu, and they must be performed in this order. The three pieces can be played in either the old sedate Danden style or in the more modern and lively Binalig style.3 Among the Maranao no such formal order exists in performance. However, there are three compositions which are considered more difficult and which also allow the performer greater scope for variations. These three compositions are Kapromayas or Romayas, Kapagonor or Onor, and Katitik Pandai, also called Kapaginandang. At least one of these compositions is usually played at every Kolintang performance, formal or informal. Although the Maranao traditionally recognize no such division, for purposes of discussion here the entire body of Kolintang music may be thought of as falling into three generic types: one, pieces which originated as songs; two, abstract compositions; and three, compositions which attempt to imitate extramusical sounds or effects. Before concentrating on the abstract type of composition, we may give a few examples of the other types.
The composition Kapmamayog is based on a song called Mamayog, in which a young girl chides her young man (whose name is Mamayog) about the direction in which he is traveling, suggesting that perhaps there is some other girl he plans to visit. Compositions of both these types are usually very simple in structure, most often consisting of the main motive, its variation, and its restatement at one or sometimes two higher positions, and often a second motive. Each of these elements can be repeated by the player at will before contrasting it with another element. Let us look once again at the composition Kapmamayog. The Kolintang version of the song is based on three sections which must be played in the prescribed order. Sections two and three are, in fact, direct variations of section one. Because of the great variation in tuning from one Kolintang set to another, all further transcriptions will be given in cipher notation, with the gongs indicated by the numbers 1 to 8, from low to high. A comparative cents table of some Kolintang tunings is given at the end of this article. In these transcriptions each cipher is equal to one beat. Ciphers appearing close together and underlined are given half beats. Rests are indicated by a dash in place of the cipher. (A minimal sketch of reading this notation follows the numbered list below.) The transcriptions are basically for the right hand, which plays the melody. The left hand most frequently plays on one gong, usually gong 2 or 3, and plays double notes during the rests of the right hand. The left hand is free and more subject to individual and personal interpretation than the right.

Section IIB then leads to IIA, which is a crucial figure in the structure of the piece. After playing it, the Kolintang player may go on, or return to IIA with the aid of a transitional figure. Also, after the completion of both sections III and IV, it is to IIA that the player returns. It is also after IIIA that the cadential pattern can be introduced. Throughout the Maranao repertoire several Agong patterns are employed.

4. Kanditagaonan is about a lover named Ditagaonan. In the story the woman reminds him of their relationship. Another version is a children's song: "My friend, Ditagaonan/Let us plant sweet potato(es) today/And then harvest it tomorrow/And then cook it the day after/To feed all the masses."
5. Kambongbong is based on a lullaby with an onomatopoetic text. Another version is also a children's song: "Bong, bong, Javanese gong/Play the big (Javanese) gong/So that it be heard/By the king's men/Who will help cut bamboos/And fell timbers/To build a palace/For the king and the queen."
6. Kap'panok, from papanok, meaning "bird." It is a song about a bird which is associated with a lover: "If only I were a bird/I would fly around/Surf the prevailing winds/To land wherever/You may be."
7. Kasirong, from sirong, meaning "to take shelter." This piece is a satire against a rich family (represented by a big tree or shelter) who preys on the poor.
8. Kandayo-dayo, from dayo, meaning "a friend." A song about a friend who is far away.
9. Kasulotan is a song about a certain sultan who went to Manila to seek a government job. He promises not to come home unless appointed to the position. But his lover promises, too, that unless he comes home soon she will have him replaced by a new candidate before long.
10. Kalabo-labo refers to a praying mantis; the song describes the funny shape of the insect and includes some meaningless vocables.
11. Kapagilala is a song asking the people to carry out their Islamic duties while there is peace and they have time and physical strength.
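As an aside, the cipher-notation conventions described before this list are simple enough to mechanise, as in the Python sketch below. The ASCII rendering is an ad hoc convention invented here for illustration (single digits 1-8 for one-beat strokes, '-' for a one-beat rest, and a parenthesised pair such as '(45)' standing in for two underlined ciphers sharing one beat), since the underlining of the printed transcriptions cannot be reproduced in plain text.

import re

def parse_cipher(line: str):
    """Read a cipher-notation line into (gong_number_or_None, beats) events."""
    events = []
    for token in re.findall(r"\(\d\d\)|\d|-", line.replace(" ", "")):
        if token.startswith("("):                # underlined pair: half beats
            events += [(int(c), 0.5) for c in token[1:-1]]
        elif token == "-":                       # rest in place of a cipher
            events.append((None, 1.0))
        else:                                    # ordinary one-beat stroke
            events.append((int(token), 1.0))
    return events

# e.g. parse_cipher("3 3 (45) 3 -") yields
# [(3, 1.0), (3, 1.0), (4, 0.5), (5, 0.5), (3, 1.0), (None, 1.0)]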
Ethnic group affiliation and second/foreign language accentedness in English and Mandarin among Hong Kong speakers

ABSTRACT
As part of a larger project that investigates the issue of identities in Hong Kong, this study anchored on the sociocognitive paradigm in second language acquisition (SLA) explores the potential relationship between one's identity and perceived language accentedness. Our study set in Hong Kong (HK) aims to extend Gatbonton and colleagues' works (e.g. [2005]. Learners' ethnic group loyalty and L2 pronunciation accuracy: A sociolinguistic investigation. TESOL Quarterly, 39(3), 489–511. https://doi.org/10.2307/3588491; [2008]. The ethnic group affiliation and L2 proficiency link: Empirical evidence. Language Awareness, 17(3), 229–248. https://doi.org/10.1080/09658410802146867; [2011]. Ethnic group affiliation and patterns of development of a phonological variable. Modern Language Journal, 95(2), 188–204. https://doi.org/10.1111/j.1540-4781.2011.01177.x) that examine the relationship between ethnic group affiliation (EGA) and language proficiencies in diglossic contexts. HK is a multi-glossic context where Cantonese, English and Mandarin are the official languages, and they perform distinctive functions in various public and private domains. Through analysing participants' (n = 65; born between the 1970s and 1990s) self-identification and their reported accentedness in English and Mandarin, we address the question of whether EGA as a set of social factors has a bearing on a person's linguistic achievements. Findings indicate that participants' identification with the Chinese/HK identity is related to their perceived accentedness in the targeted languages in intricate ways that do not align completely with our predictions. We conclude by calling for further socio-cognitively informed research that investigates multiglossic situations where languages/language varieties complement or compete with each other.

Introduction
The studies of second language acquisition (SLA) have evolved considerably over time. The focus on cognitive and mentalistic factors (i.e. learner-internal factors such as age, aptitude, etc.) in some earlier work has fuelled research that aims to better understand the nature of linguistic competence among second/foreign language learners. However, researchers such as Firth and Wagner (1997, 2007) have argued that the heavy focus on such factors at the expense of understanding the social, interactional and contextual elements of language use has led to a skewed perspective of the field. As they see interactions as the site that engenders language development, Firth and Wagner contend that 'the dichotomy of language use and acquisition cannot defensibly be maintained' (2007, p.
800). They urged further explorations of the interrelationships between language use, language learning, and language acquisition in order to reach a more comprehensive understanding of the field. Such arguments have contributed to, or even led to, the 'social turn' in the field of SLA. Though it should also be noted that social factors and the notion of context have featured in some models and theorisations of SLA which predate Firth and Wagner's widely cited paper of 1997. They include Spolsky's general model of second language learning (1989) and Schumann's acculturation model (1978). The language socialisation paradigm (Duff & Talmy, 2011), the socio-educational model (Gardner, 2006) and the socio-cultural model (Lantolf, 2011) are other examples which place a strong emphasis on the social elements.

With the social turn (Block, 2003, 2007) comes increasing attention towards the role that social factors, and social context more generally, play in the process of SLA (e.g. Leung & Young-Scholten, 2013). Most works in this paradigm, which often have a socio-cultural orientation, focus on analysing learners' socio-linguistic experience and how learners engage and invest in the process of learning (see Darvin & Norton, 2015). 1 However, fewer studies have investigated how these social circumstances might influence linguistic outcome. One could argue that the pendulum has once again swung to the extreme, where this time investigations and interests in 'the social' came at the expense of concerns over linguistic/cognitive factors and outcome. The unfortunate state of affairs where the cognitive/linguistic and the social paradigms have remained entrenched and separated can perhaps be attributed to the perceived ontological and epistemological 'dichotomy' that researchers from the two paradigms hold (cf. Leung & Young-Scholten, 2013; Hulstijn et al., 2014). But to be fair, the linguistic/cognitive factors are not the prime concerns of researchers who have a more social orientation (and vice versa); researchers in the social paradigm are often interested in documenting the complexity of experience and learners' ability to negotiate in complex social settings, as opposed to learners' ability to supply the correct verbal inflections, for instance.

Yet recent research in the 'socio-cognitive' paradigm allows us to see the potential complementarity and synergy of the two seemingly distinct and incommensurable perspectives. In addition to the traditional focus on learner-internal factors, the growing body of research under the socio-cognitive perspective has begun to consider how social factors such as attitude and identity mediate the linguistic experience of language learners, which may in turn affect acquisition outcome (see Batstone, 2010; Hulk & Marinis, 2011; Moyer, 2013). For example, a stronger affiliation to the target language group may enhance the extent and diversity of language exposure; the enhanced input, in terms of both quantity and quality, in turn creates a richer or more favourable environment for linguistic development to take place. In our view, this socio-cognitive perspective aligns well with the recent and renewed interest in transdisciplinarity and the exploration of the interplay among the micro, meso, and macro contexts and factors that the Douglas Fir Group have drawn our attention to (2016, 2019).

One realisation of such a socio-cognitive perspective is the work by Gatbonton and her colleagues (e.g.
2005, 2008) which examines the relationship between people's ethnic group affiliation (EGA) and their (self-perceived) language proficiencies in Quebec, where French and English co-exist and function alongside one another (see also Trofimovich et al., 2013 and Trofimovich & Turuševa, 2015 for the Latvian context in relation to Russian and Latvian). The present study models after this line of research. Set in Hong Kong, where Cantonese, English and Mandarin function alongside one another as official languages, our research study aims to investigate speakers' EGA and their self-perceived accentedness in English and Mandarin. It is hoped that this study will add to current understanding of the relationship between social factors, more specifically EGA, and SLA/foreign language learning (FLL) from an Asian perspective.

The remainder of the article first reviews some EGA studies. It then proceeds to provide some background information about the research setting. This is followed by the study design and findings from the present study. We conclude by calling for further investigations into contexts where different languages or language varieties function in a diglossic or multi-glossic manner.

Ethnic group affiliation (EGA) and second/foreign language learning
Although Gatbonton and her colleagues initially made no explicit reference to the sociocognitive orientation, their series of studies which explore the relationship between EGA, defined as 'one's sense of belonging to a primary ethnic group' (Gatbonton et al., 2005, p. 489), and second/foreign language proficiency can be viewed in such a light. EGA is constituted by a set of social factors, e.g. depth of involvement in the ethnic group; pride in, familiarity with, and feelings of comfort with the group; perception of the place of the ethnic group in relation to other groups; perceptions of the group's vitality; and views towards the socio-political concerns of the group. Hence studies that aim to explore the relationship between EGA and linguistic outcomes, often operationalised as global proficiency, intelligibility, or accent in the target language (see below), can be considered to fall under the sociocognitive framework.

In a series of pioneering studies, the potential relationship between EGA and (self-perceived) language proficiency and political affiliation in the context of Quebec was explored. For example, Gatbonton et al.'s (2005) seminal study on the topic investigated Francophone (n = 24) and Chinese (n = 84) L2 English learners' perception of L2 English in Quebec. They found a tension between perceived efficiency/proficiency in the L2 and perceived affiliation to the ethnic group. Speakers who spoke a 'less-accented' form of L2 English were perceived to be less affiliated by listeners of their L1 group. However, at the same time, listeners preferred to have less accented speakers as political leaders, hinting at the importance of 'effectiveness' (intelligibility) in speech.
The relationship between EGA and L2 proficiency is examined more directly in Gatbonton and Trofimovich (2008). Fifty-nine adult French-English bilinguals from Quebec who participated in the study read an English text and completed an EGA questionnaire which assesses pride in, loyalty to, and support for their ethnic group and its language. Results revealed that EGA is related to certain aspects of accent in a complex way. Both positive and negative links exist between EGA and L2 English speaking ability (self-rated and judged by natives). Those who strongly supported Quebec's independence sounded more accented, less comprehensible, less fluent and less proficient overall, while those who had a double positive orientation (towards their own group and the L2 group) were judged to be the most proficient. The observed effect is mediated by language use. Doucerain's (2019) exploration in the Quebec context further demonstrated that among recent multicultural immigrant participants, L2 experience, in terms of L2 use and L2 social contact (i.e. friendships in the 'mainstream' group), mediates their positive cultural orientation and self-assessed L2 competence.

Apart from global measures of proficiency, the relationship between EGA and a specific linguistic element has also been studied. Gatbonton, Trofimovich, and Segalowitz's study (2011), for instance, investigated the production of the voiced interdental fricative, /ð/, by 45 Francophone L2 English learners. EGA was again found to play a role in participants' L2 English. Findings suggested that the stronger the Francophone EGA participants held, the less native-like their L2 pronunciation accuracy was, as evaluated by native speaker judges.

EGA research was first extended to a research setting outside Quebec by Trofimovich et al. (2013) and Trofimovich and Turuševa (2015). They examined the relationship between ethnic group identity and L2 speaking ability of ethnically Russian and Latvian speakers in Southern Latvia. Similar to the context of Quebec, where the majority French-speaking community co-exists alongside the minority English-speaking group, in Southern Latvia the majority ethnic Latvian population co-exists with the minority ethnic Russian group. Their results indicated that for Latvians (n = 82), the stronger their EGA, the lower they rated their own L2 Russian ability. However, no such effect was observed for L2 Latvian among ethnic Russian speakers (n = 26), which suggests that their ethnic identification is unrelated to their perceived Latvian ability. It is also interesting to note that Trofimovich and Turuševa (2015) have articulated their sociocognitive orientation more explicitly when they set out to 'integrate evidence from social psychology and applied linguistics, by focusing on the identity-language link from the perspective most relevant to second language (L2) development, namely, by considering how ethnic identity might be implicated in L2 learning' (p. 234).
More recently, related studies set in other contexts have emerged. They include Peng and Patterson's (2021) exploration of international students' cultural identification and their L2 English proficiency in the US higher education context, Gu et al.'s (2023) study of ethnic minority immigrant parents in Hong Kong, and Banyanga et al.'s (2018) investigation of Finnish-Swedish high-schoolers in Finland. Peng and Patterson (2021) found a negative relationship between their 77 participants' ethnic identification and self-perceived English proficiency, which is mediated by motivation in learning English. They also demonstrated that 'American identification promoted English proficiency through motivation in language learning' (p. 67). Gu et al. (2023) found that their 655 ethnic minority participants' cultural knowledge was linked to better spoken Cantonese, while greater cultural identification with their own ethnic community was linked to better English-speaking skills. They also observed a difference between males and females, with females reporting inferior Cantonese- and English-speaking proficiencies. Banyanga et al. (2018) showed that language and Finland-Swedish culture are important to their 1012 ninth-graders' self-identity.

By and large, this body of work has demonstrated the relationship between EGA and language proficiency and/or language learning, though that relationship is often complex and multifaceted (but see also Tekin, 2019, where no correlations were found between EGA and ratings of comprehensibility, intelligibility, accentedness, and acceptability of L2 English speakers in the US from a range of L1 backgrounds). However, much of the existing literature, as reviewed above, focused on contexts with two languages at play; less is known about settings where more than two languages (or language varieties) function alongside each other. Our study aims to extend this body of work and further enhance our understanding of the relationship between EGA and language learning/accentedness by considering Hong Kong, a context where three main linguistic codes, namely Cantonese, English and Mandarin, exist and function in the society.
Socio-political and linguistic background of Hong Kong
Shaped by its past as a British colony, HK people claim a dual/mixed identity, seeing themselves as both 'Hongkongers' and 'Chinese' (Brewer, 1999; Ma & Fung, 2007). This allows them to claim their ethnic heritage with the wider Chinese population, but also to differentiate themselves from mainland Chinese outside HK, prizing their cosmopolitanism and command of the English language (Chan, 2002; Joseph, 2004). With tourism liberalisation measures easing travel restrictions between mainland China and HK, and the Closer Economic Partnership Arrangement since the sovereignty was returned to China in 1997, there has been an increase in contact between residents of the two places. These have significant ongoing effects and repercussions on the political, cultural, and socio-economic dimensions of people's lives in HK, bringing the notion of identity to the fore. The complexity in identity and identification has once again come under the spotlight during recent tension between part of the population and the local and central Chinese governments. The socio-political volatility has arguably led to a new wave of emigration and a renewed attempt by the HK government to attract talent both from mainland China and abroad. For instance, Ng (2022) reported that Hong Kong's Chief Executive John Lee in July 2022 announced government plans to set aside 30 billion HK dollars to launch the 'Top Talent Pass Scheme' to entice talent outside HK to pursue their careers in the city. All of these factors contribute to the complexity in (and perhaps impossibility of) pinning down the notions of identity and affiliation in the context of Hong Kong (see Leung & Lee, 2016).

The language environment in Hong Kong is also rich and intricate, with the vivid presence of Cantonese, English, and Mandarin, which function as the official languages in various domains of public and private life. We can classify the linguistic context in HK as diglossic/multiglossic, where two or more varieties/languages are used for complementary purposes in the same society (Ferguson, 1959; Jaspers, 2017). Cantonese is the mother tongue and home language of around 96% of the population (Census and Statistics Department HK, 2022; Evans, 2013). It is also the medium of instruction in Chinese as a Medium of Instruction (CMI) primary and secondary schools. English is a second language and remains important in the city (Bolton et al., 2020), partly due to its status as a lingua franca for global as well as local communications (cf. Sung, 2023). English is the medium of instruction in English as a Medium of Instruction (EMI) primary and secondary education, as well as the medium of instruction in many tertiary education programmes. Mandarin, meanwhile, is the national language and has seen an increasing presence, especially in state media. The three main/official languages are accompanied by the languages of the ethnic minoritised groups in the population, such as Tagalog and Indonesian, the languages of the foreign domestic helpers from the Philippines and Indonesia, as well as other Chinese 'dialects' such as Hakka, Fukien and Chiu Chau (Census and Statistics Department, Hong Kong, 2022).
Language planning and associated language policies in postcolonial HK have undergone multiple rounds of fine-tuning and modification (Chan, 2014). This includes the proposed adoption of a compulsory mother-tongue policy at junior secondary level soon after the return of sovereignty to China in 1997. The policy was subsequently 'fine-tuned', partly due to outcry from parents who wished for their children to continue to be educated in English. In some primary and secondary schools, Mandarin has replaced Cantonese as the medium of instruction in Chinese subjects (Evans, 2013). Although the three languages co-exist and perform distinct functions in a largely multiglossic manner, e.g. Cantonese for everyday communications, English for education, Mandarin for state ceremonies, some have discussed the potential tension among them and even the threat that one language poses to another. For example, the vitality of Cantonese might be under threat due to the introduction of Mandarin as a medium of instruction (Li, 2018). The nature of this article precludes an extended discussion of the linguistic situation of HK, but readers interested in the topic can refer to Ng and Cavallaro (2019) for a recent overview.

It is against this backdrop that we collected data to examine how native Cantonese-speaking informants identify themselves in such a complex geo-socio-political and linguistic environment, and whether or not their identification is in any way related to their perceived accentedness in the languages that represent the past coloniser, English, and the current governor, Mandarin.

Study design
Data were collected from 65 native Cantonese-speaking informants (born in HK) spanning the birth years 1970-1979 (n = 22), 1980-1989 (n = 21), and 1990-1999 (n = 22). 2 This grouping was designed to investigate potential generational differences among a population who have experienced British colonialism and post-colonialism in HK. We adopted this grouping/categorisation initially as it corresponds to local scholars' classification of the HK population (Lui, 2007, in Cheung, 2014). That, to us, signals a degree of ecological validity, as these categorisations fall within the repertoire of the general public, commonly circulated and utilised in popular discourse in the current research context. Yet we also wish to point out that categorising participants according to these age groups, which are hard-and-fast, pre-existing categories, can be problematic, as we subscribe to the view that identities and identification are complex, fluid and multidimensional constructs (Norton, 2013; Trofimovich & Turuševa, 2015). In fact, as shall be seen below, participants in the various age groups do not actually differ significantly in their EGA scores. Participants, all born in HK with Cantonese as their mother tongue, were also asked to classify themselves into one of the following: Hong Konger (n = 28), Hong Kong Chinese (n = 4), Chinese Hong Konger (n = 18), Chinese (n = 13) or Others (n = 2). 3

To tap into their sense of identification, we adopted the EGA questionnaire, which was developed to examine how a varying degree of ethnic affiliation and identification in the Quebec context would affect learners' level of attainment in L2 French/English. Researchers were able to establish that EGA scores, or a general sense of belonging to/affiliation with the target language and its community, positively correlate, to an extent, with how proficient a person is in the L2, as well as how accented they are deemed to be, e.g.
the more they identify with the French ethnic group, the less accented they sound in L2 French (see the literature review above). The EGA questionnaire, adopted from Gatbonton et al. (2014), contains 93 nine-point Likert scale self-rating statements which pertain to five main themes (see 1-5 below) alongside some sub-themes.

(1) depth of involvement in the ethnic group;
(2) pride in, familiarity with, and feelings of comfort with the group;
(3) perception of the place of the ethnic group in relation to other groups;
(4) perceptions of the group's vitality; and
(5) views towards the socio-political concerns of the group.

The higher the rating, the more strongly a participant identifies with the theme in question. To explore the potential relationship between EGA scores and participants' self-reported proficiencies in English and Mandarin, which are operationalised as self-rated accentedness in this study (9-point scale: 1 = heavily accented, 9 = no accent at all), we conducted correlation analyses according to the self-categorisation groupings reported above. It should be noted that we had considered obtaining participants' scores in standardised language tests such as the International English Language Testing System (IELTS) and the Hanyu Shuiping Kaoshi (HSK, the Chinese Proficiency Test); however, we were unable to collect such data, especially from those in the oldest group, since such tests were neither common nor compulsory at the time at which they completed their formal education. We have, therefore, followed previous studies in using self-ratings as a proxy for linguistic proficiencies (e.g. Gatbonton et al., 2005; Gatbonton & Trofimovich, 2008; Trofimovich et al., 2013). Informed by the findings from previous EGA studies, it is predicted that (a) participants who identify more with the Chinese identity will consider themselves to be less accented in Mandarin, which is the national language, and (b) those who identify more with the Hong Kong identity will consider themselves to be more accented in Mandarin (due to the current antagonistic relationship between some in Hong Kong and China). Further to that, based on the assumption that English is seen as an integral part of HK identity (see Hansen-Edwards, 2015 for a discussion), it is anticipated that (c) informants who identify more with the HK identity will also see themselves as less accented in English; on the flip side, (d) those who identify more with the Chinese identity will consider themselves to be more accented in English, the language of the past coloniser, which can be seen as an oppressing language that conflicts with or even undermines the development of local/ethnic identity (cf. Le Page & Tabouret-Keller, 1985).
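For concreteness, a minimal Python sketch of the analyses just described (and reported in the findings below) is given here; the data frame layout and column names (birth_decade, self_label, ega_mean, ega_theme1 to ega_theme5, accent_english, accent_mandarin) are hypothetical placeholders and not the study's actual variables.

import pandas as pd
from scipy import stats

def run_analyses(df: pd.DataFrame) -> None:
    # 1. Kruskal-Wallis test for generational differences in overall EGA means.
    by_decade = [g["ega_mean"].to_numpy() for _, g in df.groupby("birth_decade")]
    h, p = stats.kruskal(*by_decade)
    print(f"Kruskal-Wallis across age groups: H = {h:.2f}, p = {p:.3f}")

    # 2. Collapse the self-labels into the two analysis groups used here.
    hk = {"Hong Konger", "Hong Kong Chinese"}
    cn = {"Chinese Hong Konger", "Chinese"}
    df = df[df["self_label"].isin(hk | cn)].copy()
    df["group"] = df["self_label"].map(lambda s: "HK" if s in hk else "Chinese")

    # 3. Pearson correlations between each EGA theme mean and self-rated
    #    accentedness (1 = heavily accented, 9 = no accent at all).
    for grp, sub in df.groupby("group"):
        for theme in [f"ega_theme{i}" for i in range(1, 6)]:
            for lang in ("accent_english", "accent_mandarin"):
                r, p = stats.pearsonr(sub[theme], sub[lang])
                print(f"{grp:7s} {theme} vs {lang}: r = {r:+.3f}, p = {p:.3f}")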
Findings
In order to explore the potential generational difference, we first conducted a Kruskal-Wallis test on the EGA means that participants assigned. This analysis revealed that the EGA ratings assigned by each age group were not significantly different from one another (p = 0.128). Hence data were then analysed solely according to how participants categorised themselves, i.e. Hong Konger, Hong Kong Chinese, Chinese Hong Konger, Chinese or Others. To enhance the statistical power, we opted to aggregate the data by collapsing sub-groups that logically belong to one another before conducting our inferential statistical analyses. Given the semantic prominence of 'Hong Kong' in the categories Hong Konger and Hong Kong Chinese, these were aggregated into one group, resulting in a 'Hong Kong' group (n = 32), while Chinese Hong Konger and Chinese were combined to form another, 'Chinese', group (n = 31) for analyses, as 'Chinese' appears to be more prominent in these latter two categories. 4 Participants who identified as 'Others' were excluded from further analyses.

Pearson correlations were employed to determine the potential relationship between participants' self-rated language accentedness scores and the mean EGA scores they assigned under each of the five themes. Below, we first describe the findings obtained for the HK group before moving on to the Chinese group.

The descriptive and correlation statistics for the HK group can be found in Table 1. Analyses indicate that aspects of the EGA scores are significantly correlated with participants' self-rated Mandarin accentedness. Specifically, theme 2, 'pride in, familiarity with, and feelings of comfort with the group', and theme 3, 'perception of the place of the ethnic group in relation to other groups', of the EGA scores negatively correlated with self-rated accent in Mandarin, with a medium correlation (r = -.401) and a small-medium correlation (r = -.375), respectively (see Plonsky & Oswald, 2014). Concurring with our prediction above (see (b)), the negative correlations suggest that participants in this HK group who had higher EGA ratings in the two themes consider themselves to be more accented in Mandarin (recall the scaling of the accentedness scores, where 1 = heavily accented and 9 = no accent at all). In other words, the more strongly participants feel pride, familiarity and comfort with their ethnic group (the Hong Kong identity), which relates to theme 2 in the EGA questionnaire, the more accented they see themselves in Mandarin. Moreover, participants in the HK group who have a stronger perception of the place of their ethnic group (i.e. the HK group), which relates to theme 3 in the EGA questionnaire, see themselves as more accented in Mandarin. These findings mirror those reviewed above from the Quebec context, to an extent, in that learners who had a stronger Francophone identification were judged to be more accented in L2 English (Gatbonton et al., 2011). Similarly, our findings also resonate with those found in the Latvian context, where Latvians' stronger identification with the Latvian identity was associated with weaker self-perceived L2 Russian proficiency (Trofimovich et al., 2013). Given the nature of the questions in these two themes, it is plausible that informants view Mandarin antagonistically, potentially treating it as a possible threat to their HK identity (cf. Evans, 2013 and Li, 2018). On the other hand, going against our prediction in (c), no correlations were uncovered between the EGA ratings of this group and the self-rating of their L2
English, which is perhaps surprising given the discussion in the background section about the importance of English and identity construction in HK. Though it should also be noted that a lack of observed relationships between EGA and language ability has indeed been reported in some contexts, as mentioned above (Tekin, 2019). We will explore this particular finding further below, once we have also presented the findings for the Chinese group.

Table 2 details the findings for the Chinese group. As seen, for this group, EGA scores did not seem to be associated with their self-rated Mandarin ability. This contradicts our prediction in (a), where we anticipated a positive link between the Chinese identity and proficiency in Mandarin, the national language. However, a significant positive correlation is found between EGA scores in theme 3 and English (r = .431; a medium correlation). This indicates that the more secure participants feel about the Chinese group (as theme 3 pertains to perception of the place of the ethnic group in relation to other groups), the less accented they consider themselves to be in English, which runs contrary to our prediction (see (d) above). One possibility is that this finding reflects participants' confidence in the position of the Chinese group in HK society, so much so that being proficient in English is not seen as a threat (cf. postcolonial contexts (Le Page & Tabouret-Keller, 1985)).

Discussion, conclusion and future directions
Broadly anchored on the sociocognitive paradigm (Batstone, 2010), we conducted our study in Hong Kong. Modelling after the series of investigations pertaining to EGA and language proficiencies by Gatbonton and her colleagues (e.g. 2005, 2008, 2011, 2014), we aimed to explore whether social factors, in particular ethnic group affiliations, are related to self-perceived language accentedness in English and Mandarin among three generations of Cantonese-speaking and HK-born participants in our research context. As one of the few studies that examine a multiglossic context where there is a complex interplay among the three official languages (i.e. Cantonese, English, and Mandarin), we hope to extend our understanding of the potential relevance of EGA to accentedness and language learning beyond what was previously established in the literature vis-à-vis binary, diglossic situations. By conducting correlation analyses, we were able to identify potential relationships between aspects of the EGA and informants' self-reported accentedness ratings in the two target languages. How participants identify themselves and their EGA scores seem to be related to how proficient they perceive themselves to be in the languages targeted in this study, at least to an extent. However, the nature of the relationship between the two constructs is more nuanced than predicted, especially in comparison to some previous studies which were conducted in mostly binary contexts where two languages exist and function in the societies (e.g. French-English; Latvian-Russian).
In agreement with existing literature, we were able to establish some links between aspects of the EGA and participants' self-evaluated language proficiency, which was operationalised as accentedness rating in this study. For those in the HK group, their EGA scores in theme 2, 'pride in, familiarity with, and feelings of comfort with the group', and theme 3, 'perception of the place of the ethnic group in relation to other groups', correlated significantly and negatively with self-reported accent rating in L2 Mandarin. This concurs with our prediction that the stronger one feels affiliated to an HK identity, the more accented they deem themselves to be in Mandarin. Despite Mandarin being the national language since sovereignty was returned to China, some of our participants might not have learnt it formally, since it was not a compulsory subject in HK before the handover (e.g. for the oldest group). Those who did learn it would most likely have learnt it as a foreign language, with limited classroom input of a few hours per week at most, for a few years in late primary or early secondary education. This low level of exposure, compounded by the potentially antagonistic view that some participants might hold against the 'imposed' national language, which had not been part of their identity until sovereignty was returned, could be the reason why we found a negative correlation between aspects of EGA and their L2 Mandarin self-rating. It may also be relevant to acknowledge that our data were collected during a period marked by socio-political instability and turmoil, coinciding with massive protests for more local autonomy, when anti-mainland sentiment was arguably rife (e.g. Lowe & Tsang, 2018). Therefore, one could postulate that the negative correlation observed is attributable to a depreciated evaluation of participants' Mandarin ability as a defence mechanism for preserving/bolstering their Hong Kong identity (see also Hansen-Edwards, 2020 for a recent examination of the construction of linguistic identities in this time of significant political tension).

The only other significant correlation observed happens to be in an unexpected direction. Namely, for the Chinese group in this study, the more secure participants feel about their ethnic Chinese group, the less accented they consider themselves to be in L2 English. According to existing literature, the more secure one feels about their identity, the more likely they are to consider themselves inferior in 'other' languages, in this case English, the language of the past coloniser. As discussed above, this could signal that the group identity is so secure that participants do not feel threatened by the language of 'the other'. Alternatively, one could speculate that, given the importance of English as an ASEAN and global lingua franca (see Kirkpatrick, 2020), English has become part and parcel of a cosmopolitan Chinese identity, where proficiency in English is seen as an important asset.
The lack of relationship observed in many other instances for both groups in the direction predicted is also worth commenting on. Given previous literature on the importance of English in HK, and arguably to HK identity, the lack of relationship observed between English and EGA scores among the HK group is perhaps surprising (cf. prediction (c)). It is possible that this finding suggests that English ability is regarded as independent of one's ethnic identification, hinting at the possibility that English is learnt for reasons other than integrative ones, which are normally associated with the desire to integrate into or identify with the target culture/population. On the other hand, the lack of relationship between EGA scores and self-perceived accentedness in L2 Mandarin among the Chinese group is equally puzzling (cf. prediction (a)). It could be the case that Mandarin, like English, is seen as a second or foreign language that has functional values, e.g. for enhancing career prospects, rather than integrative ones, e.g. signalling identity. In fact, we find support for this line of reasoning in a large-scale study of over 500 tertiary students in HK by Humphreys and Spratt (2008). Their study reported a strong instrumental orientation to the learning of both English and Mandarin (Putonghua) in the context of Hong Kong when compared to other foreign languages on offer, such as French, German and Japanese.

As explained in our study design section, we were unable to obtain standardised language test scores from some of our participants, which might have impacted our results. Future studies should try to gather such information, or actual linguistic performance data, as far as practically possible to verify our findings. In addition, it will be useful to conduct follow-up studies over time, perhaps of a larger scale with more participants. Since identity, and by extension EGA, is a dynamic and fluid construct that interacts with the socio-political-economic context (cf. The Douglas Fir Group, 2016, 2019), our findings are likely a manifestation of the specific circumstances around which the data were collected. As the socio-political milieu evolves, how participants in the research context identify themselves may also reasonably evolve.5 Hence, it will be useful to track such changes and investigate whether the relationship we reported between EGA and language proficiency will change over time as well. We very much hope that our work, as one of the earlier studies to look into a multiglossic context in Asia, has helped further knowledge in the study of EGA. Future studies could continue to examine settings where more than two languages are at work or in competition, to enhance our understanding of the relationship between EGA and S/FLA. We conclude by calling for other researchers to join us in trying to tackle the challenge of bringing together the social and cognitive dimensions in their research, so as to further augment our understanding of the field of SLA.

Notes

1. It is noteworthy that studies in related disciplines such as sociolinguistics have also discussed the relevance of 'the social' in language use. For example, whether one identifies with their interlocutor might affect how they diverge from or converge with their interlocutor's speech, as stipulated in communication accommodation theory (Giles & Ogay, 2007).
There is also evidence from sociophonetics that speech perception can be affected by as small an artefact as a stuffed animal that represents a specific country (e.g. stuffed toy kangaroos and koalas, which are associated with Australia, or stuffed toy kiwis, which are associated with New Zealand) (Hay & Drager, 2010).

2. The project received ethical approval from the first author's institution. Participants were given a brief participant information sheet that explained the purpose of the project. Their consent was sought prior to the commencement of the project. They were also given the opportunity to withdraw at any time; no one withdrew.

3. Other self-nominated categories were Han dynasty person and Asian.

4. The group aggregations were further confirmed by 10 randomly selected participants, who considered them to be legitimate.

5. In fact, the relevance of the socio-political milieu and the temporal element is so salient that one of our participants explicitly commented on it in the larger study about identity (Leung & Lee, 2016) without being prompted.

Table 1. Means and correlations between the 5 EGA themes and self-ratings in languages for the HK group.

Table 2. Means and correlations between the 5 EGA themes and self-ratings in languages for the Chinese group.
2023-12-04T17:30:36.112Z
2023-11-27T00:00:00.000
{ "year": 2024, "sha1": "4a43799a933b71923496317ca3fe61d18f1d1f1b", "oa_license": "CCBY", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/07908318.2023.2285797?needAccess=true", "oa_status": "HYBRID", "pdf_src": "TaylorAndFrancis", "pdf_hash": "1c6c423ce8f5dc7ea5f852aa64ef5342ba190340", "s2fieldsofstudy": [ "Linguistics" ], "extfieldsofstudy": [] }
13058113
pes2o/s2orc
v3-fos-license
Giant obscurins regulate the PI3K cascade in breast epithelial cells via direct binding to the PI3K/p85 regulatory subunit

Obscurins are a family of giant cytoskeletal proteins, originally identified in striated muscles where they have structural and regulatory roles. We recently showed that obscurins are abundantly expressed in normal breast epithelial cells, where they play tumor- and metastasis-suppressing roles, but are nearly lost from advanced stage breast cancer biopsies. Consistent with this, loss of giant obscurins from breast epithelial cells results in enhanced survival and growth, epithelial to mesenchymal transition (EMT), and increased cell migration and invasion in vitro and in vivo. In the current study, we demonstrate that loss of giant obscurins from breast epithelial cells is associated with significantly increased phosphorylation and subsequent activation of the PI3K signaling cascade, including activation of AKT, a key regulator of tumorigenesis and metastasis. Pharmacological and molecular inhibition of the PI3K pathway in obscurin-depleted breast epithelial cells results in reversal of EMT, (re)formation of cell-cell junctions, diminished mammosphere formation, and decreased cell migration and invasion. Co-immunoprecipitation, pull-down, and surface plasmon resonance assays revealed that obscurins are in a complex with the PI3K/p85 regulatory subunit, and that their association is direct and mediated by the obscurin PH domain and the PI3K/p85 SH3 domain with a KD of ~50 nM. We therefore postulate that giant obscurins act upstream of the PI3K cascade in normal breast epithelial cells, regulating its activation through binding to the PI3K/p85 regulatory subunit.

INTRODUCTION

Obscurins are a family of giant, cytoskeletal proteins originally identified in striated muscles, where they play important roles in their structural organization and contractile activity [1,2]. In humans, the OBSCN gene, encoding obscurins, spans 150 kb on chromosome 1q42 and gives rise to several isoforms through alternative splicing [3,4]. The prototypical obscurin, obscurin-A, is ~720 kDa and contains multiple signaling and adhesion domains arranged in tandem. The NH2-terminus and middle portion of obscurin-A contain repetitive immunoglobulin (Ig) and fibronectin-III (Fn-III) domains, while the COOH-terminus consists of several signaling domains, including an IQ motif, a src homology 3 (SH3) domain, a Rho-guanine nucleotide exchange factor (Rho-GEF), and a Pleckstrin Homology (PH) domain, followed by a ~400 amino acid long segment that contains ankyrin binding sites [5,6]. The OBSCN gene gives rise to another large isoform, obscurin-B or giant Myosin Light Chain Kinase (MLCK), which has a molecular mass of ~870 kDa. Two active serine/threonine kinase domains that belong to the MLCK subfamily are present in the extreme COOH-terminus of obscurin-B, replacing the ~400 amino acid long COOH-terminus of obscurin-A [4,7]. The two serine/threonine kinases may also be expressed independently as smaller isoforms, containing one (~55 kDa) or both (~145 kDa) domains.
Recent work from our laboratory has demonstrated that giant obscurins are abundantly expressed in normal breast epithelium, where they primarily localize at cell-cell junctions [8]. Their expression levels and subcellular localization, however, are altered in advanced stage human breast cancer biopsies [9]. Specifically, breast cancer biopsies of grade 2 or higher exhibit dramatically reduced levels of giant obscurins, while residual proteins concentrate in large cytoplasmic puncta [9]. Obscurin-depleted non-tumorigenic breast epithelial MCF10A cells exhibit a growth advantage under anchorage-independent conditions, form mammospheres enriched with markers of stemness, extend microtentacles, and undergo epithelial to mesenchymal transition (EMT), resulting in disruption of adherens junctions and enhanced motility and invasion in vitro [9,10]. Consistent with these major alterations, depletion of giant obscurins from MCF10A cells expressing an active form of the K-Ras oncogene results in primary and metastatic tumor formation in subcutaneous and lung metastasis in vivo models, respectively [9]. Taken together, these findings indicate that giant obscurins act as tumor and metastasis suppressors in normal breast epithelium. Conversely, their loss potentiates tumorigenicity and induces metastasis.

In the present study, we sought to mechanistically understand how loss of giant obscurins leads to the aforementioned phenotypic and functional manifestations in breast epithelial cells. We found that down-regulation of giant obscurins in MCF10A breast epithelial cells leads to dramatic up-regulation of the Phosphoinositide-3 kinase (PI3K) signaling cascade. Notably, the PI3K pathway is altered in > 30% of invasive breast carcinoma cases (http://www.mycancergenome.org/content/disease/breastcancer/; Targeting PI3K in breast cancer). Our data reveal that pharmacological or molecular inhibition of the PI3K pathway results in reversal of EMT and suppression of the growth, motility, and invasion capabilities of obscurin-depleted MCF10A cells. Thus, loss of giant obscurins from breast epithelial cells induces a tumorigenic and metastatic phenotype, at least in part, via up-regulation of the PI3K pathway. This is corroborated by our biochemical studies demonstrating for the first time that, in normal breast epithelial cells, giant obscurins and PI3K interact directly at the level of the cell membrane. Collectively, our findings indicate that giant obscurins act upstream of the PI3K pathway in breast epithelial cells, contributing to its regulation.

Downregulation of giant obscurins in normal breast epithelial cells results in upregulation of the PI3K pathway

We previously generated stable MCF10A obscurin-knockdown cell lines using shRNAs targeting sequences within the common NH2-terminus and middle portion of giant obscurins A and B [8,9]. Obscurin-knockdown MCF10A cells undergo major cytoskeletal remodeling, leading to increased tumorigenicity, motility and invasion both in vitro and in vivo [8,9]. However, the molecular alterations accompanying obscurins' loss from breast epithelial cells have yet to be delineated.
Mounting evidence suggests a pivotal role for the PI3K signaling cascade in regulating multiple processes during breast cancer formation and metastasis, including cell growth, migration, invasion and distant colonization [11]. We therefore interrogated the expression levels and phosphorylation state of major components of the PI3K pathway in MCF10A obscurin-knockdown cells. Immunoblotting analysis revealed a significant increase in the levels of the phosphorylated forms of major components of the PI3K pathway in MCF10A obscurin-knockdown cells compared to controls (Figure 1A). In particular, we detected a considerable increase in the amounts of phosphorylated PI3K at tyrosine-458, a phospho-site that has been reported to track with the activation levels of the enzyme [12]; PDK1, a downstream target of PI3K, at serine-241, which renders the enzyme catalytically active [13,14]; AKT, a direct target of PDK1, at threonine-308 and serine-473, indicating its maximal activation [15,16]; and GSK3β, a downstream target of AKT, at serine-9, leading to its inactivation, which promotes cell cycle progression through stabilization of cyclin D1 [17]. Thus, depletion of giant obscurins from breast epithelial cells leads to increased phosphorylation, and thus aberrant activation, of the PI3K signaling cascade.

Inhibition of PI3K signaling in obscurin-knockdown MCF10A cells reverses epithelial to mesenchymal transition

We have previously demonstrated that depletion of giant obscurins from MCF10A cells results in epithelial to mesenchymal transition (EMT) [9]. To examine whether activation of the PI3K pathway in obscurin-knockdown MCF10A cells underlies EMT, we used two well-known chemical inhibitors of PI3K, LY294002 and BKM120. Both LY294002 and BKM120 inhibit the catalytic subunit p110 of PI3K through direct binding and competition at its ATP-binding site.

To verify the effectiveness of the two inhibitors in suppressing the PI3K pathway, we first examined whether the phosphorylation levels of AKT were reduced following treatment. Indeed, treatment of obscurin-knockdown cells with varying concentrations of the LY294002 (0-25 μM) and BKM120 (0-1 µM) inhibitors resulted in a dose-dependent decrease of the phosphorylation levels of AKT at both serine-473 and threonine-308 (Figure 1B), indicating that both inhibitors can effectively suppress activation of the PI3K cascade. We then evaluated whether inhibition of PI3K reverses EMT in obscurin-knockdown cells. Examination of the expression levels of major epithelial and mesenchymal proteins by immunoblotting revealed a dose-dependent decrease in the amounts of the mesenchymal transcription factors Slug and Twist and their downstream target N-cadherin, and a concomitant increase in the amounts of the epithelial proteins E-cadherin and β-catenin (Figure 1B).
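The dose-dependence reported here can be summarized quantitatively with a standard four-parameter dose-response fit. The sketch below is illustrative only and is not part of the study's analysis; all densitometry values are fabricated for demonstration.

```python
# Illustrative sketch (not from the paper): fitting a four-parameter
# dose-response (Hill) curve to normalized phospho-AKT densitometry
# readings versus inhibitor concentration. All numbers are made up.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ic50, slope):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** slope)

# Zero concentration is omitted to avoid division issues at conc = 0.
conc = np.array([0.1, 1.0, 5.0, 10.0, 25.0])        # LY294002, uM
signal = np.array([1.00, 0.85, 0.45, 0.25, 0.10])   # normalized p-AKT

(bottom, top, ic50, slope), _ = curve_fit(
    hill, conc, signal, p0=[0.0, 1.0, 5.0, 1.0], maxfev=5000)
print(f"Estimated IC50 = {ic50:.1f} uM")
```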
To further demonstrate that treatment of obscurin-knockdown MCF10A cells with either the LY294002 or the BKM120 inhibitor reverses EMT specifically through suppression of the PI3K pathway, we transiently down-regulated the expression of members of the AKT family using siRNA technology, and examined the expression levels of the same battery of mesenchymal and epithelial proteins 48 and 72 hours post-transfection. While down-regulation of AKT1 failed to exert any effect at either time point (data not shown), down-regulation of AKT2, which promotes breast tumor growth and metastasis [18], resulted in decreased amounts of Slug, Twist and N-cadherin, and increased amounts of E-cadherin and β-catenin, with a more pronounced effect at 72 hours (Figure 1C).

In agreement with these molecular alterations, evaluation of MCF10A obscurin-knockdown cells treated with either 25 µM of LY294002 or 1 µM of BKM120 using bright-field microscopy demonstrated that they (re)acquired an epithelial appearance and were able to form cell-cell junctions (Figure 2A-2A"). More importantly, examination of the subcellular distribution of the epithelial markers β-catenin (Figure 2B-2B") and E-cadherin (Figure 2C-2C") under confocal optics confirmed the increased expression of both proteins and revealed their accumulation at cell-cell contact sites, where they contribute to the formation and stabilization of adherens junctions.

Taken together, our findings indicate that blockade of the PI3K pathway in obscurin-depleted breast epithelial cells via pharmacological treatment or molecular means effectively reverses EMT.

Suppression of the PI3K pathway diminishes the growth, motility and invasion potential of obscurin-depleted breast epithelial cells

The ability of epithelial cancer cells to undergo EMT provides them with a growth advantage and increased motility and invasion capabilities [19]. Consistent with this, our previous studies demonstrated that obscurin-knockdown MCF10A cells form robust primary and secondary mammospheres (≥ 100 µm) enriched in stem cell markers under low attachment conditions, and display markedly increased tumorigenicity, motility and invasiveness in vitro and in vivo [9]. We therefore sought to examine whether the enhanced tumorigenic, motile and invasive properties of the obscurin-knockdown MCF10A cells are due to up-regulation of the PI3K pathway. To address these questions, we treated stable clones of obscurin-knockdown MCF10A cells with different concentrations of the LY294002 (0-25 µM) and BKM120 (0-1 µM) PI3K inhibitors and examined their ability to survive and grow under low attachment conditions (Figure 3), migrate collectively and as single cells (Figure 4), and invade through artificial extracellular matrix (Figure 5).
Treatment with varying concentrations of LY294002 (0-25 μM) or BKM120 (0-1 µM) resulted in a marked decrease in the number and size of mammospheres formed by the obscurin-knockdown MCF10A cells (Figure 3A). Specifically, treatment with 5 µM and 10 µM of LY294002 resulted in 36% and 58% reduction in mammosphere formation, respectively, while treatment with 25 µM of LY294002 nearly abolished mammosphere formation. Similarly, treatment with 0.1 µM and 0.5 µM of BKM120 resulted in 18% and 89% reduction in mammosphere formation, respectively, while treatment with 1 µM BKM120 completely eliminated mammosphere formation. It is noteworthy that treatment of adherent MCF10A obscurin-knockdown or scramble control cells with either inhibitor did not affect their survival and growth, except in the case of 25 µM LY294002, where we observed a slight, yet significant, decrease (~5%) in cell viability (Figure 3B-3B'). Thus, these results pinpoint the importance of active PI3K signaling in promoting survival and growth of obscurin-depleted breast epithelial cells under unfavorable (i.e. anchorage-independent) conditions.

We next examined whether inhibition of the PI3K pathway in obscurin-knockdown MCF10A cells decreases their migratory ability. We therefore treated MCF10A obscurin-knockdown cells plated on collagen with different concentrations of the LY294002 (0-25 µM) and BKM120 (0-1 µM) inhibitors and examined their migratory potential in wound healing assays (Supplementary Figure S1). Inhibition of the PI3K pathway with either inhibitor resulted in a dramatic, dose-dependent decrease of the migratory capability of the obscurin-knockdown cell monolayer over a 6-hour period. While obscurin-knockdown cells treated with DMSO vehicle exhibited ≥ 20% wound healing after 6 h, cells treated with 25 µM of LY294002 or 1 µM of BKM120 exhibited ~6% wound healing (Figure 4A).

We also evaluated the ability of single obscurin-knockdown MCF10A cells treated with either 25 µM of LY294002 or 1 µM of BKM120 to migrate through collagen type I-coated microchannels of constant height (i.e. 10 µm) and varying widths (i.e. 6, 10, 20 and 50 µm) using a microfluidic-based migration chamber combined with live-cell phase-contrast imaging.
Obscurin-knockdown cells treated with either inhibitor exhibited significantly reduced migration in both narrow (i.e. 6 and 10 µm) and wide (i.e. 20 and 50 µm) microchannels, as evidenced by their decreased migration speed compared to DMSO-treated cells (Figure 4B, Supplementary Videos S1 and S2, and Supplementary Figure S2). Moreover, cells treated with either inhibitor, relative to vehicle control, exhibited a lower chemotactic index (CI), a measure of cell migration persistence calculated as the ratio of net cell displacement to the total distance traveled by the cell (Figure 4C). This decrease reached statistical significance in narrow (6 µm) channels for both inhibitors. Taken together, these data indicate that PI3K inhibition impairs single cell migration of obscurin-knockdown MCF10A cells primarily by affecting migration speed, with a secondary effect of decreasing cell directionality (particularly in highly confining, 6 µm, microchannels).

Lastly, we examined whether blockade of the PI3K cascade in obscurin-knockdown cells affected their invasive capability through a Matrigel-coated chamber (Figure 5). Similar to the wound healing and microchannel assays, treatment of obscurin-knockdown cells with varying concentrations of the LY294002 (0-25 µM; Figure 5A) and the BKM120 (0-1 µM; Figure 5B) inhibitors resulted in a significant, dose-dependent reduction in cell invasiveness, ranging from 15-60% (Figure 5A) and 50-95% (Figure 5B). Therefore, the increased activation of the PI3K pathway in obscurin-depleted breast epithelial cells is critical for their increased invasiveness through extracellular matrix and basement membranes.

Giant obscurins interact directly with the p85 regulatory subunit of PI3K

To decipher how depletion of giant obscurins from breast epithelial cells leads to up-regulation of the PI3K cascade, we examined whether giant obscurins and PI3K are direct binding partners. As a first step, we generated protein lysates from parental MCF10A cells, which readily express obscurins and PI3K, and performed co-immunoprecipitation assays using either antibodies to the extreme NH2-terminal immunoglobulin domain 1 (Ig1) of obscurins or control mouse IgG. The immunoprecipitate fractions were then tested for the presence of different components of the PI3K pathway via immunoblotting analysis. We found that the p85 regulatory component of PI3K was specifically and consistently present in the obscurin, but not the control, immunoprecipitate fraction (Figure 6A). We also examined the presence of additional components of the PI3K cascade in the obscurin immunoprecipitate fraction, including PDK1 and AKT, but failed to detect them in our system (data not shown), which indicates the transient nature of their association with PI3K and their lack of association with obscurins.
Giant obscurins contain a tandem array of signaling motifs in their extreme COOH-terminus, including a Pleckstrin Homology (PH) domain. It has been well documented that PH domains are binding sites for lipids and/or other protein modules, and in particular for Src Homology 3 (SH3) domains [20,21]. Importantly, the extreme NH2-terminus of the regulatory p85 subunit of PI3K contains an SH3 domain [22]. We therefore examined whether giant obscurins interact with the PI3K/p85 subunit via direct binding of their PH and SH3 domains, respectively. To test this, we generated recombinant obscurin PH domain fused to GST and PI3K/p85 SH3 domain tagged with the HIS moiety (Figure 6B), and performed a GST pull-down assay. GST-obscurin-PH, but not control GST protein, was able to efficiently and specifically retain HIS-PI3K/p85-SH3 (Figure 6C). Moreover, to determine the strength of the interaction between the obscurin-PH and PI3K/p85-SH3 domains, we performed real-time kinetics analysis using a Biacore 3000 surface plasmon resonance biosensor (Figure 6D). Quantification of the obtained sensorgram data using BIAevaluation 3.1 software, followed by fitting with the 1:1 Langmuir model, yielded a binding affinity, KD, of ~50 nM, which is indicative of strong, yet dynamic, binding between the obscurin-PH and PI3K/p85-SH3 domains. Collectively, these findings indicate that giant obscurins and the p85 regulatory subunit of PI3K interact directly at the level of the plasma membrane, where they both reside, and that their direct binding is mediated by their respective PH and SH3 domains.
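The kinetic fit itself was performed in BIAevaluation with a 1:1 Langmuir model. As an illustration of the underlying math only, the sketch below fits the simpler steady-state Langmuir isotherm, Req = Rmax·C/(KD + C), to hypothetical response values; the numbers are fabricated and chosen merely to land near the reported ~50 nM KD.

```python
# Illustrative sketch (not BIAevaluation): estimating KD from steady-state
# SPR responses with a 1:1 Langmuir binding isotherm.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(conc, rmax, kd):
    """Steady-state 1:1 binding isotherm: Req = Rmax * C / (KD + C)."""
    return rmax * conc / (kd + conc)

conc = np.array([8, 16, 31, 62, 125, 250], dtype=float)   # analyte, nM
resp = np.array([35, 60, 95, 135, 175, 205], dtype=float) # response (RU)

(rmax, kd), _ = curve_fit(langmuir, conc, resp, p0=[250.0, 50.0])
print(f"Rmax = {rmax:.1f} RU, KD = {kd:.1f} nM")
```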
DISCUSSION

Deregulation of signaling pathways is a hallmark of cancer cells, and is intimately associated with abnormal growth under unfavorable conditions, and increased motility, invasiveness and colonization. We have recently demonstrated that the giant cytoskeletal proteins obscurins are abundantly expressed in normal breast epithelium, where they preferentially concentrate at the plasma membrane [8,9]. However, they are nearly lost from advanced grade (grade 2 or higher) human breast cancer biopsies [9]. Loss of giant obscurins from breast epithelial cells renders them highly tumorigenic and metastatic via induction of EMT, which is associated with major cytoskeletal alterations [9]. We herein show that depletion of giant obscurins from MCF10A breast epithelial cells results in upregulation of the PI3K signaling pathway, as evidenced by the increased phosphorylation levels of major components of the cascade, including PI3K, PDK1, AKT and GSK3β. Inhibition of the PI3K pathway in obscurin-depleted MCF10A cells via pharmacological or molecular means reverses EMT, leading to substantially decreased growth, motility and invasiveness. Given the direct and strong interaction between obscurins and the PI3K/p85 regulatory subunit that our binding studies demonstrated, we postulate that loss of obscurins leads to conformational and/or molecular alterations of the PI3K/p85 subunit that render it unable to modulate the enzymatic activity of the PI3K/p110 catalytic subunit, thereby leading to its over-activation. Aberrant activation of the PI3K cascade and its downstream target AKT affects diverse (patho)physiological processes, including differentiation, growth, stemness, EMT, motility and invasiveness, which are intimately associated with increased tumorigenesis and metastasis [23].

The AKT family is composed of three isoforms, AKT1, AKT2 and AKT3, which are structurally homologous but exhibit distinct functional properties [18]. Earlier studies have reported that overexpression of AKT1 inhibits breast epithelial cell migration and invasion via suppression of the ERK signaling pathway [24]. Conversely, silencing of AKT1 in non-tumorigenic MCF10A breast epithelial cells results in increased migration via induction of insulin-like growth factor (IGF) mediated cascades [25]. On the contrary, the levels of AKT3 mRNA, protein, and enzymatic activity are significantly increased in estrogen receptor negative (ER-) and triple negative (ER-, Progesterone Receptor negative, Her2/neu negative; ER- PR- Her2/neu-) breast cancers, promoting cell proliferation and tumor growth [26]. Similarly, amplification or overexpression of AKT2 has been identified in HER2/neu positive (Her2/neu+) breast cancers as well as ovarian, prostate and pancreatic cancers, correlating with poor prognosis and increased risk of relapse and metastatic tumor formation [27-30]. In agreement with these findings, overexpression of enzymatically inactive AKT2 in breast and ovarian cancer cells diminishes their motility and invasion capabilities in vitro and their ability to form tumors in vivo, whereas overexpression of full-length AKT2 results in increased survival, migration and invasion in vitro, and formation of multiple adherent and non-adherent metastatic nodules in vivo [31]. In line with these observations, our findings demonstrate that down-regulation of AKT2, but not AKT1, in obscurin-knockdown MCF10A cells results in diminished growth, migration and invasion.

A key mechanism underlying increased tumorigenicity and metastasis in cancer epithelial cells is the acquisition of a mesenchymal phenotype via EMT [19]. A prominent alteration that occurs during EMT is the concomitant loss of E-cadherin from adherens junctions and the increased expression of N-cadherin. This E-cadherin/N-cadherin switch is regulated by a number of transcription factors, including Slug, Snail and Twist; while all three repress the transcription of E-cadherin, Twist also induces the transcription of N-cadherin [32]. Earlier evidence has suggested that Slug, Snail and Twist may be direct targets of the PI3K/AKT cascade. In support of this notion, activation of the PI3K/AKT pathway in HER2-overexpressing MDA-MB-435 cancer cells leads to increased expression of Slug [33]. Similarly, activation of the PI3K/AKT cascade in melanoma, squamous carcinoma and breast cancer cell lines results in increased transcriptional activity of Twist and Snail [34-36]. We therefore postulate that the increased levels of Slug and Twist, along with the E-cadherin/N-cadherin switch that takes place in the obscurin-knockdown cells during EMT, are a direct result of the upregulation of the PI3K/AKT cascade.
Experimental evidence has highlighted the ability of cancer cells to evade anoikis and exhibit increased proliferation under non-adherent conditions, two properties that are associated with increased tumorigenicity [37]. Accordingly, enhanced activation of the PI3K/AKT pathway in breast cancer cells has been correlated with increased mammosphere and tumor formation in vitro and in vivo, respectively, via phosphorylation and activation of the master transcription factor NF-κB, which in turn regulates the expression of several genes involved in cell cycle progression and apoptosis [38,39]. Transcriptional activation of NF-κB via the PI3K/AKT pathway also leads to up-regulation and increased production of matrix metalloproteinases (MMPs), which degrade the extracellular matrix and promote cell invasion [16,40-42]. Moreover, activation of the PI3K/AKT cascade promotes cell motility by modulating the activity of major cytoskeletal proteins, such as girdin and filamin-A. Both girdin, an actin-binding protein that stabilizes actin filaments at the leading edge of migrating cells, and filamin-A, an actin cross-linker protein, are direct targets of AKT and play key roles in regulating cell migration [43-46]. Thus, the increased growth, motility and invasiveness that the obscurin-knockdown cells exhibit in vitro and in vivo may be direct manifestations of the activated PI3K/AKT cascade and its downstream targets.

It has been well documented that the PI3K/p85 regulatory subunit modulates the activity of the catalytic PI3K/p110 subunit [22]. Herein, we demonstrate for the first time a direct and strong interaction between giant obscurins and the PI3K/p85 regulatory subunit, mediated by the PH domain of obscurins and the SH3 domain of PI3K/p85. Given the increased activation of the PI3K/AKT cascade in obscurin-depleted cells, we postulate that the direct binding of obscurins to PI3K/p85 is essential in regulating the ability of the latter to modulate the enzymatic activity of the PI3K/p110 catalytic subunit. Obscurins may mediate such an effect by topologically stabilizing the PI3K/p85 subunit in a conformation that precludes the constitutive activation of the PI3K/p110 subunit. Alternatively, the presence of a RhoGEF motif and two Ser/Thr kinase domains in obscurins, which are in tandem with and proximal to the PH domain, respectively, may suggest important intra- or intermolecular modifications, including the involvement of Rho-facilitated effects or novel phosphorylation events.
Taken together, our results demonstrate that in normal breast epithelial cells giant obscurins act upstream of the PI3K/AKT pathway, contributing to its regulation via their direct association with the PI3K/p85 regulatory subunit. Conversely, loss of giant obscurins from breast cancer cells may lead to conformational and/or molecular alterations in the PI3K/p85 regulatory subunit, rendering it unable to regulate the enzymatic activity of the PI3K/p110 catalytic subunit. Thus, over-activation of the PI3K/AKT pathway in obscurin-depleted breast cancer cells may, at least in part, be responsible for their increased growth, motility and invasiveness, given the marked suppression of these properties following inhibition of the PI3K cascade. These findings are critical for the development of individualized chemotherapies, since obscurin-deficient breast cancer patients may substantially benefit from a targeted therapy in the form of a PI3K inhibitor rather than a generalized chemotherapy, such as the taxanes. In line with this, our earlier studies have shown that obscurin-knockdown MCF10A cells display significantly increased survival and (re)attachment capabilities in the presence of paclitaxel compared to control cells expressing scramble shRNA [10]. Thus, the single or combinatorial use of PI3K inhibitors, such as the BKM120 used in our study, which is currently in clinical trials for several types of cancer including breast cancer, would potentially be a more appropriate and effective chemotherapy for treating obscurin-deficient breast tumors.

Reagents

Unless otherwise noted, all chemicals were purchased from Sigma-Aldrich (St. Louis, MO, USA). LY294002 was purchased from Cell Signaling Technology Inc (Danvers, MA, USA). BKM120 was purchased from Selleck Chemicals (Houston, TX, USA).

Stable clones of MCF10A cells and culturing

MCF10A stable clones expressing obscurin shRNA or control shRNA plasmids were generated and maintained as described in [8].

Generation of protein lysates and Western blotting

Cell lysates were prepared in radioimmunoprecipitation assay (RIPA) buffer supplemented with cocktails of protease inhibitors (Roche, Mannheim, Germany) and phosphatase inhibitors (200 nM Imidazole, 100 mM Sodium Fluoride, 115 mM Sodium Molybdate, 100 mM Sodium Orthovanadate, 400 mM Sodium Tartrate Dihydrate, 100 mM β-Glycerophosphate, 100 mM Sodium Pyrophosphate, and 10 mM EGTA). Protein lysates were electrophoresed on SDS-NuPAGE gels (Thermo Fisher Scientific, Waltham, MA), transferred to nitrocellulose membranes, and probed with primary antibodies as specified in the text and the appropriate alkaline phosphatase-conjugated secondary antibodies (Jackson ImmunoResearch Laboratories). Immunoreactive bands were visualized using Amersham ECL prime western blotting detection reagent (GE Healthcare Life Sciences), and densitometry was performed with ImageJ software. Treatment of cells followed by lysate preparation and immunoblotting analysis was repeated at least three independent times.
Mammosphere culture

Single MCF10A cells stably transduced with obscurin shRNA were plated in ultralow attachment plates (Corning, Lowell, MA, USA) at a density of 10,000 cells/mL in 2 mL serum-free growth media (DMEM/F12 with GlutaMAX), supplemented with insulin (10 µg/mL), hydrocortisone (0.5 µg/mL), cholera toxin (100 ng/mL), epidermal growth factor (20 ng/mL), 1% penicillin-streptomycin and puromycin (1.5 µg/mL). Twenty-four hours following initial plating, cell cultures were supplemented with 2 mL serum-free growth media containing the indicated concentration of LY294002, BKM120, or DMSO vehicle control every day for 10 days, at which time point spheres were measured and those ≥ 100 µm were counted as tumor spheres.

Cell viability and AlamarBlue assay

Cell viability of MCF10A obscurin-knockdown and scramble control cells treated with the specified concentrations of the LY294002 and BKM120 inhibitors or vehicle DMSO for 24 h was measured using the AlamarBlue assay (Life Technologies), as earlier reported [9]. In brief, the AlamarBlue reagent was added to the cell culture at 10% v/v and incubated for 16 h at 37ºC, 5% CO2. The percentage (%) of AlamarBlue reduction was determined by measuring absorbance at 550 and 620 nm. Data are presented as the percentage of AlamarBlue reduced per number of cells.

Wound healing assay

Wound healing was measured by growing confluent MCF10A cell monolayers stably expressing obscurin shRNA in six-well tissue culture dishes (Corning). A scrape was made through the monolayer with a sterile plastic pipette tip, and fresh media containing LY294002, BKM120, or DMSO vehicle control was added at the indicated concentrations. Images were taken with an inverted microscope (10X objective) at time 0 h and after a 6 h incubation period at 37ºC, 5% CO2. Migration was expressed as the average of the difference between the measurements at time zero and 6 h, obtained from 3 independent experiments.

Invasion assay

Invasion was measured by adding 250,000 cells suspended in 0.5 mL growth media containing the specified concentration of LY294002, BKM120, or DMSO vehicle control to the upper chamber of a Matrigel-coated invasion chamber (BD Biosciences, San Jose, CA). The lower chamber contained growth media supplemented with 10% FBS. The inserts were incubated at 37ºC, 5% CO2 for 16 h. At the end of the 16 h incubation period, the cells that had invaded into the lower chamber were fixed and stained with 0.5% crystal violet in 20% methanol. The number of invaded cells was quantified by counting at least 6 random fields from 3 independent experiments under an inverted light microscope (Olympus IX51) with a 10X objective.

Microchannel seeding and single cell migration

The microchannel device was fabricated by standard lithography and coated with 20 µg/mL collagen type I (BD Biosciences, San Jose, CA), as previously described [40,48-51].
Cells were trypsinized, resuspended in serum-containing media to neutralize the trypsin, and subsequently washed in serum-free media. A suspension of 5 × 10^4 cells was added to the inlet port, and cells were transported along the seeding channel by pressure-driven flow. Within 5 min, the cell suspension was removed and replaced with 100 μL of serum-free media containing the specified concentrations of LY294002, BKM120, or DMSO vehicle control. Serum-containing media with the indicated concentrations of LY294002, BKM120, or DMSO was added to the top-most inlet port, thus forming a chemoattractant gradient. Chambers were placed in an enclosed, humidified microscope stage at 5% CO2 and 37ºC (TIZ, Tokai Hit Co., Japan or Okolab, Italy). Phase contrast time-lapse images were captured at 20-min intervals for up to 17 h on an inverted Nikon microscope (10X objective) at multiple stage positions via stage automation (Nikon Elements, Nikon, Japan). Cell x,y position within the microchannel was identified as the midpoint between the poles of the cell body and tracked as a function of time using ImageJ (NIH, Bethesda, MD) and the MTrackJ plugin for up to 8 h [52]. Cells in the microchannels for less than 1 h were not tracked. Tracks were discontinued if the cell left the microchannel. Dividing cells were not tracked. Cell velocity and chemotactic index were computed using a custom-written Matlab program (The MathWorks, Natick, MA). Instantaneous cell velocity was calculated by dividing each interval displacement by the time interval (20 min), and the mean velocity for a given cell was computed by averaging instantaneous velocities over all time intervals. Chemotactic index was calculated by dividing the end-to-end displacement by the total path length of the cell. The reported velocity and chemotactic index for each condition is the mean of the pooled cells from at least 3 independent experiments.
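For readers who want to reproduce the migration metrics, the sketch below re-implements the two calculations just described (mean instantaneous velocity and chemotactic index) in Python; the original analysis used a custom Matlab program, and the example track coordinates here are made up.

```python
# Illustrative re-implementation (not the authors' Matlab program):
# mean instantaneous velocity and chemotactic index from tracked (x, y)
# positions sampled at 20-min intervals.
import numpy as np

def migration_metrics(xy, dt_min=20.0):
    """xy: (T, 2) array of cell positions over time, in microns."""
    steps = np.diff(xy, axis=0)                    # per-interval displacements
    step_len = np.linalg.norm(steps, axis=1)
    mean_velocity = np.mean(step_len / dt_min)     # microns per minute
    net_disp = np.linalg.norm(xy[-1] - xy[0])      # end-to-end displacement
    chemotactic_index = net_disp / step_len.sum()  # net / total path length
    return mean_velocity, chemotactic_index

track = np.array([[0, 0], [5, 1], [9, 3], [12, 8], [14, 15]], dtype=float)
v, ci = migration_metrics(track)
print(f"velocity = {v:.3f} um/min, CI = {ci:.2f}")
```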
Co-immunoprecipitation assays

Co-immunoprecipitation experiments were performed with protein lysates prepared in RIPA buffer from parental MCF10A cells, according to [53]. In brief, 100 µL of protein A/G bead slurry (Thermo Fisher) was incubated with 5 µg of NH2-terminal obscurin antibody or mouse IgG (Jackson ImmunoResearch Laboratories Inc, West Grove, PA) at 4ºC overnight in PBS. The antibody-bound beads were then incubated with 1 mg of MCF10A protein lysate at 4ºC overnight with gentle rocking. Samples were washed 5x with PTA (PBS containing 0.5% Tween-20), solubilized in 60 μL 2x SDS-PAGE sample buffer, and heated at 70ºC for 20 min before they were separated by SDS-PAGE and transferred to nitrocellulose membranes. Blots were probed with the indicated primary antibodies and the appropriate alkaline phosphatase-conjugated secondary antibodies (Jackson ImmunoResearch Laboratories). Immunoreactive bands were visualized with the Tropix chemiluminescence detection kit (Applied Biosystems).

GST pull-down assays

GST pull-down assays were performed as described before [54,55]. Equivalent amounts of control GST protein and GST-obscurin-PH were bound to glutathione-Sepharose beads and incubated with 3 µg/mL of HIS-tagged PI3K/p85-SH3 protein overnight at 4ºC in pull-down buffer (50 mM Tris, pH 7.5, 120 mM NaCl, 10 mM NaN3, 2 mM DTT, and 0.5% Tween). At the end of the incubation period, beads were washed five times with wash buffer (PBS with 10 mM NaN3 and 0.1% Tween), and bound proteins were eluted with 2× LDS buffer (Invitrogen), followed by boiling at 95ºC for 10 min and separation on a 4-12% Bis-Tris gel. A HIS6 antibody (sc-803, Santa Cruz Biotechnology) was used for detection of immunoreactive bands with the Tropix chemiluminescence detection kit (Applied Biosystems).

Kinetic analysis of obscurin-PH binding to PI3K/p85-SH3 using surface plasmon resonance

Surface plasmon resonance was performed using a BiaCore 3000 instrument, as previously described [6,54,56]. In particular, the GST-obscurin-PH domain was used as ligand and was immobilized on a carboxymethyl-dextran sensor (CM5) chip, while the HIS-PI3K/p85-SH3 domain was used as analyte at varying concentrations ranging from 8-250 nM. The flow rate for analyte injection was 20 μl/min. For each analyte concentration, association was measured for 180 sec, and dissociation was subsequently measured.

Figure 2: Treatment of MCF10A obscurin-knockdown cells with PI3K inhibitors restores the formation of cell-cell junctions. (A-A") Representative bright-field images of obscurin-knockdown MCF10A cells treated with DMSO vehicle or PI3K inhibitors. Cells treated with 25 µM LY294002 or 1 µM BKM120 lose their mesenchymal appearance, and instead acquire an epithelial morphology and form cell-cell junctions. The expression levels and membrane distribution of the epithelial markers β-catenin (B-B") and E-cadherin (C-C") are restored in MCF10A obscurin-knockdown cells treated with either PI3K inhibitor, as determined under confocal optics.

Figure 5: Blockade of PI3K signaling in obscurin-knockdown MCF10A cells decreases their invasive capabilities through Matrigel-coated chambers. Stable clones of MCF10A cells expressing obscurin shRNA were added to a Matrigel-coated chamber in the presence of different concentrations of (A) LY294002 (0-25 µM) or (B) BKM120 (0-1 µM), and allowed to invade for 16 h. Invasive cells were visualized via staining with crystal violet dye. Treatment with either inhibitor markedly decreased the invasive capabilities of obscurin-knockdown MCF10A cells in a dose-dependent manner. Quantification shows the % invasion of inhibitor-treated relative to DMSO-treated obscurin-knockdown cells, which was arbitrarily set to 100%; n = 3, error bars = SD, *P < 0.03; t-test.
2018-04-03T00:35:10.801Z
2016-06-13T00:00:00.000
{ "year": 2016, "sha1": "0ec144fa2da17e8518676f970bfcf0a244e6a33b", "oa_license": "CCBY", "oa_url": "http://www.oncotarget.com/index.php?journal=oncotarget&op=download&page=article&path[]=31382&path[]=9985", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "0ec144fa2da17e8518676f970bfcf0a244e6a33b", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
202540412
pes2o/s2orc
v3-fos-license
Incidental Supervision from Question-Answering Signals

Human annotations are costly for many natural language processing (NLP) tasks, especially for those requiring NLP expertise. One promising solution is to use natural language to annotate natural language. However, it remains an open problem how to get supervision signals or learn representations from natural language annotations. This paper studies the case where the annotations are in the format of question-answering (QA) and proposes an effective way to learn useful representations for other tasks. We also find that the representation retrieved from question-answer meaning representation (QAMR) data can almost universally improve on a wide range of tasks, suggesting that such kind of natural language annotations indeed provide unique information on top of modern language models.

Introduction

It is often labor-intensive to have humans directly annotate data for NLP tasks which require research expertise and/or have lengthy guidelines. For instance, one needs to understand thousands of semantic frames in order to provide semantic role labelings (SRL) (Palmer et al., 2010). A promising approach to address this issue is to use natural language (NL) to annotate NL. Throughout this paper, we refer to this kind of annotations as NL annotations. Existing works along this line include natural logic for textual entailment (TE) (MacCartney and Manning, 2007), QA-SRL (He et al., 2015), QAMR (Michael et al., 2017), and zero-shot relation extraction (RE) via reading comprehension (Levy et al., 2017). When annotators are not required to understand those convoluted concepts1 defined by experts, they can focus more on the actual meaning of text, and even laymen can provide indirect annotations using NL.

1 Our code and online demo are publicly available at https://github.com/HornHehhf/ISfromQA.

Despite the lower cost of NL annotations, it raises the issue of how to use them effectively. For example, even if we know a task needs predicate-argument information, it remains unclear how to improve the task given QA-SRL, although we believe QA-SRL can provide useful information. This is in general a critical issue when using NL annotations.

The key to many NLP tasks is to learn representations from data, either as explicit discrete symbols or as latent continuous vectors. Based on NL annotations, we often cannot reliably get symbolic representations due to the ambiguity and variability of NL. For example, QA-SRL data are, in their surface form, very different from SRL. As we show later in Section 5.1, it is challenging to learn a good SRL parser purely based on QA-SRL. Furthermore, NL can flexibly express things that are not covered by a pre-defined formalism, so converting NL annotations to a fixed inventory of symbols will actually lose information. Therefore, we propose to learn latent continuous representations from NL annotations.

Note that many existing works have studied how to learn latent continuous representations for language from massive text data instead of NL annotations, mainly from the perspective of language modeling (LM) (Pennington et al., 2014; Peters et al., 2018; Devlin et al., 2019; Yang et al., 2019; Liu et al., 2019). However, Tenney et al. (2019b) show that, as compared to non-contextual LMs, contextual LMs (e.g., ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019)) could only offer marginal improvements on semantic tasks, suggesting that contextualization alone is not the solution to semantic representations.
We hence believe that representations learned purely from unlabeled text will not be enough for all types of tasks. This is consistent with one underlying philosophy of BERT: while an LM like BERT can handle syntactic variations very well, it still needs fine-tuning on some annotations to acquire the "definition" of the target task.

Table 1 (excerpt). Columns: Sentence | Ann. | Question | Answers.
(1) Mr. Agnew was vice president of the U.S. from 1969 until he resigned in 1973. | INF | What did someone resign from? | vice president of the U.S.
(2) This year, Mr. Wathen says the firm will be able to service debt and still turn a modest profit. | INF | When will something be serviced? | —

Specifically, we study how to learn latent representations from NL annotations in the format of QA, which can be viewed as retrieving incidental supervision signals from QA data. Our first contribution is that we propose a practical method to do it. We fine-tune BERT with these QA signals and obtain representations carrying the necessary information to answer these questions. We term this procedure QA-driven language modeling (QALM). Experiments show that improvements can be achieved by adding the resulting representations from QALM as a feature vector to existing neural architectures. Specifically, QALM has outperformed BERT on seven tasks by an average of 1.2 F1 score in the small data setting, and 0.2 F1 score in the full data setting.

We work on two types of NLP tasks: tasks with a single input sequence, e.g., SRL and named entity recognition (NER), and tasks with a pair of inputs, e.g., TE and machine reading comprehension (MRC). We argue that if a task is single-input, then QALM should focus more on the input itself, which we call standard QALM; if a task is paired-input, then QALM should also consider the interaction between the two inputs, which we call conditional QALM. Since QA is itself a paired-input task (i.e., a sentence and a question), if we want to do standard QALM, then we should restrict the interaction between the sentence and question, as shown in Fig. 1(a). Results show that this distinction is important for QALM to perform effectively on single-input tasks. In summary, our second contribution is that we are the first to distinguish between these two types of semantic representations, and we also propose separate models for each of them.

Our third contribution is the discovery that QALM based on QAMR data can almost universally improve on a wide range of tasks, especially when direct training data are limited. This suggests that, while AMR (Banarescu et al., 2013) is a useful symbolic representation for semantics, we can also take advantage of AMR by learning from much cheaper QA pairs dedicated to it. The appealing idea is that if we guide human annotators to propose and solve questions related to a certain semantic phenomenon, then we may obtain latent representations dedicated to that phenomenon, and then improve on relevant tasks.

The rest of this paper is organized as follows. Section 2 describes our QALM framework in detail, including the distinction between standard QALM and conditional QALM, and Section 3 shows our experiments using QALM. Section 4 focuses on analyzing the usage and extension of our framework, and Section 5 discusses the difficulties of alternative methods. Section 6 concludes our work and points out future directions.

QA-driven Language Modeling

The recent decade has seen significant progress in NLP due to the success of machine learning, but most methods heavily rely on costly annotations.
The importance of being able to use cheap signals has attracted researchers' attention. For instance, Roth (2017) proposes the concept of getting supervision signals that occur incidentally. The incidental signals can be noisy, partial, or only correlated with the target task. Dehghani et al. (2018) and Ning et al. (2019) are two recent examples among the large number of works along this line. Here our goal is to learn latent semantic representations from QA pairs,2 which is also an example of incidental supervision (Roth, 2017). Since we extensively use BERT to help us handle the syntactic and lexical variations in QA, we also call it QA-driven language modeling (QALM). However, the specific choice of LM is orthogonal to our proposal, and the same idea still applies to other LMs (e.g., RoBERTa (Liu et al., 2019)).

Two Semantic Representations

Previous semantic representations try to encode as many meaning ingredients as possible for a sentence. However, these semantic representations might not be a good choice for paired-input tasks, such as MRC and TE. MRC aims to predict the start and end positions of the answer given a paragraph and a question, and TE determines whether a hypothesis can be entailed by a premise. In these two tasks, not all the semantic information about the paragraph or hypothesis is needed. Instead, we only care about the information that is related to the question or premise. Therefore, we propose to distinguish these two types of semantic representations as standard semantic representation and conditional semantic representation. Standard semantic representation encodes all semantic information h(S) for a sentence S, while conditional semantic representation encodes the information h(S|A) for the sentence S given some attention A. In a perfect world, standard semantic representation also includes conditional semantic representation. However, we believe that there is a trade-off between the quality and the quantity of semantic information that a model can encode in practice. As shown later in Fig. 2, the quality of our standard QALM, which tries to encode as much semantic information as possible, is significantly worse than that of conditional QALM, which only cares about the semantic information based on some attention. For example, in sentence (4) of Table 1, when asked "What country is the former post office in?", our conditional QALM answers correctly: "United States," while standard QALM gives the wrong prediction: "Mississippi."

To retrieve semantic information from simple question-answer pairs for downstream tasks, we propose two different models, standard QALM and conditional QALM, for the two different types of semantics. Both models try to encode semantic information into a latent distributional representation. For standard semantic representation, our goal is to encode as many meaning ingredients as possible into our latent distributional representation. We use standard QALM, as shown in Fig. 1(a), for single-input tasks. In order to force the sentence component to encode more semantic information, the interaction layer of the architecture should be as simple as possible. For conditional semantic representation, we directly use BERT as our model to pre-train on simple question-answer pairs, because the bidirectional transformer is a good architecture for learning how to attend. We use the components in the black box in Figure 1(b) to provide semantic information for paired-input tasks.
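The distinction between h(S) and h(S|A) can be made concrete with an off-the-shelf encoder. The sketch below, which assumes the HuggingFace transformers API rather than the authors' released code, encodes a sentence once in isolation and once jointly with a question, so that in the second case the sentence tokens can attend to the question.

```python
# Conceptual sketch (assumed HuggingFace transformers API, not the
# authors' code): h(S) encodes the sentence alone, while h(S|A) lets
# the sentence tokens attend to a question via joint self-attention.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

sentence = "The former post office is in Mississippi, United States."
question = "What country is the former post office in?"

with torch.no_grad():
    # Standard: sentence encoded in isolation -> h(S)
    h_s = model(**tokenizer(sentence, return_tensors="pt")).last_hidden_state

    # Conditional: sentence and question encoded jointly -> h(S|A)
    h_sa = model(
        **tokenizer(sentence, question, return_tensors="pt")
    ).last_hidden_state

# Sentence-only vs joint sequence lengths (both include special tokens).
print(h_s.shape, h_sa.shape)
```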
Learning

Our models consist of three basic components: a semantic sentence encoder for the sentence component, a question encoder for the question component, and an interaction layer between the sentence component and the question component. We experiment with five variants of our basic model as follows: (I) Basic model: a fixed BERT and a one-layer bidirectional transformer for the semantic sentence encoder, a fixed BERT and a one-layer bidirectional transformer for the question encoder, and a two-layer multi-layer perceptron (MLP) for the interaction layer; (II) a fine-tuned BERT; (III) the same as model II, with a bi-directional attention flow added to the question component; (IV) the same as model III, with the interaction layer changed from a two-layer MLP to a bidirectional transformer; (V) the same as model IV, with the interaction layer changed from a one-layer bi-directional transformer to a two-layer one, and beam search used in the inference stage. We call the best model (i.e., model V) "standard QALM", and its architecture is shown in Figure 1(a). Tenney et al. (2019a) conclude that lower layers of BERT encode more local syntax, while higher layers capture more complex semantics. This finding is consistent with the intuition of standard QALM, because we add a sentence modeling layer on top of BERT to capture semantic information.

Application: Single-Input Tasks

Given a sentence [w1, w2, ..., wn], our standard QALM can provide a sequence of hidden vectors [h1, h2, ..., hn].3 In single-input tasks, we use standard QALM to extract extra semantic features, and concatenate them to the word embeddings of the original model at the input layer. Standard QALM can be fine-tuned when trained on specific tasks. Therefore, it can be directly applied to different tasks on top of word embeddings.

Learning

We use BERT to pre-train on QAMR and get conditional QALM, as shown in Figure 1(b), for paired-input tasks. Why do we choose to pre-train our models on QAMR rather than other MRC datasets? Because QAMR has a simpler concept class and is more general than MRC. Therefore, training on QAMR requires fewer examples, and the model pre-trained on QAMR can help more tasks.

Application: Paired-Input Tasks

Given a sentence S = [w1, w2, ..., wn] and an attention A = [q1, q2, ..., qm], our conditional QALM can provide a latent distributional representation h(S|A) = [h1, h2, ..., hn]. We add conditional QALM to the layer before the classification layer, and fine-tune it on downstream tasks.

Machine Reading Comprehension. In this task, our conditional QALM provides a conditional semantic representation for a paragraph4 P with the attention of a question Q as h(P|Q) = [h1, h2, ..., hn]. For each token in the paragraph, the hidden vector ht is concatenated to the original token embeddings before the classification layer. This is a general method for token classification tasks.

4 We simply treat the whole paragraph as a long sentence.

Textual Entailment. Similarly, in this task, our conditional QALM provides a conditional semantic representation for a hypothesis H with the attention of a premise P as h(H|P) = [h1, h2, ..., hn]. We first use max pooling and average pooling to get the pooled hidden vectors hmax and havg, and then concatenate these two vectors to the original BERT embeddings before the classification layer; a minimal sketch of this recipe follows. This is a general method for sentence classification tasks.
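As a concrete illustration of the textual entailment recipe just described, the following minimal PyTorch sketch (not the released implementation) pools the conditional QALM vectors h(H|P) with max and average pooling and concatenates them to a BERT sentence vector before a linear classification layer; all dimensions are assumptions for illustration.

```python
# Minimal PyTorch sketch (not the released code) of the paired-input
# recipe: max- and average-pool h(H|P), then concatenate the pooled
# vectors to the BERT sentence representation before classification.
import torch
import torch.nn as nn

class QALMEntailmentHead(nn.Module):
    def __init__(self, bert_dim=768, qalm_dim=768, n_labels=3):
        super().__init__()
        self.classifier = nn.Linear(bert_dim + 2 * qalm_dim, n_labels)

    def forward(self, bert_cls, qalm_hidden):
        # bert_cls: (B, bert_dim); qalm_hidden: (B, T, qalm_dim) = h(H|P)
        h_max = qalm_hidden.max(dim=1).values   # max pool over tokens
        h_avg = qalm_hidden.mean(dim=1)         # average pool over tokens
        return self.classifier(torch.cat([bert_cls, h_max, h_avg], dim=-1))

head = QALMEntailmentHead()
logits = head(torch.randn(4, 768), torch.randn(4, 30, 768))
print(logits.shape)  # torch.Size([4, 3])
```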
Standard QALM Experiments

To evaluate the effectiveness of standard QALM, we first investigate whether it can provide extra information to BERT for strong baselines, and whether it can be used to improve current state-of-the-art (SoTA) models on single-input tasks. In order to demonstrate that standard QALM embeddings contain semantic information, we evaluate our models on two different formal semantic schemes, SRL and semantic dependency parsing (SDP), and on three tasks at different levels that require semantic information: NER, RE, and co-reference resolution (Coref). We use the AllenNLP reimplementations for SRL, NER and Coref, and we implement SDP and RE ourselves.

Semantic Role Labeling. We use an existing deep neural SRL model and replace its GloVe embeddings with BERT embeddings as a strong baseline. The state-of-the-art model is the same SRL model with ELMo embeddings.

Semantic Dependency Parsing. We use the dataset from the SemEval 2015 shared task (Oepen et al., 2015) with DELPH-IN MRS-derived semantic dependencies.

Named Entity Recognition. We use the model of Peters et al. (2018) and replace GloVe with BERT embeddings as a strong baseline. The state-of-the-art model (Flair) uses contextual string embeddings and a BiLSTM-CRF sequence tagger (Akbik et al., 2018).

Relation Extraction. We use SemEval 2010 Task 8 (Hendrickx et al., 2009) as our data. The dataset contains 8000 sentences for training and 2717 for testing. We use an attention-based BiLSTM (Zhou et al., 2016) with BERT embeddings as word embeddings as a strong baseline. This model also serves as our current state-of-the-art model.

Co-reference Resolution. We use the CoNLL 2012 shared task (Pradhan et al., 2012). This dataset contains 2802 training documents, 343 development documents, and 348 test documents. The average number of words per document is 454, and the longest document has 4009 words. We use an existing model and replace GloVe with BERT embeddings as a strong baseline. The baseline uses independent LSTMs for every sentence, since cross-sentence context was not helpful in experiments. The same model with ELMo embeddings is considered the state-of-the-art model.

Result. Experimental results are shown in Table 2. The performance of the baseline models improves on all five tasks with small training data when standard QALM embeddings are concatenated to BERT embeddings. As the amount of training data increases, baseline models with standard QALM embeddings still perform at least as well as those without. This indicates that our standard QALM provides extra information to BERT and can be used to improve BERT embeddings by simple concatenation. Similarly, when standard QALM embeddings are concatenated to the embeddings in state-of-the-art models, the performance improves on all five tasks with small training data, even though the embeddings used in the original SoTA models vary (ELMo, Flair, etc.). These empirical results show that our standard QALM can help outperform current state-of-the-art models by simple concatenation.

Conditional QALM Experiments

We compare the results of BERT and BERT concatenated with conditional QALM embeddings on two core tasks, MRC and TE. Because conditional QALM is based on attention, which is universal in NLP tasks, we further apply it to two single-input tasks, NER and sentiment analysis.

Paired-Input Tasks

Machine Reading Comprehension. We use SQuAD 1.0 (Rajpurkar et al., 2016); the results are shown in Table 3.
Conditional QALM improves BERT on both paired-input and single-input tasks with small training data. As the amount of training data increases, BERT with conditional QALM still performs as well as BERT alone. Conditional QALM thus captures extra information beyond BERT and can be used to improve BERT by concatenating the two representations before the classification layer and fine-tuning them together. The effectiveness of conditional QALM on single-input tasks also verifies the adaptability of our proposal.

Comparison with Formal Semantic Representations

Comparison with SRL. He et al. (2015) show that question-answer pairs in QA-SRL often contain inferred relations, especially for why, when and where questions. These inferred relations are typically correct, but outside the scope of PropBank annotations (Kingsbury and Palmer, 2002). Conditional QALM can thus encode extra information that previous formal semantic representations do not include, thanks to the flexibility of the natural question-answer format.

Improving QALM with Existing Resources

We investigate whether adding the Large QA-SRL dataset (FitzGerald et al., 2018) and the QA-RE dataset (Levy et al., 2017) in the pre-training stage can help SRL and RE. For simplicity, we use a simple BiLSTM baseline with BERT embeddings and binary predicate features as input for SRL, and a simple CNN baseline with BERT embeddings and position features as input for RE. We use QALM embeddings to replace BERT embeddings rather than concatenating the two.

Improving SRL with the Large QA-SRL dataset. We add the Large QA-SRL dataset to QAMR for pre-training to see whether more question-answer pairs related to SRL yield a better sentence representation for SRL.

Improving SRL with the QA-RE dataset. We add the QA-RE dataset to QAMR for pre-training and test the model on SRL, to see whether more question-answer pairs related to the semantics of the sentence yield a better sentence representation in general.

Improving Relation Classification with the QA-RE dataset. We add the QA-RE dataset to QAMR for pre-training to see whether more question-answer pairs related to RE yield a better sentence representation for RE.

Discussions. The effects of adding the existing resources, Large QA-SRL and QA-RE, in pre-training for SRL and RE are shown in Table 4 and Table 5. We find that adding related question-answer pairs in the pre-training stage helps improve the corresponding tasks. Notably, QA-RE can also help SRL, although the improvement is minor compared to Large QA-SRL. This indicates that adding more question-answer pairs related to sentence semantics yields a better semantic representation in general.

Error Analysis

The results of our models on the development set of the QAMR dataset are shown in Table 6. The F1 scores of standard QALM and conditional QALM on the test set are 66.78 and 84.11, respectively. In general, the results of our standard QALM are similar to BiDAF but significantly worse than those of conditional QALM on QAMR. We conduct a thorough error analysis covering sentence length, answer length, question length, question words, and the PoS tag of the answer. We find that standard QALM is not good at dealing with long sentences compared to conditional QALM; the analysis of sentence length is shown in Figure 2. We also find that the average number of question-answer pairs is much larger when the sentence is longer, which may help explain why standard QALM does not perform as well as conditional QALM on long sentences.
We conclude that the failure of standard QALM on long sentences is mainly because there are more relations to encode, while conditional QALM only needs to encode information based on specific questions.

Using Standard QALM

Different Layers of Standard QALM. We mainly consider two types of features extracted from QALM: the last hidden layer and a weighted sum of all layers, as in ELMo (a sketch of this scalar mix appears at the end of this section). We compare these two types of features on NER. The F1 score of the NER baseline with the last hidden layer and with the weighted sum of all layers is 91.58 and 92.14, respectively. The results are consistent with those of Devlin et al. (2019). We find that the weighted sum of all layers is the better choice in general, but the last hidden layer is also a good substitute.

Fine-tuning Standard QALM. We compare different combinations of BERT and QALM on NER, as shown in Table 8. We find that fine-tuning all components does not yield the good results one might expect, while fine-tuning the modeling component is the best choice in some situations. Although fine-tuning the modeling component can be worse than leaving it fixed (for example, fine-tuning the modeling component for NER in the full-data setting achieves an F1 score of 91.68, versus 92.14 without fine-tuning), the two are at least close in general.

Why Standard QALM for Feature-Based Single-Input Tasks?

Since conditional QALM can also capture semantic information, a natural question arises: why not use conditional QALM instead of standard QALM for feature-based single-input tasks? The intuitive answer is that we need to encode as much information as possible into a sentence embedding for single-input tasks, while conditional QALM can only encode the information related to the attention. We further compare standard QALM with conditional QALM on two feature-based single-input tasks, SRL and SDP; the results are shown in Table 9. Rather than concatenating two embeddings, we replace BERT embeddings with QALM embeddings. For simplicity, we use a simple BiLSTM model for SRL and a simple BiLSTM-based biaffine model for SDP. The results indicate that standard QALM has a clear advantage on feature-based single-input tasks.

The Choice of Pre-training Data

Our standard QALM vs. standard QALM pre-trained on SQuAD. We compare our standard QALM with a standard QALM pre-trained on SQuAD on the task of NER. The results are shown in Table 10; ours outperforms in both settings.

Our conditional QALM vs. conditional QALM pre-trained on SQuAD. We compare our conditional QALM with a conditional QALM pre-trained on SQuAD on the task of uncased NER. The results are shown in Table 11. Again, the original conditional QALM yields better results.

Discussions. The results are consistent with our analysis in Section 2.3.1. Because the concept class of QAMR is simpler than that of SQuAD, we achieve better results with standard and conditional QALM using fewer training examples (51K in QAMR vs. 88K in SQuAD). QAMR is also cheaper to annotate, since SQuAD is based on paragraphs (117 words on average), while QAMR is based on sentences (24 words on average). We are aware that QAMR is not a perfect dataset because of its limited number of examples, and it is not surprising that other datasets can outperform it on some tasks. However, QAMR is a good dataset for our model to be pre-trained on, and it can help many tasks in general. As shown in Section 4.1, we also find that if the question-answer pairs are more related to a task (for example, Large QA-SRL is more related to SRL than QA-RE), then these QA pairs can better improve that task.
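For reference, the "weighted sum of all layers" feature discussed above can be implemented as an ELMo-style scalar mix: a softmax-normalized weight per layer plus a global scale, both learned jointly with the downstream task. The following PyTorch sketch is a generic version of that idea, not the authors' exact code.

```python
# A minimal ELMo-style scalar mix over the hidden states of all layers.
import torch
import torch.nn as nn

class ScalarMix(nn.Module):
    def __init__(self, n_layers: int):
        super().__init__()
        self.weights = nn.Parameter(torch.zeros(n_layers))  # one per layer
        self.gamma = nn.Parameter(torch.ones(1))            # global scale

    def forward(self, layer_states):
        # layer_states: list of (batch, n_tokens, dim), one per layer
        w = torch.softmax(self.weights, dim=0)
        mixed = sum(w_i * h_i for w_i, h_i in zip(w, layer_states))
        return self.gamma * mixed
```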
Difficulties of Alternative Methods

We propose to retrieve latent distributional representations of sentence meaning from question-answer pairs and apply them to downstream tasks. Could we use other semantic representations instead? In this section, we discuss the difficulties of two other possible methods and conclude that neither is as tractable as ours.

Learning Formal Semantic Schemes from Question-Answer Pairs

We consider learning an SRL parser from QA-SRL. This reduces the problem of learning formal semantic schemes from QA pairs to a simplified case.

Challenges. There are four main challenges in learning an SRL parser from Large QA-SRL (the statistics below on partial issues and irrelevant question-answer pairs are based on the PTB set of QA-SRL):

• Partial issues. We only get partial supervision: only 78% of the arguments overlap with answers; 47% of the arguments are exact matches; 65% of the arguments have Intersection/Union ≥ 0.5.

• Irrelevant question-answer pairs. Supporting statistics: 89% of the answers are "covered" by SRL arguments; 54% of the answers are exact matches with arguments; 73% of the answers have Intersection/Union ≥ 0.5. These statistics show that we also get some irrelevant signals: some of the answers are not really arguments (for the corresponding predicate).

• Different guidelines. Even when an argument and an answer overlap, the overlap is often only partial.

• Cross-domain issues. We need to evaluate the trained SRL model on PropBank, but corresponding QA pairs are not annotated in the PropBank dataset; for example, Large QA-SRL annotates sentences in three domains: Wikipedia, Wikinews and Science.

A reasonable upper bound. We treat the answers that overlap with some argument as our predicted arguments; if two predicted arguments intersect each other, we use their union as a new predicted argument (sketched below). The results are shown in Table 12. This mapping algorithm achieves a span F1 of 56.61, which is a reasonable upper bound for our SRL system.
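The mapping just described amounts to keeping answer spans that overlap some argument and merging intersecting predictions into their union; the same span arithmetic also gives the Intersection/Union statistic quoted above. A minimal Python sketch, assuming half-open token spans (start, end), follows; the helper names are ours.

```python
# Span overlap, IoU, and the union-merge mapping described above.
def iou(a, b):
    inter = max(0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union else 0.0

def merge_overlapping(spans):
    """Merge every pair of intersecting spans into their union."""
    merged = []
    for s in sorted(spans):
        if merged and s[0] < merged[-1][1]:   # intersects previous span
            merged[-1] = (merged[-1][0], max(merged[-1][1], s[1]))
        else:
            merged.append(s)
    return merged

def predicted_arguments(answers, gold_arguments):
    """Keep answers overlapping any gold argument, then merge."""
    overlapping = [a for a in answers
                   if any(iou(a, g) > 0 for g in gold_arguments)]
    return merge_overlapping(overlapping)
```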
Baselines. We consider three strong baselines for learning an SRL parser from the Large QA-SRL dataset.

• Rules + EM. We first use rules to convert question-answer pairs into SRL labels. We keep the labels with high precision and then use an EM algorithm for bootstrapping, with a simple BiLSTM as our SRL model. The results are shown in Table 12. We attribute the low token F1 to the low partial rate of tokens (37.97%) after initialization.

• PerArgument + CoDL + Multitask. We consider a simpler setting here, in which a small number of gold SRL annotations are provided as seeds. To alleviate the negative impact of the low partial rate, we train different BiLSTM models for different arguments (PerArgument) and perform global inference to obtain structured predictions. We first use the seeds to train the PerArgument model and then use CoDL (Chang et al., 2007) to introduce constraints, such as SRL constraints, into the bootstrapping. At the same time, we train a model to predict the argument type from question-answer pairs. These two tasks (argument type prediction and SRL) are learned together through soft parameter sharing. In this way, we make use of the information in the question-answer pairs for SRL. Using 500 seeds to bootstrap, the span F1 of our method is 17.77, while the span F1 with only the seeds is 13.65; more details are in Table 12. The baseline model improves by only a few points over the model trained only on seeds.

• Argument Detector + Argument Classifier. Given a small number of gold SRL annotations and a large number of question-answer pairs, there are two ways to learn an end-to-end SRL system. One is to assign argument types to answers in the context of the corresponding questions using rules, and then learn an end-to-end SRL model on the predicted SRL data; this is exactly our first baseline, Rules + EM. However, the poor precision of argument classification leads to unsatisfactory results. The other is to learn from a small number of seeds and bootstrap from a large number of QA pairs, which is our second baseline, PerArgument + CoDL + Multitask. However, bootstrapping cannot improve argument detection much, leading to mediocre results. We also notice that argument detection is hard with a small amount of annotated data, whereas argument classification is easy with a little high-quality annotated data. Fortunately, most answers in Large QA-SRL overlap with arguments, and the mapping result for argument detection, about 56.61, is good compared with the two baselines. We therefore learn two components for SRL: one for argument detection and one for argument classification. We use the span-based model of FitzGerald et al. (2018) for argument detection; the argument classifier is trained on predicates in the PTB set of QA-SRL. The results are shown in Table 12.

Using Formal Semantic Schemes in Downstream Tasks

There have already been some attempts to use formal semantic schemes in downstream tasks; we discuss three types of application here. Traditionally, semantic parsers can be used to extract semantic abstractions and can be applied to question answering (Khashabi et al., 2018). Second, dependency graphs, such as SDP, can be incorporated into neural networks; for example, Marcheggiani and Titov (2017) encode semantic information with Graph Convolutional Networks (GCN). Third, to use constituent-based formal semantic representations, one can encode related semantic information through multi-task learning (MTL); Strubell et al. (2018) mention such an application.

Discussions. The main difficulty in retrieving formal semantic representations for downstream tasks is learning a good parser for formal semantic schemes from question-answer pairs.

QAMR as a Symbolic Representation

Learning a QAMR parser. In Large QA-SRL, the exact match for question generation is only 47.2, although the span detector achieves an exact match of 82.2. As for QAMR, Michael et al. (2017) show that question generation only achieves a precision of 28% and a recall of 24%, even with fuzzy matching (multi-BLEU > 0.8). From these results, we know that it is question generation that mainly hinders learning a QAMR parser.

Using QAMR in Downstream Tasks. Stanovsky et al. (2018) show that QAMR can be converted to a list of OpenIE extractions using a syntactic dependency parser, and that augmenting their training data with conversions of the QAMR dataset yields state-of-the-art performance on several OpenIE benchmarks. However, similar to SRL, OpenIE is more a formal semantic representation than a downstream task. It is still unclear how to use QAMR in downstream tasks such as general QA and textual entailment.
Another direction is to use the QAMR graph, as shown in Michael et al. (2017), for downstream tasks. However, the labels of the graph edges are questions, which makes the graph difficult to use directly. Alternatively, we could simply use the relations without labels, but that would certainly lose important information.

Discussions. In short, learning a QAMR parser for downstream tasks is mainly hindered by question generation, and how to use the full information of QAMR in downstream tasks is still unclear.

Conclusion and Future Work

In this paper, we investigate an important problem in NLP: can we make use of low-cost signals, such as QA signals, to help related tasks? We retrieve signals from sentence-level QA pairs to help NLP tasks via two types of semantics. For tasks with a single input sequence, such as SRL and co-reference, we propose standard QALM, which provides latent sentence-level representations. For tasks with a paired input sequence, such as TE and MRC, we propose conditional QALM, which provides latent sentence-level representations related to some attention (e.g., questions for paragraphs in MRC and premises for hypotheses in TE). Experiments on five single-input tasks and two paired-input tasks show that our standard QALM and conditional QALM are indeed effective, especially in the low-resource setting.

This paper can be viewed from three perspectives. First, we propose a new practical framework for incidental supervision. We successfully retrieve incidental supervision signals by pre-training standard QALM and conditional QALM on question-answering data. This pre-training method distinguishes itself from previous incidental supervision methods, such as response-driven learning (Clarke et al., 2010), in that it can use general signals for many tasks rather than task-specific signals.

Second, QALM is a new and applicable semantic representation. Previous formal semantic representations, such as SRL and AMR, not only suffer from costly annotation but are also inflexible because of their pre-defined formalisms. Moreover, it remains unclear how other tasks can take advantage of the QA pairs in QA-SRL and QAMR. To benefit from cheap annotation and flexibility, we still use the format of question-answer pairs to collect data, as in QA-SRL/QAMR. However, instead of using question-answer pairs as the final semantic representation, our standard and conditional QALM retrieve a latent distributional representation of the signals carried by these question-answer pairs.

Third, QALM is a new language model for contextualized word representations. Previous unsupervised LMs (ULMs), such as ELMo and BERT, do not perform well on semantic tasks (Tenney et al., 2019b). Although CoVe (McCann et al., 2017) is trained on translation signals, it is significantly worse than ELMo and BERT at helping NLP tasks (Peters et al., 2018) and in probing analysis (Tenney et al., 2019b). Our standard and conditional QALMs provide extra information that BERT does not include, especially on semantic tasks. Our QALMs complement these ULMs by making use of low-cost signals, and the approach is orthogonal to the choice of ULM (e.g., we could build similar QALMs on XLNet (Yang et al., 2019)).

Future work involves various directions; we list a few here. First, probing the contextualized word embeddings of our QALMs and understanding their sentence representations are worth exploring. Second, it is interesting to see how existing resources can best be utilized by QALMs.
One example is to design heuristic rules to generate simple question-answer pairs from co-reference datasets. Additionally, our QALMs can be improved with the help of stronger language models, such as XLNet or RoBERTa (Liu et al., 2019). The problem of QALM performing poorly on long sentences also needs to be addressed.
The T-Blep: A Soft Optical Sensor for Stiffness and Contact Force Measurement

This paper presents the Tactile Blep (T-Blep), an optical soft sensor that can measure the stiffness of, and the contact force exerted on, different materials. The sensor consists of an inflatable membrane with an optical element inside. The T-Blep can switch between stiffness-detection and force-detection modes by changing the pattern followed by the internal pressure of the membrane. Simulations reveal that a 1 mm-thick membrane enables differentiation of extra-soft, soft, and rigid targets. Furthermore, the sensitivity and full-scale output (FSO) of the force estimation can be adjusted by varying the internal pressure. Force detection experiments exhibit a sixfold increase in detectable force range as the internal pressure varies from 10 kPa to 40 kPa, with a force peak of 5.43 N and sensitivity up to 331 mV/N. A piecewise force reconstruction method provides accurate results even in challenging conditions (R² > 0.994). Stiffness detection experiments reveal distinguishable patterns of pressure and voltage during indentation, resulting in a classification accuracy of 97%.

Introduction

The sense of touch helps us instinctively gather crucial information from our surroundings, enabling us to explore and navigate unfamiliar environments. In the medical field, palpation is used in breast examinations [1-3], prostate checks [1], and initial tumor identification [2]. Similarly, in agriculture, a gentle squeeze of a fruit, as detailed by Erkan et al. [4], offers vital tactile feedback for assessing ripeness, highlighting touch's pivotal role in exploring and manipulating our surroundings.

The importance of touch also takes center stage in robotics. Integrating tactile feedback into robotic systems can significantly reduce grasping forces during operations [5] and provide feedback in vision-occluded scenarios, e.g., surgical operations [6]. This reduction translates into energy savings and can also help prevent damage to the target due to excessive grasping force.

Traditionally, haptic feedback relies on applied force to ensure that robotic systems exert sufficient force on known and well-defined targets. However, open challenges remain in scenarios where the target is unknown and deformable. In such cases, predefining the grasping force can lead to overestimation, risking object damage, or to underestimation, resulting in grip loss. To address these issues, a preliminary non-destructive investigation of the stiffness of the target emerges as a crucial strategy. This approach facilitates determining the correct force range required for a successful grasp, thereby leveraging the adaptability and precision of robotic manipulation [7]. Stiffness detection thus becomes a pivotal factor in optimizing the efficacy of robotic interactions with unpredictable and deformable targets.
Stiffness sensing entails quantifying the resistance of a material or fabric to deformation under an applied force. Conventionally, this is accomplished with a rigid indenter of known properties, allowing the reconstruction of the elastic modulus via the Hertzian force-indentation relation [8,9] or application-specific relations [10,11]. Despite their common use in portable solutions, these approaches often encounter challenges in adaptation and integration into robotic grippers, owing to miniaturization issues and the need to guarantee a load normal to the target surface. An alternative approach, which allows for miniaturization, employs two or more sensing elements with distinct elastic constants to activate a capacitive sensor [3,12-15]. Through a comparative analysis of the capacitive sensor's response to the different displacements of these elements, the stiffness of the target material can be estimated. Being based on capacitive transduction, these solutions typically exhibit linearity and temperature invariance [16-18]. Nonetheless, they are susceptible to electromagnetic interference and commonly require complex measurement units [16-18]. Furthermore, since they measure only displacement, additional sensing elements are required to provide the system with contact-force information, making integration challenging where space is limited, such as in a robotic finger.

Versatile solutions that can simultaneously assess stiffness and force, using the inverse magnetostrictive effect, have recently been introduced [19,20]. In these cases, the authors leveraged the interaction between Galfenol cantilevers and permanent magnets to detect force and displacement; the stiffness of the target materials was then effectively reconstructed by applying Hooke's law. While magnetic transduction makes these solutions suitable for high-resolution measurement of forces in the range 0 N to 5 N and of target stiffness, the magnetic elements are biased by external magnetic fields and by interactions with ferromagnetic elements in the environment [16-18], reducing the range of possible applications.

In this study, we present a tunable optical soft sensor designed for the dual detection of stiffness and force. Unlike conventional methodologies employing capacitive and magnetic transduction mechanisms, our approach integrates optical elements, ensuring high resolution while responding robustly in the presence of electromagnetic fields [16-18].

The T-Blep comprises an inflatable membrane housing an optical sensing element. Its functionality as either a force or a stiffness sensor depends on the pressurization mode. Under dynamic pressure, the sensor operates as a stiffness detector, employing a k-nearest neighbors (KNN) algorithm to categorize the target into three distinct softness levels: extra-soft, soft, and rigid. Conversely, under static pressure, it seamlessly transitions into a force detection mode. The T-Blep thus introduces a versatile and adaptive solution developed with a view toward future integration into a robotic grasping system.
Structure and Principle of the Sensor

The structure and dimensions of the sensor used in the experimental activity are shown in Figure 1a, where an array of four T-Bleps is presented. The array of detection units and the associated electronics were chosen so that multiple sensors could be fabricated at once and the variability in each unit's response could be taken into account. Moreover, having four distinct contact points for stiffness detection makes it possible to recognize small differences in the dataset due to the natural local inhomogeneity of the material; the gathered data are thus more representative of real-world scenarios. Each sensor array comprises four optoelectronic sensing elements, equally distributed at a distance of 10 mm between four chambers on the surface and oriented to measure force and displacement along the direction normal to the surface. As shown in the cross-section of a single sensing element in Figure 1a, the T-Blep consists of three main layers: (i) the flexible printed circuit board (PCB), (ii) the bottom non-reflective layer, and (iii) the top reflective layer. While the PCB is a flexible layer, the top and bottom layers are made of soft silicone. This follows from the need to create an inflatable membrane that deforms at relatively low pressure; hence, materials with significant elongation at break, low elastic modulus, and high stretchability are required. The operating principle of the sensor is based on the QRE113 miniature reflective object sensor (Semiconductor Components Industries LLC, Scottsdale, AZ, USA), as shown in Figure 1c. Schematically, it can be represented as an integrated circuit comprising a photodiode and a phototransistor. The QRE113 uses the change in collector current to measure the distance between the sensing element and the reflective surface of the target by emitting and receiving light in the infrared range (~940 nm). The initial distance between the optoelectronic elements and the top layer was chosen according to the characteristic response of the QRE113, so as to allow force and stiffness detection depending on the actuation pattern. As shown in Figure 1b,c, force sensing is obtained by applying a constant positive pressure within the chambers, so that the response to contact is smoother. In contrast, when the applied pressure follows a ramp pattern from 0 kPa to 45 kPa, the trend of the indentation depth allows the system to recognize the stiffness of the target.
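As a minimal illustration of the two actuation patterns in Figure 1b, the following NumPy sketch generates the two pressure command profiles. The function names, sampling rate and default values are assumptions; only the profile shapes (ramp for stiffness, constant for force) come from the description above.

```python
# Pressure command profiles for the two T-Blep operating modes.
import numpy as np

def stiffness_profile(duration_s, rate_hz=50, p_max_kpa=45.0):
    """Ramp from 0 kPa to p_max_kpa: dynamic pressure, stiffness mode."""
    n = int(duration_s * rate_hz)
    return np.linspace(0.0, p_max_kpa, n)

def force_profile(duration_s, rate_hz=50, p_set_kpa=20.0):
    """Constant set-point: static pressure, force mode. p_set_kpa tunes
    the sensitivity/range trade-off (10-40 kPa in the experiments)."""
    n = int(duration_s * rate_hz)
    return np.full(n, p_set_kpa)
```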
Fabrication Process and Materials

The fabrication process and materials were chosen to minimize the number of fabrication steps while still meeting the mechanical and optical requirements for proper sensor operation. Both layers were made of polydimethylsiloxane (PDMS) (Sylgard 184, Dow Inc., Midland, MI, USA) at a 10:1 mix ratio to ensure proper bonding between the bottom and top silicone layers. However, to fabricate an adequately reflective top layer, 4.3 wt% of titanium dioxide (TiO2) particles were added to the initial PDMS formulation, following the procedure reported in [21]. In both cases, the formulation was mixed for 30 s using a Thinky ARE-250 mixing and degassing machine (THINKY USA, Inc., Laguna Hills, CA, USA), followed by an additional 5 min degassing step in an S-26P-NL vacuum degassing system (Easy Composites EU BV, Rijen, The Netherlands). Based on casting techniques, the fabrication followed the steps presented in Figure 2.

The three-piece mold shown in Figure 2a-d was designed in SolidWorks (Dassault Systèmes SolidWorks Corporation, Waltham, MA, USA) and 3D printed by fused deposition modeling (FDM) on an Ultimaker S3 (Ultimaker B.V., Geldermalsen, The Netherlands) using an acrylonitrile butadiene styrene (ABS) filament (Ultimaker B.V., Geldermalsen, The Netherlands), suitable for oven curing. Once parts A and B of the mold were assembled, PDMS was poured into the system. The mold was placed in the degassing chamber for 5 min to remove any trapped air pockets, then cured at 60 °C for 1.5 h. Once this first step was completed, part B of the mold was replaced with part C, and the process described above was repeated, this time using the PDMS+TiO2 formulation. Finally, as shown in Figure 2e,f, the resulting two-layer silicone part was removed from the mold, aligned with the sensing elements on the PCB, and attached to the PCB layer using the one-component silicone adhesive Sil-Poxy™ (Smooth-On, Inc., Macungie, PA, USA), taking special care to tightly seal the four chambers and the tube used to supply pressure during actuation.

Simulation and Experimental Activity

To adequately characterize the sensor under the two operating conditions, finite element analysis (FEA) simulations and experimental tests were performed.

FEA Analysis

The FEA study was conducted in COMSOL Multiphysics 5.6 (COMSOL Inc., Stockholm, Sweden), assuming the 2D representation of the T-Blep sensing element depicted in Figure 3. Since the objective of the simulation was to verify the existence of a correlation between the displacement of the top layer and both the target stiffness and the force applied at the contact point (by the T-Blep on the target object), a simplified model excluding the optical components of the sensor was investigated. In addition, given the symmetry of the sensing unit, a 2D axisymmetric model was used. The top and bottom layers, made of the same silicone material (PDMS), were represented as a single body in both studies. For the stiffness analysis, the sensor and target were assumed to be in contact before inflation. A fixed constraint was applied to the bottom boundary of the T-Blep and the top boundary of the target, while a distributed pressure (P) ranging from 0 kPa to 45 kPa was applied to the T-Blep membrane, as shown in Figure 3a. The membrane was modeled as a hyperelastic material, assuming a Neo-Hookean model, while the target substrate was modeled as an elastic material. For simplicity, a free-tetrahedral mesh was assumed for both bodies.

For the force analysis, the two bodies were again assumed to be in contact; however, as shown in Figure 3b, the T-Blep was pre-deformed by a fixed internal pressure (P). Moreover, the fixed constraint was applied only to the bottom boundary of the T-Blep, while the target body was free to move along the vertical direction under a normal load (F). Unlike the stiffness study, only the T-Blep was modeled as a hyperelastic material, while the target was assumed to be an elastic, undeformable solid body. In both cases, a parametric study was conducted. The dimensions of the model are visible in Figure 3, and the main parameters used in the two studies are summarized in Table 1.

Experimental Protocol

Three tests were performed to demonstrate the multi-functionality of the sensor, and the resulting data were post-processed in MATLAB R2023 (The MathWorks, Inc., Natick, MA, USA).
All tests were conducted using a custom platform, shown in Figure A1, and repeated on three different samples of the sensor array. A micrometric servo-controlled translation stage M-111.1DG (Physik Instrumente, Karlsruhe, Germany), interfaced with a triaxial load cell ATI Nano17 (ATI Industrial Automation, Inc., Apex, NC, USA), was used to control the distance between the sensor and the target and to record the applied force. At the same time, a flow regulator ITV0010 (SMC Corporation, Tokyo, Japan) was used to set the pressure inside the T-Blep to values ranging from 0 kPa to 45 kPa. Two DAQ systems, USB-6218 (National Instruments, Austin, TX, USA), were employed to control the platform, regulate the pressure, and acquire the data at a sampling rate of 50 Hz, while a custom PCB board was used to regulate the QRE113 light intensity and phototransistor gain.

To characterize stiffness detection, a set of nine specimens was used to generate the dataset for training the classifier to recognize extra-soft, soft, and rigid targets, as shown in Figure 4. Each specimen was fabricated with a diameter of 50 mm and a thickness of 5 mm. The materials used were Ecoflex Gel™ (Smooth-On, Inc., Macungie, PA, USA), Ecoflex 00-30™ (Smooth-On, Inc., PA, USA), DragonSkin 10™ (Smooth-On, Inc., PA, USA), DragonSkin 30™ (Smooth-On, Inc., PA, USA), Sylgard 186 (Dow Chemical Company, Midland, MI, USA), Smooth-Sil 960 (Smooth-On, Inc., PA, USA), TPU 95A (Ultimaker B.V., The Netherlands), ABS (Ultimaker B.V., The Netherlands), and plexiglass. Each material was selected based on its Young's modulus, as detailed in Table 2. Even though the Young's modulus of a material of unknown geometry cannot reliably serve as an indicator of stiffness, such an inference is permissible in the present protocol, since the geometry is predetermined and identical across samples.

Once the sensor was in contact with the desired specimen, 20 inflation cycles from 0 kPa to 45 kPa were performed and the response was recorded. The results were digitally filtered and processed by a classifier trained with the k-nearest neighbors algorithm (sketched at the end of this section).

A different approach was employed to characterize force detection. Keeping the pressure inside the T-Blep constant, a flat rigid indenter was used to systematically indent the four sensing elements until contact occurred between the indenter surface and the non-inflated area of the sensor. The experiment was repeated at 10 kPa, 20 kPa, 30 kPa and 40 kPa, and 20 indentation cycles were performed at each pressure level to ensure a comprehensive assessment.
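A minimal scikit-learn sketch of the stiffness classification pipeline is given below: two features per sample (commanded pressure and T-Blep voltage), three softness labels, and a grid search over KNN hyperparameters, as described in the next section. The synthetic stand-in data and the parameter grid are placeholders; Table 3 lists the hyperparameter values actually selected.

```python
# KNN stiffness classifier on (pressure, voltage) features, with a
# grid search maximizing validation accuracy.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

# X: (n_samples, 2) columns = [pressure_kPa, voltage_V];
# y in {"extra_soft", "soft", "rigid"} -- random stand-ins here.
rng = np.random.default_rng(0)
X = rng.random((3051, 2)) * [45.0, 3.3]
y = rng.choice(["extra_soft", "soft", "rigid"], size=3051)

grid = GridSearchCV(
    KNeighborsClassifier(),
    param_grid={"n_neighbors": [1, 3, 5, 9],
                "weights": ["uniform", "distance"]},
    scoring="accuracy", cv=5)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```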
Processing Tactile Reading

The scheme in Figure 5 shows the two approaches used for stiffness and force detection. In both cases, the signal was pre-processed by a digital low-pass filter. For stiffness detection, a dataset was created to train a KNN classifier. The input to the classifier comprises two features: the current pressure imposed on the regulator and the voltage read by the T-Blep. For each specimen, 1017 points were obtained by concatenating multiple cycles, so that the dataset was balanced. Multiple subsets of the dataset were created by limiting the pressure range imposed on the T-Blep, in order to optimize the acquisition procedure. A categorical vector with the categories extra-soft, soft, and rigid was associated with each row of the dataset. The hyperparameters of the classifier, summarized in Table 3, were chosen with a grid search maximizing the accuracy. A further analysis was implemented by training a KNN with the same hyperparameters using the name of the probe material as output, as reported in Appendix C.

For force detection, in contrast, the signal was first normalized, the normalized quantity corresponding to the exponential difference between the no-load response and the current one. Finally, the results were elaborated through the piecewise force reconstruction function F(χ) of (3), in which f1(χ), f2(χ), and C are polynomials whose coefficients are functions of the pressure (P) imposed on the T-Blep, H is the Heaviside step function, and x_c and x_n define the interval that maximizes the R-squared of F(χ) for the prescribed P. While (3) allows a direct estimation of the force from the normalized signal, the sensitivity and full-scale output (FSO) can be adjusted online, since the coefficients and parameters used for both normalization and force estimation depend only on the pressure imposed on the regulator within the range 10 kPa to 40 kPa. Consequently, the sensor's stiffness can be fine-tuned, expanding the range of detectable forces. A comprehensive elucidation of this methodology is provided in the supplementary information reported in Appendix B.
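A minimal NumPy sketch of a piecewise reconstruction with the structure described above follows: two polynomial branches gated by Heaviside steps at a pressure-dependent breakpoint x_c. Every numeric coefficient below is a placeholder assumption, since the fitted expressions of Appendix B are only partially available.

```python
# Piecewise force reconstruction in the style of Eq. (3): two branches
# gated by Heaviside steps, with coefficients computed online from the
# commanded pressure P. All coefficients are illustrative placeholders.
import numpy as np

def coeffs_from_pressure(p_kpa):
    """Hypothetical pressure-dependent fits: branch polynomials
    (highest-degree coefficient first) and the breakpoint x_c."""
    f1 = np.array([0.8 + 0.02 * p_kpa, 0.0])   # [slope, intercept], branch 1
    f2 = np.array([1.5 + 0.05 * p_kpa, -0.1])  # [slope, intercept], branch 2
    x_c = 0.2 + 0.005 * p_kpa                  # domain separation point
    return f1, f2, x_c

def reconstruct_force(chi, p_kpa):
    """chi: normalized optical signal; returns the force estimate in N."""
    f1, f2, x_c = coeffs_from_pressure(p_kpa)
    chi = np.asarray(chi, dtype=float)
    below = np.heaviside(x_c - chi, 1.0)       # H(x_c - chi): branch 1 active
    above = np.heaviside(chi - x_c, 0.0)       # H(chi - x_c): branch 2 active
    return below * np.polyval(f1, chi) + above * np.polyval(f2, chi)
```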
The graphs presented in Figure 6e-h describe the outcomes of force simulation, considering a membrane thickness of 1 mm.In each scenario, the displacement represents the distance from the base of the T-Blep; returning its maximum value at the idle state when no load is applied to the sensor.As expected, the rising internal pressure within each T-Blep induces an increase in the detectable maximum force.This outcome arises from a combination of factors intrinsically linked to the pressure, encompassing the stiffness of individual elements and the initial height of the domes. Given the extreme cases of pressure at 10 kPa and 40 kPa, the initial dome height goes from 0.91 mm to 1.97 mm and the structure gets more rigid.Consequently, a displacement of 0.5 mm generates a force of 0.15 N at 10 kPa, up to nearly 1.5 N when the imposed pressure is 40 kPa.The same happens assuming intermediate pressure values at 20 kPa and 30 kPa, where the same displacement generates forces equal to 0.52 N and 0.98 N, respectively. The results of both simulations confirmed how the same sensor can be used as a force sensor or to recognize the stiffness of the target, imposing a static or dynamic pressure, respectively.In the particular case of static pressure, it has been demonstrated that increasing pressure corresponds to greater ranges of measurable force. Stiffness Detection Figure 7 shows the indentation outcomes for the three samples, where higher voltages correspond to larger indentation.The curves reflect measurable differences in indentation depths, thus discriminating between specimens with different stiffnesses.While the theory suggests a correlation between larger and smaller indentation depths into softer and rigid materials, respectively, even minute misalignments of the two contact surfaces can introduce errors that prove wrong the previous statement.For example, in Sample 1 (Figure 7a), Dragonskin 30 has the largest indentation.Similarly, in Sample 3 (Figure 7c), Ecoflex 30 aligns with indentation depths more indicative of soft materials, which leads to a wrong conclusion in both cases.Although these cases may be considered outliers in a controlled environment, in a real-world scenario, they are inevitable.For this reason, to increase the system's robustness, it was preferred to include them in the dataset used in training the classifier. The classifier underwent iterative training across all possible pressure intervals within the 0 kPa to 50 kPa range to optimize outcomes with the available data.As a result, within 70% of the evaluated intervals, the validation accuracy consistently attains values equal to or exceeding 0.9.However, in only 30% of the intervals, the classification reaches 0.95 or higher accuracy. Figure 8 shows the confusion matrix for the KNN trained on the interval 10 kPa to 44 kPa.On this interval, the classification performs best, reaching a validation accuracy (VA) of 0.97.Specifically, the false negative rate (FNR) results are always below 5% with a peak of 4.7% to extra soft material, which in 4.2% of the cases are wrongly classified as rigid.Similarly, soft materials are improperly identified as extra soft in 4% of cases, while only in 0.4% of cases rigid materials are wrongly identified as extra soft. . Confusion matrix of the optimized KNN results, within the optimal pressure range (10 kPa to 44 kPa).The VA result 97%, while FNR remain below 5%. 
Force Detection

The graphs presented in Figure 9 illustrate the outcomes of the experimental characterization of force sensing. Following theoretical expectations, the stiffness of the dome increases with the internal T-Blep pressure. Consequently, as the pressure varies from 10 kPa to 40 kPa, the detectable force range undergoes a sixfold expansion: at 10 kPa, the maximum force recorded is 0.89 N, while at 40 kPa this value rises to 5.49 N. Similar trends are observed at the intermediate pressures of 20 kPa and 30 kPa, where the maximum forces recorded are 2.31 N and 3.81 N, respectively.

As explained in Section 2.3.3, force reconstruction is achieved using a piecewise function, whose general expression is given in (3). This methodology, coupled with the real-time computation of the parameters and domain intervals from the pressure, yields a highly precise approximation of the original data: even in the most adverse scenario, tested at 40 kPa, R² is 0.994, and it approaches 1 at 10 kPa. The partitioning of the domain into two distinct parts, outlined in Figure 10, reveals two regions characterized by different sensitivities (S) at a given pressure. In general, the sensitivity diminishes as the imposed pressure increases. Consistent with this, optimal performance is achieved at 10 kPa, where the sensitivity is 259 mV/N for forces up to 0.29 N, rising to 331 mV/N for loads in the range 0.29 N to 0.89 N. Conversely, under an imposed pressure of 40 kPa, the minimum sensitivity is 62 mV/N for forces below 3.05 N, reaching a maximum of 112 mV/N for forces in the range 3.05 N to 5.41 N. Analogous to the observations for the maximum detectable force, intermediate pressure values yield sensitivities within the range 62 mV/N to 331 mV/N, supporting the capability to modulate the sensor's performance through the internal pressure of the T-Blep. Table 4 provides a comprehensive overview of the sensor characteristics at 10 kPa, 20 kPa, 30 kPa and 40 kPa, and the results are summarized in the Supplementary Video S1 linked in the Data Availability Statement.

Conclusions

This study introduced the T-Blep, an optical soft sensor that can be tuned for dual functionality: stiffness and force detection. Unlike conventional methods, the device integrates optical sensing elements, ensuring high resolution and robust responsiveness in the presence of electromagnetic fields. The main parts of the T-Blep are an optical sensor and an inflatable membrane, whose internal pressurization dictates the device's functionality.

The FEA study confirms the hypothesis of using the same sensor to measure stiffness and force under different pressurization conditions. The simulation outcomes show that, within the explored pressure range, a 1 mm-thick membrane enables distinguishing between extra-soft, soft, and rigid targets, with indentation depths typically not exceeding 0.9 mm. Furthermore, the force simulation demonstrates an increase in the maximum detectable force as the internal pressure increases, supporting the idea of adjustable sensitivity across a broad spectrum of measurable forces.
Force detection experiments show a sixfold increase in the detectable force range as the internal T-Blep pressure varies from 10 kPa to 40 kPa. The sensor's capability to accommodate a broad internal pressure range, spanning from 0 kPa to 40 kPa, facilitates the precise regulation of its stiffness. Consequently, this fine-tuning allows the sensor's sensitivity to be dynamically adjusted, for loads reaching a maximum of 5.45 N. The force reconstruction method, employing a piecewise function, provides accurate approximations even in challenging conditions, with a coefficient of determination (R²) approaching unity at 10 kPa and consistently exceeding 0.994. The sensitivity analysis highlights the ability to adjust the sensor's performance through the internal pressure, showcasing in particular a high sensitivity to loads in the range 0-2.3 N when the internal pressure is below 20 kPa.

Experiments focused on stiffness detection show distinguishable patterns in the indentation outcomes among the various samples, facilitating differentiation based on material stiffness. Nevertheless, challenges arise due to inevitable misalignments between the two contact surfaces, indicating that the results of this solution are still influenced by the load direction, a common concern for most stiffness sensors found in the literature. The optimized KNN classifier, adapted across pressure intervals, helps mitigate this challenge, achieving a peak accuracy of 97% within the 10 kPa to 44 kPa pressure interval.

Compared with other solutions present in the literature, the one presented here does not provide a precise measurement of the Young's modulus of the target material.

While the results of this work underscore the efficacy of this sensing approach, it must be considered that the study was conducted in a controlled environment. This setting facilitated the acquisition of a dataset with repeatable measurements, but limited the exploration of potential disturbances inherent in real-world scenarios. One aspect that could affect the T-Blep's performance is variation in ambient light. Although ambient light fluctuates regularly with natural daylight, this aspect was not extensively assessed in the present study, so insensitivity to any external light cannot be guaranteed. For example, when the internal pressure of the T-Blep reaches its maximum of 40 kPa, and the thickness of the membrane is therefore at its minimum, a change in signal quality was noticed compared with lower actuation pressures. This highlights the necessity, especially for potential integration into a robotic system, of further investigating the impact of external light sources with known wavelength spectra.

In contrast to existing solutions, the T-Blep offers the advantage of measuring both force and stiffness with the same structure and transduction system. While it may not offer precise stiffness measurements like certain alternatives [10-15], the T-Blep provides a valuable classification into three categories. Although this could be seen as a limitation, it allows the sensor response to be related directly to a stiffness class while ignoring the force, with advantages in terms of computational effort. Moreover, the expanded study of material recognition reported in Appendix C suggests that, with a more extensive training dataset, the same approach could offer a more precise indication of stiffness.

Concerning force detection, the T-Blep outperforms comparable stiffness-measuring solutions. With a range of 0-5.45 N, the T-Blep extends by 0.45 N the 5 N limit of Li et al.'s solution, while nearly tripling Weng et al.'s output [19,20]. The T-Blep's sensitivity aligns with existing solutions in the same force range, but it can reach up to 2.5 times higher sensitivity, surpassing the 121 mV/N average of the cited solutions.

In conclusion, this study demonstrates that the T-Blep approach yields a versatile and adaptive sensing solution for future integration into robotic grasping systems, showcasing its potential for enabling grasping through stiffness and force detection. The study lays the groundwork for further improvements, such as invariance to misalignment and miniaturization, which are crucial for the sensor to be integrated into a robotic system.
Concerning force detection, the T-Blep outperforms comparable solutions in measuring stiffness.With a range of 0-5.45 N, T-Blep rises the limits of 0.45 N compared to the solution of Li et al.'s 5 N while nearly tripling Weng et al.'s outputs [19,20].The T-Blep's sensitivity aligns with existing solutions in the same force range but can achieve up to 2.5 times higher sensitivity, surpassing the average of 121 mV/Nof the cited solution. In conclusion, this study demonstrates how the T-Blep approach results in a versatile and adaptive sensing solution for future integration into robotic grasping systems, showcasing its potential for enabling grasping through stiffness and force detection.The study lays the groundwork for further improvements, like the invariance to misalignments and miniaturization, which is crucial for the sensor to be integrated into a robotic system. The coefficients are obtained through the following polynomials: Similarly, x n is associated with a polynomial of the third degree: x n = −1.4071× The aforementioned expressions were derived via interpolating experimental data obtained under the prescribed pressure conditions. Appendix C In addition to employing the 3-label KNN for differentiating extra soft, soft, or hard materials, an additional 9-label KNN was trained to identify the specific nine materials used in the experiment.Henceforth, we will refer to them as the 3-label KNN and the 9-label KNN, respectively.While the use of material labels understandably led to a reduction in overall classifier accuracy to approximately 90%, the results provided insights into which materials contributed to errors in the 3-label KNN. The analysis revealed that, in the case of the 9-label KNN, the most significant errors occurred between materials previously classified as extra soft or soft, as reported in Figure A2.Excluding errors within the same material category, it was evident that most of the remaining errors were attributed to the misclassification of Dragonskin 30 and Ecoflex 30, accounting for 7.9%of occurrences.Similarly, PDMS and Ecoflex were confused 5.2% of the time, while confusion between Ecoflex 10 and Smooth Sil 960 occurred only 3.5% of the time.Despite certain materials sharing characteristics, the errors in both KNNs could not be justified by similarities in Young's modulus.As explained in the section, since stiffness detection relies on indentation depth under different pressure values, it is more plausible that an initial non-zero distance might lead to an initial false positive, which subsequent readings cannot rectify, suggesting the necessity to reduce the dependency of the sensor performance to the initial positioning. Figure 1 . Figure 1.Working Principle and T-Blep details.(a) Schematic representation of the fully assembled T-Blep array, where silicone multi-material layer and PCB are attached through silicone adhesive.The four bleps are indicated by circles spaced by 10 mm.All the indicated dimensions are in mm; (b) Pressure profiles adopted during force and stiffness detection tasks.While the ramp for stiffness detection remains constant, the pressure value for force detection can be adjusted based on the targeted force range; (c) Working principle in stiffness detection (ramp pressure pattern) and force detection (constant pressure). Figure 2 . Figure 2. 
T-Blep fabrication process.(a,b) Manufacturing steps to fabricate the silicone layer made of PDMS; (c,d) Manufacturing steps to fabricate the to complete the silicone membrane with the reflective layer; (e) The membrane and the Printed Circuit Board (PCB) are joined using Sil-Poxy adhesive; (f) Finalized sensor configuration with pneumatic silicone tubes attached to the inlet holes, securely sealed with SilPoxy adhesive. Figure 3 . Figure 3. 2D axisymmetric model employed in the FEA study with a membrane thickness of 1 mm.(a) Mesh configuration utilized for stiffness simulation.(b) Mesh design utilized for force analysis, considering the T-Blep pressurized to a specified pressure. Figure 4 . Figure 4. Samples and specimens used in the experimental activity.Arranged in columns from left to right: Extra-Soft specimens, Soft specimens, Rigid specimens, and T-Blep Arrays. Figure 5 . Figure 5. Schematic representation of the methodology for stiffness and force detection.The sensor operates as a stiffness detector during dynamic pressure increase and as a contact force detector under static pressure.Force reconstruction employs a polynomial approach with coefficients computed online, while stiffness detection utilizes a KNN classifier. Figure 6 Figure 6 validate in an FEA study the hypothesis of using the same sensor for measuring stiffness and force under different actuation conditions. Figure 6 . Figure 6.Results of the FEM analysis.(a-d) Displacement variations in response to changing membrane thickness and target stiffness under increasing pressures in the range 0 kPa to 45 kPa.The configurations are presented as follows: (a) Membrane thickness: 0.5 mm; (b) Membrane thickness: 1.0 mm; (c) Membrane thickness: 1.5 mm; and (d) Membrane thickness: 2.0 mm.(e-h) Displacement variations as a response to force in the range 0 N to 3 N when the T-Blep pressure results fixed at: (e) 10 kPa; (f) 20 kPa; (g) 30 kPa; and (h) 40 kPa. Figure 7 . Figure 7. Sensor response variation to specimens with different stiffness levels during the experiment under pressure ranging from 0 to 40 kPa.Each graph corresponds to a distinct sample: (a) Sample 1, (b) Sample 2, and (c) Sample 3. Figure 9 . Figure 9. Normalized response of the three samples at different imposed pressures.For each pressure reported, the mean curve was used to generalize the approach.(a) 10 kPa; (b) 20 kPa; (c) 30 kPa; (d) 40 kPa. ForceFigure 10 . Figure 10.Results of force reconstruction for different pressure levels, using the piecewise function reported in (3).For each graph, the dashed line represents the domain separation point that maximizes the goodness of fit (R 2 ).(a) 10 kPa; (b) 20 kPa; (c) 30 kPa; (d) 40 kPa. Figure A2 . Figure A2.Confusion matrix of KNN trained to recognize the material. Table 1 . Parameters used to model the materials in the FEA studies. Table 2 . Young's modulus of the specimens used in the experiment, as reported in the literature. Table 4 . Performance of the sensor for the examined pressure.
2024-02-04T16:16:18.755Z
2024-02-01T00:00:00.000
{ "year": 2024, "sha1": "d3c54e0e6dd74e3351a0a447c71e479865f4fd30", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-666X/15/2/233/pdf?version=1706785213", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1b526169c4e241c72dfeaab3c8365b7617956053", "s2fieldsofstudy": [ "Engineering", "Materials Science", "Physics" ], "extfieldsofstudy": [] }
73406671
pes2o/s2orc
v3-fos-license
Epidemiology of Lower Respiratory Tract Infections in Children

Context: Respiratory tract infections are the main cause of children's morbidity and mortality in both the developing and the developed countries. An accurate understanding of the epidemiology of these diseases, and identification of their risk factors, etiology, and seasonality, are critical for a successful treatment and/or prevention program.

Evidence Acquisition: This article aims at offering clinicians a brief update on the recent epidemiology of respiratory infections in pediatrics. It also underlines the fact that any evidence-based recommendation needs more research in different areas.

Results: Almost 150 million new episodes of pneumonia are identified per year worldwide, more than 90% of which occur in developing countries. Nearly 30% of total annual deaths occur in children younger than 5 years old. Viruses remain the most common cause of RTIs. S. pneumoniae and HIB are the main causes of bacterial pneumonia in the world; however, infections due to many of these pathogens can be prevented.

Conclusions: Widespread immunization against influenza, measles, bacillus Calmette-Guérin (BCG), and now pneumococcus has been related to the decline of LRTIs in children.

Lower Respiratory Tract Infections

The most common LRIs in children are bronchiolitis and pneumonia. The most frequent symptoms and signs in these children are cough and an increased respiratory rate. The occurrence of lower chest wall indrawing is indicative of a more severe disease. The most common causes of LRIs are viruses, with RSV a major cause among them (1, 2).

Pneumonia: Pneumonia has both viral and bacterial causes. Bacterial pneumonia is usually the result of Streptococcus pneumoniae (pneumococcus) or Haemophilus influenzae, especially type b (HIB), and rarely Staphylococcus aureus or other streptococci. Chlamydia pneumoniae and Mycoplasma pneumoniae cause atypical pneumonias (3). In young children, the pathogenesis of bacterial pneumonia has been attributed to upper respiratory tract colonization by organisms and aspiration of the contaminated secretions. Viruses account for 40 to 50 percent of pneumonia hospitalizations for children in developing countries. RSV, parainfluenza viruses, adenoviruses, and influenza type A virus are the most significant causes of viral pneumonia (3-5).

Bronchiolitis: Bronchiolitis mainly occurs in children less than one year old, and its incidence declines during the second and third years of life. The clinical features are fever, rapid breathing, lower chest wall indrawing, and wheezing (6). Hyperinflation and the collapse of lung segments occur because of the inflammatory obstruction of small airways. Differentiation between bronchiolitis and pneumonia is difficult for health workers because the symptoms and signs are very similar. The seasonality of RSV in the area and the expertise to identify wheezing may help in diagnosis. RSV is the leading cause of bronchiolitis worldwide and can account for up to 70 or 80 percent of LRIs during the high season. Parainfluenza virus type 3 and the influenza virus are other causes of bronchiolitis (7, 8).

Evidence Acquisition

Acute respiratory tract infection (ARI) is the leading cause of morbidity and mortality in both developing and developed countries (9). WHO recognized respiratory diseases as the second most important cause of death for children under five years in 2010 (10). WHO states that pneumonia is one of the three main causes of newborn infant deaths (11).
Pneumonia was diagnosed in approximately 156 million children in 2008 (151 million in developing countries and 5 million in developed countries) and led to 1.4 million deaths (28-34% of all deaths in those younger than five years of age). Of the 156 million new cases of pneumonia, more than 20 million patients with severe disease need hospital admission yearly (9, 12). WHO reports from developing countries (e.g. Nigeria, Gambia, Senegal, Chad, Cameroon, Burkina Faso, and Mali) demonstrate an ARI incidence rate of 15-21% in children younger than five years old (13). In developing countries, respiratory tract infection accounts for more than 2 million deaths yearly. Pneumonia is the major cause of children's death in these countries (14). In developed countries, the yearly incidence of pneumonia is estimated to be 33 per 10,000 in children <5 years and 14.5 per 10,000 in children 0 to 16 years old (15). Hospitalization rates for pneumonia (all causes) among children younger than two years in the United States have decreased (from 12-14 per 1,000 population to 8-10 per 1,000 population) since the pneumococcal conjugate vaccine was included in the routine childhood immunization plan twelve years ago (16). A recent meta-analysis revealed that 1.9 million children died from ARI in 2000 all over the world, two thirds of them in Southeast Asia and Africa (17). Approximately one in five child deaths (18 percent) worldwide occurred during the neonatal period (the first four weeks of life) (9). The mortality rate in developed countries is at the lowest level (<1 per 1,000) (16).

Risk Factors

Environment-related risk factors have an important role in the incidence of respiratory tract infections in children. The most significant risk factors are malnutrition, low birth weight, nonexclusive breast feeding (especially in the course of the first 4 months after birth), air pollution, indoor crowding, and lack of measles immunization in children under one year of age. The most important risk factors with identified effects are parental smoking, zinc deficiency, the mother's experience as a caregiver, and concomitant diseases (e.g. asthma, diarrhea, heart disease, etc.). Finally, the possible risk factors may include the mother's education, day care attendance, humidity and cold weather, vitamin A deficiency, and outdoor air pollution (9).

Etiology

Respiratory tract infection is caused by both viral and bacterial organisms. It has been known that viral infections are the main causes of mild to moderate pneumonia (especially in the first years of life), while bacterial infections are the leading cause of severe pneumonia (9, 18-20).

Viruses

Viruses have already been recognized as the most common cause of acute respiratory tract infections in young children. According to WHO reports, viruses account for 30 to 67% of pneumonia, mostly occurring in children <1 year (11). A study carried out in Iran in 1960 on children under 5 years old with acute respiratory tract infection found that the contribution of viral agents to acute respiratory tract infection was 54% (less than 10% of which were dual-cause infections). The viruses detected in this study were PI3 (15.8%), RSV (12.9%), Inf A (7.4%), PI1 (6.4%), PI2 (6.4%), adenovirus (5.9%), and Inf B (3.5%) (21).

Respiratory Syncytial Virus (RSV): RSV is the principal viral cause of ARI, detected in 15-40% of children admitted for bronchiolitis and pneumonia in developing countries (22).
RSV is the most significant of all causes of lower respiratory infection in infants and children globally (23). Although a new vaccine is being pursued, RSV still remains a very important, potentially fatal pathogen that leads to pneumonia in children, either alone or in combination with bacterial pathogens. The most severe form of the disease occurs in infants between 3 weeks and 3 months of age (23). In the United States, 85,000-144,000 infants were hospitalized for respiratory infection resulting from RSV (23). Seventy percent of hospitalized infants suffered bronchiolitis, and 20-25% of them were diagnosed with pneumonia (24). In North America and Europe, RSV infection happens in winter and spring. Studies in developing countries with a temperate climate, such as Argentina, have shown a similar seasonal pattern. Studies in tropical countries have shown an increase in the rainy season (17). Milani et al. demonstrated that RSV's clinical spectrum in Iran is similar to that in other countries. Their research showed an incidence of 19.18% in children under 5 years old in Tehran (prevalence was higher in crowded living conditions). According to the study, RSV is a significant cause of hospitalization in winter, mostly occurring in infants >6 months of age. Nearly all of the infected children were up to 2 years old, and less than 50% of the cases appeared in infants >1 year of age (25).

Parainfluenza: After RSV, parainfluenza viruses, types 1, 2, and 3 (PIV-1, PIV-2, and PIV-3), are the second leading cause of viral respiratory infections in young children (26). PIV-1 and PIV-2 are the principal causes of croup, which is mostly seen in children between 6 months and 4 years old. PIV-3 causes bronchiolitis and pneumonia, mostly in children younger than 12 months. Annual hospitalization rates for PIV-1, PIV-2, and PIV-3 in the USA are estimated to be 5,800 to 28,900, 1,800 to 15,600, and 8,700 to 52,000, respectively. PIV-1 causes a high incidence of croup in the autumn. PIV-2 outbreaks usually follow PIV-1 outbreaks. The seasonality of PIV-3 infection is in spring and summer (17).

Influenza: The seasonal epidemics of influenza are the consequence of antigenic changes in influenza viruses. The hospitalization rate due to severe disease is 3 per 1,000 in children aged 6 to 23 months and 9 per 1,000 in children less than 6 months (27). A recent multicenter study in Japan, Russia, and Michigan (USA) suggests that vaccination of school-aged children lowers the incidence of respiratory diseases. The recent surge of the new influenza virus (H1N1) clearly revealed that viral infection can result in pneumonia with a poor outcome in all pediatric age groups (28-30). The seasonality pattern of influenza differs around the world. Seasonal epidemic peaks occur in mid-winter in temperate climates. The seasonal patterns of tropical regions can extend over the whole year (31).

Adenoviruses: Based on their physical, chemical, and biological properties, adenoviruses have been classified into six groups (A-F). Accurate incidence statistics for adenoviral infections are unknown because nearly all cases are seen by general practitioners. Adenovirus infection is very frequent, accounting for 2%-5% of all respiratory infections (32). In several studies on children under 5 years old in Germany, Brazil, India, and Jordan, 12.9%, 6%, 1.5%, and 37%, respectively, of all examined children were reported to suffer from adenovirus infections.
The incidence of this pathogen in Jordanian children was significantly higher than in other countries (33-36). The incidence of adenovirus-associated respiratory infections rises in late winter, spring, and early summer, but adenovirus infections can occur all over the year (32).

Human Rhinoviruses (HRV): Rhinoviruses are well-known causes of upper respiratory tract infections. An Australian study reported that nearly half of the patients suffered lower respiratory tract infections. Several studies in Australia, Korea, and Jordan report the prevalence of HRV in children as 44%, 5.8%, and 11%, respectively (36-38). Rhinovirus was not associated with any specific season. It was the most common diagnosis in Indian children, in both dry and rainy seasons (39). A study in the United States reported that rhinovirus infections occurred year-round; however, 40% of all cases were detected in spring. This supports the previous interpretation that rhinovirus infections peak in spring (40-42).

Measles: Pneumonia is the most serious complication associated with measles. Pneumonia due to measles occurs in 16-77% of hospitalized children and 2-27% of the children in community-based studies. In addition, pneumonia is recognized in 56-86% of all deaths due to measles (43). There were more than 2.5 million deaths due to measles in 1980, before the general utilization of the measles vaccine in developing countries. By 1999, measles mortality had declined to about 873,000 deaths (44). A measles mortality reduction strategy was carried out in 2001 in the 47 countries with the highest disease burden. The strategy consisted of expanding routine coverage of the first measles vaccine dose, provision of an additional opportunity through complementary immunization activities, suitable case management, and improved surveillance. The widespread application of this strategy, particularly in African countries, led to a 60% decrease in measles deaths, from 873,000 to 345,000, between 1999 and 2005 (45).

New viruses: Human metapneumovirus (hMPV) was recognized in 2001 as a respiratory tract pathogen that caused a major outbreak of both upper and lower respiratory tract infections affecting infants and children (46). Human bocavirus (HBoV) has been identified as a cause of respiratory tract diseases since 2004; it belongs to the family Parvoviridae (47). The newly recognized viruses human metapneumovirus (hMPV) and human bocavirus have been detected in 8-12% and 5% of pneumonia cases in children, respectively. RSV, hMPV, and HRV have been identified as the leading causes of children's pneumonia in developed countries (48-50).

Other viruses: Varicella zoster virus, Herpes simplex virus, Cytomegalovirus, measles virus, and Enteroviruses are other causes of RTIs (22).

Bacteria

Streptococcus pneumoniae: S. pneumoniae is a frequent cause of morbidity and mortality among children all over the world, especially in developing countries (51-53). The peak age of this infection is in children under 5 years old. The Centers for Disease Control and Prevention (CDC) has analyzed invasive pneumococcal disease (IPD) in the USA since 1994. CDC data indicate that children less than 2 years of age have the highest prevalence of invasive disease (54) (Table 1). The incidence of invasive pneumococcal disease in children <5 years old declined from 77 to 22 new cases per 100,000 (74%) from 1997 to 2008, which is better than the 2010 target of 46 cases per 100,000 population.
The incidence of penicillin-resistant pneumococcal infections in children less than 5 years old decreased from 16 to 7 new cases per 100,000, while the 2010 target was 6 cases per 100,000 (55). The 2020 targets for children less than 5 years are 12 cases of invasive pneumococcal disease per 100,000 population and a decline of penicillin-resistant pneumococcal infections to 3 cases per 100,000 population (56). The incidence of pneumococcal disease in Europe is lower than in the USA, but the disease rate varies from 14 cases per 100,000 in the Netherlands and Germany to more than 90 per 100,000 in Spain (57). The incidence of invasive pneumococcal infection in the Gambia reaches 500 per 100,000 in the first year of life and 250 per 100,000 in children under 5 years of age (58). A study in Kenya reported that the prevalence of pneumococcal bacteremia in children less than 5 years old was 597 per 100,000 (59). The seasonal pattern of pneumococcal infection points to winter, the peak months being December and January (3-5 times more cases than in August) (60). Senstad et al. in Norway also reported a peak in January and a small summer peak of pneumonia hospitalizations (61). Pneumococcal infection peaks in winter in temperate climates. This seasonality is due to several factors, such as indoor crowding, low humidity, associated viral infections, cold weather, and air pollution (62).

Haemophilus influenzae: Haemophilus influenzae type B (HIB) remains the second most important bacterial pathogen of pneumonia (22). Almost one in 200 children less than 5 years old developed invasive HIB disease, and nearly two-thirds of HIB infections occurred in children under 18 months. In 2009, a study on children less than 5 years old in the United States showed 32 cases of invasive HIB disease; in addition, H. influenzae serotypes were detected in 178 cases. Most cases were found in unvaccinated or incompletely vaccinated children. The incidence of HIB disease has fallen by 99% since vaccination (63). HIB vaccination has been available since 1990 and is recommended for children younger than five years (64). In 2010 and 2011, HIB caused just 12% of H. influenzae cases. From 2010 to 2011, the incidence of invasive non-HIB H. influenzae disease was 0.60 cases per 100,000. The highest incidence was seen in children under one year of age (3.14 per 100,000), followed by children between one and four years old (1.50 per 100,000) (65).

Staphylococcus aureus: After S. pneumoniae and H. influenzae, Staphylococcus aureus is the third most important bacterial organism in pneumonia (22, 66). The incidence of staphylococcal infection among children in the UK has risen in the past decade (67-69). There are limited data about MRSA infection in children in developing countries (22).

Mycoplasma pneumoniae: Mycoplasma pneumoniae (MP), categorized as an atypical pathogen, is one of the most important causes of lower respiratory tract infections (LRTIs) worldwide. The epidemiological and clinical features of M. pneumoniae infection, including cyclic epidemics and lymphopenia, are similar to those of viral infections such as influenza. Although MP is identified as the main cause of pneumonia in school-aged children, the highest prevalence is seen in children between the ages of 4 and 6 years, according to a study in Korea. MP infection is endemic in many countries of the world, but epidemics occasionally occur every 3 to 7 years. In Korea, epidemic periods lasting 12 to 18 months and 3- to 4-year cycles of MP pneumonia have been recognized since 1980 (69).
One study in India, conducted on 75 children with LRTIs, reported a 30.7% prevalence of M. pneumoniae (70). In England, epidemics last approximately 18 months at 4-year intervals. The seasonal peaks of infection occur from December to February (71, 72). One study in Iran investigated one hundred patients with acute LRTIs and found 10 PCR-positive cases of M. pneumoniae, a prevalence of 10%, including 6 of 62 hospitalized patients and 4 of 38 outpatients (73).

Results

Almost 150 million new episodes of pneumonia are identified per year worldwide, more than 90% of which occur in developing countries. Nearly 30% of total annual deaths occur in children younger than 5 years old. Viruses remain the most common cause of RTIs. S. pneumoniae and HIB are the main causes of bacterial pneumonia in the world; however, infections due to many of these pathogens can be prevented.

Conclusions

Widespread immunization against influenza, measles, bacillus Calmette-Guérin (BCG), and now pneumococcus has been related to the decline of LRTIs in children.
2019-03-11T13:11:49.952Z
2013-03-02T00:00:00.000
{ "year": 2013, "sha1": "e127f58a57e6e94f1f9a5d125a0714386e599d4a", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.17795/compreped-10273", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "e50fb1649670cf3ce5361c488dcf93d6bc005fe5", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
214807199
pes2o/s2orc
v3-fos-license
Unraveling Plant Natural Chemical Diversity for Drug Discovery Purposes

The screening and testing of extracts against a variety of pharmacological targets in order to benefit from the immense natural chemical diversity is a concern in many laboratories worldwide. Several successes have been recorded in finding new actives in natural products, some of which have become new drugs or new sources of inspiration for drugs. Yet, in view of the vast amount of research on the subject, it is surprising that more drug candidates have not been found. In our view, it is fundamental to reflect upon the approaches of such drug discovery programs and the technical processes that are used, along with their inherent difficulties and biases. Based on an extensive survey of recent publications, we discuss the origin and the variety of natural chemical diversity as well as the strategies that have the potential to embrace this diversity. It seemed to us that some of the difficulties of the area could be related to the technical approaches that are used, so the present review begins by synthesizing some of the most used discovery strategies, exemplifying some key points, in order to address some of their limitations. It appears that one of the challenges of natural product-based drug discovery programs should be an easier access to renewable sources of plant-derived products. Maximizing the use of the data, together with the exploration of chemical diversity while working on a reasonable supply of natural product-based entities, could be a way to answer this challenge. We suggest alternative ways to access and explore part of this chemical diversity with in vitro cultures. We also reinforce how important it is to organize this worldwide knowledge and make it available in an "inventory" of natural products and their sources. Finally, we focus on strategies based on synthetic biology and syntheses that allow reaching industrial-scale supply. Approaches based on the opportunities lying in untapped natural plant chemical diversity are also considered.

Drugs and Natural Products

Several reviews, like the updated survey from Newman and Cragg (2016), have pointed to the fact that many drugs on the market are of natural origin; these authors stated that, out of the 1,328 new chemical entities approved as drugs between 1981 and 2016, only 359 were purely of synthetic origin. Of the remaining ones, 326 were "biological" entities (peptides of more than 50 residues, including therapeutic antibodies), and 94 were vaccines. A little less than half of those new drugs (549, exactly) were of natural origin or derived from or inspired by natural compounds. Furthermore, in the anticancer area, out of the 136 approved nonbiological compounds from the same period, only 23 were purely synthetic (i.e. not derived from natural compounds nor natural compounds themselves) (Newman and Cragg, 2016). Natural origin can have different definitions, and these authors accounted for three categories: unaltered natural (pure) products; defined mixtures of natural products (NP) and natural product derivatives isolated from plants or other living organisms such as fungi, sponges, lichens, or microorganisms; and products modified by medicinal chemistry.
There are many examples: anticancer drugs such as docetaxel (Taxotere™), paclitaxel (Taxol™), vinblastine, podophyllotoxin (Condylin™), or etoposide; steroidal hormones such as progesterone, norgestrel, or cortisone; cardiac glycosides such as digitoxigenin; antibiotics like penicillin, streptomycin, and cephalosporins [see Ross (1999) for more examples]. Furthermore, Rodrigues et al. (2016) pointed to the fact that fragments derived from natural structures are a source of diverse molecules from which new drugs can be designed, thanks to the fragment-based drug discovery approach (Erlanson et al., 2016; Mortenson et al., 2018; Yñigez-Gutierrez and Bachmann, 2019).

Screening for New Drugs and Discovery Approaches

Besides the understanding of pathological processes, the source of molecules has been a main concern for the pharmaceutical industry. Vast libraries of compounds have been established in order to feed the research. For example, in midsize pharmaceutical companies, it is common to find libraries of 30,000 up to 500,000 compounds, while for big pharmas, the numbers are more in the 500,000 to several million range (Macarron et al., 2011). To our knowledge, this is also the case for the National Chinese Compound Library in Shanghai, China (http://en.cncl.org.cn/). Finally, national or transnational efforts have been reported to create such repositories of compounds for use in academic screening programs: see Horvath et al. (2014) in Europe and Thornburg et al. (2018) for the NIH/NCI effort. In addition, vendors are also selling libraries of compounds of "large" diversity that they build according to different principles (Boss et al., 2017). Several publications deal with how the compounds are chosen (Langer et al., 2009), whether they follow Lipinski rules (Lipinski et al., 2001; Lipinski, 2003) or not, whether they are virtual (Glaab, 2016) or real, whether they are systematically tested on all the targets, how they can be organized in subclasses of compounds designed to potentially interact with channels, receptors, or enzymes, etc. Furthermore, the composition of the library in relation to the main categories of molecules (small synthetic compounds, drug-like organic compounds, peptides, proteins, sugars, nucleosides, or natural compounds) can vary greatly as a function of the "pharma company culture", that is to say, the compounds that have already been synthesized in a given company as well as the "sensitivity" of the medicinal chemists and screening people. When the decision to incorporate natural products is made, pure well-defined compounds or extracts are selected according to different criteria: pharmacognosy, ethnopharmacology, or even traditional knowledge. Because of the traditions existing in the uses of botanicals and medicinal plants, this empirical knowledge has accumulated for ages and passed through generations. Modern pharmacology has explored and validated probably only a minor part of this knowledge through attempts at rationalizing the use of plants as sources of drugs. This first possible approach would be a way to guide some drug discovery projects. Another, totally different approach, based on the use of high-throughput screening (HTS), emerged one or two decades ago. It aimed at exploring systematically the immense chemical diversity in secondary metabolites and was based on the technological developments of discovery tools such as miniaturization and automation (Atanasov et al., 2015).
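As a concrete illustration of the Lipinski-rule triage mentioned above, here is a minimal filter sketch using RDKit; the thresholds come from Lipinski et al. (2001), but the one-violation tolerance and the example input are common conventions chosen here for illustration, not the practice of any specific library vendor.

```python
# Minimal rule-of-five triage filter (thresholds from Lipinski et al., 2001);
# requires RDKit. The one-violation tolerance is a widely used convention,
# applied here as an assumption.
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def passes_rule_of_five(smiles):
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:          # unparsable structure: reject
        return False
    violations = sum([
        Descriptors.MolWt(mol) > 500,
        Descriptors.MolLogP(mol) > 5,
        Lipinski.NumHDonors(mol) > 5,
        Lipinski.NumHAcceptors(mol) > 10,
    ])
    return violations <= 1   # one violation is commonly tolerated

print(passes_rule_of_five("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin -> True
```

In practice, such a filter is only one of the selection criteria discussed above, alongside diversity, target class, and company culture.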
In this sense, the increase in the number of compounds tested simultaneously, together with the scientific rationalization of the selection of those compounds thanks to the growing capacities of chemoinformatic approaches (Larsson et al., 2005; Larsson et al., 2007), emerged as trends in HTS.

Strategies for Identification of Bioactive Compounds From Crude Extracts

When reviewing the literature on how discovery of plant-derived actives is performed, the following pattern emerges (Weller, 2012; Gupta, 2015; Heinrich et al., 2018). Plants are collected at precise geo-localized sites. The collection may include the whole plant or any part such as leaves, stems, bark, seeds, or roots. Then, the botanical material is dried and powdered using mechanical means, such as grinders. Those powders are then extracted with solvents at different temperatures or of increasing polarities, with sequential extraction procedures in cases where, for example, the chemistry of the active compounds is unknown. These first steps are important to consider, as the extraction method might influence the chemical composition of the extract and, consequently, its biological activity. Then, the extracts are dried under low pressure, and the final solid residue is taken up in the minimal amount of a biologically compatible solvent, often DMSO. The next steps include the testing of the extracts in 96- (or 384-) well plates against the biological targets of the program. These biological tests are often a cloned enzyme catalytic assay, a receptor binding assay, a protein-protein interaction assay, or even a whole pathway (Ishibashi and Ohtsuki, 2008; Xie et al., 2018), to name but a few. Whole-cell biological testing has also been reported, against cancer cells (Mazzio et al., 2014; Kant et al., 2016), virus-infected cells (Yi et al., 2004; Dai et al., 2012), or microorganisms (Correia et al., 2008; Figueroa-López et al., 2014; St-Pierre et al., 2018). Extracts showing activity in those tests are selected and submitted to fractionation by chromatography. Each fraction (typically around ten per extract) is tested in turn in the same assay, and the active fractions are subjected to one or several extra fractionations, often using alternative chromatographic conditions. Some of these experiments are based on an HTS environment. HTS methodologies have provided impressive progress in terms of increased assay speed and lower price. Indeed, robots can handle several thousands of tests during a workday. However, the initial enthusiasm for HTS (Harvey and Cree, 2010; Sarker and Nahar, 2012), when applied to crude extract libraries in targeted assay systems, has been facing several issues, as stated recently by Thornburg et al. (2018). In fact, HTS techniques do not really modify the discovery process itself. For example, starting with 2,000 extracts, which is a modest number considering (a) the number of plant species available and (b) the number of compounds in an HTS campaign, might result in 10% actives, in the best case. Thus, the next step would be 200 hydrophobic columns, with the collection of about 10 fractions per chromatography, and then 2,000 tests. As exemplified, the real bottleneck in this whole process is the parallel fractionation of the actives; as of today, only partial progress in its automatization has been reported over the last years (Sharma and Gupta, 2015). Other strategies have been developed that delivered results.
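To make the back-of-envelope calculation above concrete, the short sketch below generalizes it: it propagates the text's quoted figures (2,000 extracts, ~10% actives, ~10 fractions per column) through successive fractionation rounds. The second round is an assumption, reflecting the remark above that actives often need one or several extra fractionations.

```python
# Back-of-envelope assay burden for bioassay-guided fractionation, using the
# figures quoted in the text. All parameter values are illustrative.
def fractionation_burden(n_extracts, hit_rate=0.10, fractions_per_column=10,
                         n_rounds=2):
    assays = n_extracts          # every starting extract is tested once
    columns = 0
    actives = n_extracts
    for _ in range(n_rounds):
        actives = round(actives * hit_rate)        # actives carried forward
        columns += actives                         # one chromatography each
        assays += actives * fractions_per_column   # every fraction is re-tested
    return assays, columns

# Round 1 alone already adds 200 columns and 2,000 extra assays, matching
# the figures in the text; two rounds give (4200 assays, 220 columns).
print(fractionation_burden(2000))
```

The point of the exercise is that each fractionation round multiplies the chromatography and assay load, which is exactly the bottleneck noted above.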
In their detailed review, Atanasov et al. (2015) classify the strategies to identify bioactive compounds from plant extracts into five groups: the bioactivity-guided fractionation strategy previously mentioned, the similar synergy-directed fractionation strategy, the metabolic profiling strategy, the metabolism-directed (biotransformation-focused) strategy, and finally the direct phytochemical isolation strategy. The group of metabolic profiling approaches has a more detached relationship with bioactivity, as these approaches are not focused on single compounds. They were developed during the last 10-15 years in plant natural products. Using highly sensitive and reproducible analytical methods, they allow the correlation between the chemical profile (qualitative or quantitative) and bioactivity data (Zampieri et al., 2017; Parrot et al., 2018; Wolfender et al., 2019), with recent progress in the field of data analysis and integration at the extract level (Wolfender et al., 2019). Finally, the group of direct phytochemical isolation approaches focuses on the comprehensive chemical characterization of the plant extract and the isolation of novel scaffolds without immediately evaluating their bioactivity.

Successes and Limitations

Some of the successes in terms of new drug development have already been mentioned in Drugs and Natural Products. But in terms of screening and strategy for finding an active compound in an extract, as an enzyme inhibitor or a protein/protein interaction inhibitor, many successes have also been reported. Some examples of screening results of extracts with those approaches (Atanasov et al., 2015) can be mentioned here, but being exhaustive is impossible, as literally hundreds of such tests were performed [e.g. from our laboratories (Bousserouel et al., 2005; Lautié et al., 2008; Litaudon et al., 2009a; Litaudon et al., 2009b; Columba-Palomares et al., 2015; Catteau et al., 2017; Olivon et al., 2018) and from hundreds of others]. Some recently reported examples of the inhibitory activities of plant extracts on specific enzymes are given in Table 1. Of note, the panel of enzyme activities is large and diverse; the examples in Table 1 reflect the current trend in terms of the enzymes' field of origin for this type of work: they mainly come from the cancer and diabetes/obesity fields. It is important to say that an unknown number of failed attempts remain unpublished. For the last thirty years, screening for new molecules, whether candidate drugs per se or hit compounds to be modified, has been one of the two main sources of drugs for the pharmaceutical industry. These screening strategies delivered mixed results, and several factors dramatically complicated the analyses of these results.

[Table 1 fragment: Ascophyllum nodosum*** (Austin et al., 2018).]

The main ones are, in our opinion: the way diseases can be simplified (or not) to a molecular target that is amenable to HTS; the choice of the molecules which can be screened in these assays; the quality of the assays and/or their sensitivity; the statistical considerations to determine the threshold at which a compound is recognized as an active; and finally, the source of the screened molecules. We (Boutin et al., 2018) and others (Schmid et al., 1999; Dufresne, 2000; Thiericke, 2000; Mayr and Bojanic, 2009; Bodle et al., 2017) have presented our screening strategies on several occasions, and considering the diversity of the potential problems, it can be hard to identify a universal approach.
It is a fact that the promising HTS techniques have not delivered as many drugs as expected with regard to the price and the ambition of the solutions/organizations involved (Newman and Cragg, 2016).

Several Limitations in the "Classical" Strategies to Identify Bioactive Natural Products

The first type of limitation is related to the collections of living organisms and access to plant species, for example. The probability still exists that a plant collected at a given location would not be there the next time around, especially the whole plant. If bioactivities associated with this species are discovered during the process, it could be hard to find the same plant population again. In those cases, two choices are possible: to look for this species in another location, or to look in the same location for a closely related species with the help of botanists. Neither solution is entirely satisfactory: the change of harvest location can imply change(s) in secondary metabolite production, as their biosynthesis generally depends on control by both biotic and abiotic factors. Furthermore, the other species of the same genus might very well express different genes coding for proteins acting slightly differently in biosynthetic pathways, leading to different compounds or to the same compounds synthesized in different proportions (see below for a discussion of diversity). Moreover, if the collected species are not properly reported following the correct taxonomy, the harvest might be even more difficult to reproduce: therefore, a table of correspondence between the names mentioned in this review (and used in the articles of origin) and the accepted plant species names is provided in the supplementary data (Table S1).

Another type of limitation is related to the fact that groups performing bioguided discovery of new actives are often confronted with similar problems: (a) the active extracts, once fractionated, do not maintain the level of activity; (b) the whole long and difficult isolation process leads to a molecule that is mundane or has been known for decades; (c) the quantity of final product in the remaining active fraction is too small to hope for a structure elucidation; (d) going back to the location of origin, the same plant population is not found anymore, or, if the plant is found, the same extraction treatment leads to an apparent lack of active fraction(s) on the same target. Most of those events are not discussed in publications, obviously. Therefore, this bioguided approach, existing with small, mostly technical variations, is paved with difficulties that are more often than not discouraging, at least for industry or for big programs.

As discussed in many instances and particularly well summarized by Atanasov et al. (2015), one of the key starting points is the choice of the pharmacological assay used for the screening. The fact that the targeted disease is obviously more complex than the molecular target it has been simplified to is universally recognized, but alternative approaches remain elusive or still difficult to set up. For example, the disease could also be simplified to a phenotypic change of a cell originating from a diseased organ, using stem cell differentiation approaches (Bedut et al., 2016). After treating such a model with NP extracts (having potentially hundreds of compounds), reverse pharmacology should allow the actual target of the NP compound to be recognized, as the lysed cell extract is chromatographed to retain the target proteins (Raut et al., 2017).
Nevertheless, cellular phenotypic changes are extremely complex to understand, and this approach has been more than once extremely frustrating despite some successes outside of the NP recognition area (Nosjean et al., 2000; Graves et al., 2002). Actually, another limitation of this approach is linked to working with complex mixtures in NP discovery, as several compounds can sometimes contribute synergistically to the global action on a disease: for example, the compounds (at least three of them) derived from leaves of Salvia miltiorrhiza (red sage) are reported to act at many different levels of liver fibrosis (Ma et al., 2019). This kind of observation strongly argues against the strategy described here, which aims at the discovery of the main pure drug-like compound in a plant extract. In summary, such approaches (phenotypic screening and reverse pharmacology) would require disease models that are far from the currently available systems.

Limitations Related to the Need for Market-Compatible Amounts

In the domain of drug production, a fact is that around 60,000 tons per year of salicylic acid are produced synthetically worldwide (https://ihsmarkit.com/products/chemicaltechnology-pep-reviews-salicylic-acid-2003.html), and this figure translates to 80 billion pills. That is to say, in order to benefit patients, critical amounts of a potential drug need to be considered, and access to the identified NP, as well as its availability, is critical. For example, at the start of the Taxol™ story, the compound was isolated from Taxus brevifolia bark. Ten kilograms of bark were necessary to obtain the 2 g of pure compound needed for the treatment of one patient. For the clinical study, 12,000 trees were cut down to obtain the 2 kg necessary for the studies leading to the approval of the compound. In fact, the antineoplastic Taxotere™ is now produced by hemi-synthesis from the precursor 10-deacetylbaccatin III, which is isolated from leaves of Taxus baccata. In other words, an alternative had to be found to the initial procedure, which took too heavy a toll on trees of the initial species. Another interesting figure, dealing this time with herbals, is the annual demand for ginseng roots of around 50,000 tons (Mathur et al., 1999). These kinds of figures embody a major limitation on new drugs originating from NP that is sometimes disregarded by research laboratories: there is a real challenge for the use of NP in reaching a tonnage compatible with the drug market. Another reason why NP-based projects might be challenged for consideration by the industry is the small amounts in which natural compounds generally accumulate in plant tissues. Indeed, structural characterization as well as biological testing can be made more difficult. Nevertheless, much progress has been made in this area: in relation to structural characterization, for example, considering the growing sensitivity of the analytical methods and instruments, it will not be long before the microgram barrier for the determination of the structure of a compound is reached. The analytical techniques for structural elucidation of an unknown compound are based on a mixture of spectroscopy (infrared, UV, Raman), mass spectrometry, and nuclear magnetic resonance (proton and carbon-13 NMR). Nice examples of this task are described in various publications (Baker et al., 1990; El-Elimat et al., 2013), while difficulties are also discussed by Amagata (2010).
Advances in NMR techniques have been reviewed (Breton and Reynolds, 2013; Harvey et al., 2015; Gomes et al., 2018), and several examples of compounds whose structures were solved with less than 100 µg of pure compound were reported. In fact, the situation can differ from one plant to another or from one organ/tissue to another, even in a single plant species. Therefore, even if we can identify the many compounds in each sample and establish their structures, the feasibility of a systematic compendium of all plant chemical components should be considered. A careful understanding of the metabolic pathways of the tissues of the plant species from which the product has been isolated is also necessary. This is especially the case when hemi-synthesis is to be used for the NP's industrial production. Instead of using a vital organ of the plant in which the compound is present in fair concentration, one tries to find a precursor of the product in a renewable part of the plant, like the leaves. For the hemi-synthesis of Taxol™, 30 km² of T. brevifolia fields in the Yunnan province (China) constitute the main source of precursor (http://www.yewcare.com/index.php, cited in Malik et al., 2011). Certainly, the use of a renewable source of the natural product is desirable, and such sustainable and profitable solutions are found often enough, as proven by the 40% of the available drugs in the pharmacopeia that are of natural origin (Cragg et al., 1997; Newman et al., 2003; Newman and Cragg, 2007; Da Silva and Meijer, 2012; Newman and Cragg, 2012; Newman and Cragg, 2016). Many examples of drugs of natural origin exist, one being found in the still common use of plant sapogenins, alkaloids, or sterols for the production of steroids for human drugs (sex hormones, corticosteroids, contraceptive drugs, etc.). Other examples involving hemi-syntheses modifying the NP can be given: quinine (at least 100 tons per year), camptothecin (Asano et al., 2009), cocaine, camphor, vitamin B12, etc. Interestingly, at the other end of the spectrum, extremely simple molecules, also of NP origin, like salicylic acid for pain (J. B. Jin et al., 2017) or metformin for diabetes (Bailey, 2017), see their production reach industrial scales by total syntheses using standard organic chemistry.

The Need for Easy Access to Chemically Diverse Compounds

Testing as many compounds as possible with chemically diverse structures is important in order to have better chances of discovering new drugs (Firn and Jones, 2003). Indeed, the affinity of a drug for a target is the result of shape and electrostatic potential complementarity between the drug and the binding site (Bauer and Mackey, 2019), as well as of binding-kinetics-related properties such as desolvation and conformational changes upon binding. Ligand flexibility therefore plays an important role in identifying partially fitting chemicals as starting points to be further optimized by medicinal chemistry. Screened compounds need to be diverse in shape and electrostatics to match any of the 3,300+ binding sites listed in the pocketome (Kufareva et al., 2012), as well as diverse in structure to allow thorough optimization. It is noteworthy that the probability of identifying a hit decreases considerably with the increasing complexity of the ligand (Hann et al., 2001). The complexity of a molecule increases with its size and atom connectivity. As a result, the more complex the molecules are, the greater the number of molecules one should screen.
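Since the argument above turns on molecular complexity, here is a hedged RDKit sketch of simple complexity proxies of the kind used below to contrast NP with synthetic compounds (stereocentre count, sp3 carbon fraction, ring count). The two example SMILES are illustrative stand-ins for a flat synthetic heterocycle and a steroid-like natural product; the choice of proxies is one convention among several, not a standard from the cited papers.

```python
# Simple complexity proxies often used to compare NP and synthetic libraries;
# requires RDKit. The descriptor choice and examples are illustrative.
from rdkit import Chem
from rdkit.Chem import rdMolDescriptors

def complexity_proxies(smiles):
    mol = Chem.MolFromSmiles(smiles)
    return {
        "n_stereocentres": len(Chem.FindMolChiralCenters(
            mol, includeUnassigned=True)),
        "fraction_csp3": rdMolDescriptors.CalcFractionCSP3(mol),
        "n_rings": rdMolDescriptors.CalcNumRings(mol),
    }

print(complexity_proxies("c1ccc2ncccc2c1"))                       # quinoline
print(complexity_proxies("CC12CCC3C(C1CCC2O)CCC4=CC(=O)CCC34C"))  # testosterone
```

The steroid scores high on all three proxies while the flat heterocycle scores near zero, which is the kind of contrast quantified in the statistical analyses discussed next.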
Statistical analyses have been performed on natural products to study their chemical diversity (Firn and Jones, 2003; Hong, 2011), molecular properties (Feher and Schmidt, 2003; Quinn et al., 2008), scaffold diversity (Lee and Schneider, 2001; Grabowski et al., 2008; Yongye et al., 2012), and coverage of the chemical space (Rosén et al., 2009a). Comparisons between NP and other compound collections (Henkel et al., 1999; Lee and Schneider, 2001; Grabowski and Schneider, 2007; Rosén et al., 2009a) have also been performed, showing that NP differ from drugs and synthetic compounds in several aspects. NP are considered complex due to their number of asymmetric centers, their ratio of sp3 carbons in rings, and their number of ring junctions. Indeed, NP display in general more chiral centers than drugs, although their ratio of chiral centers to carbon atoms may be lower (Gu et al., 2013; Skinnider et al., 2017). Although there is no clear evidence that this complexity is necessary for their biological activity (Firn and Jones, 2003), it greatly impacts their specificity (Clemons et al., 2011). NP also display a great diversity in their scaffolds (Lee and Schneider, 2001; Grabowski et al., 2008; Yongye et al., 2012). For instance, the GreenPharma database (Do et al., 2015; Gally et al., 2017) contains 55,185 Murcko frameworks (Bemis and Murcko, 1996) for 302,000 natural products (18.3%), whereas the NCI and ChEMBL databases contain somewhat fewer NP-derived scaffolds: 13.1% and 13.6%, respectively. An interesting representation of the divergent diversity between bioactive NP and synthetic compounds can be seen in Figure 1. This graph, reported by Rosén et al. (2009a), shows, on a small sample of compounds (126,140 natural compounds versus 178,210 medicinal chemistry-issued compounds), the difference in distribution between these two populations after a principal component analysis. NP are also considered biologically diverse compounds that can hit a vast diversity of biological targets (Hong, 2011). Most of them have a single known target, with a mean of 2.66 targets and only a few highly promiscuous compounds. However, biological activity is reported for only 2% of the NP (Gu et al., 2013). An extended study using docking experiments of NP in 332 targets showed a mean of 2.14 targets per natural product, while, in comparison, drugs interact on average with 3-6 targets, and 50% of all drugs might exhibit activity against more than five targets (Mestres et al., 2008; Jalencas and Mestres, 2013; Hu et al., 2014). For all these reasons, there is a strong need for easy and organized access to chemically diverse structures in the form of libraries of compounds. It is necessary to keep a good balance between diversity, defined as the mean pairwise dissimilarity of the compounds, and structural redundancy, to ensure a greater diversity of hits at an overall hit rate in the range of 0.5 to 2% (and 20-40% within an active chemical series). In addition, these compound libraries need to be constantly enriched with compounds filling the gaps in the molecular space. Interestingly, Harvey et al. (2015) stated: "diversity within biologically relevant chemical space is more important than library size", this biologically relevant chemical space being defined by protein-binding sites for potential ligands.
Another point emphasized by these authors is that while the commercially available chemical space is wider than the explored natural product universe (the first is evaluated at 5 × 10⁹ compounds, while the second only represents 3 × 10⁵ compounds; Banerjee et al., 2015), NP are intrinsically more diverse. A major component of this comparison is that NP are more complex, from a chemical point of view, leading to greater shape diversity. However, NP might not be the best compounds to screen in HTS campaigns due to their complexity ("best" in a drug-discovery perspective). Indeed, their probability of being active on a random target is lower, and their chemical tractability is far from optimal. Although NP decently occupy the biologically relevant chemical space, and 80% meet the criteria to be considered drug-like compounds (Harvey et al., 2015), their complexity might be the bottleneck during the optimization phase of the lead compounds. Thus, acquiring as many diverse compounds as possible is necessary in drug discovery programs, and the exploration of existing natural chemical diversity would be a must. Nevertheless, not only the design of those libraries, but also the design of the assays and the understanding of the pathways that are responsible for the targeted pathologies need to be relevant and sound. In summary, the main idea is that if compounds as chemically diverse as possible are necessary to feed drug discovery programs, then a great deal of effort should still be invested in the exploration of natural chemical diversity because, in this particular context, natural chemistry is more advanced than organic chemistry.

After this introduction on the background of NP in the area of drug discovery, we aim at discussing and showing the extent of natural chemical diversity (Natural Chemical Diversity) as well as the strategies that have the potential to embrace this diversity (Different Strategies to Benefit From This Diversity), based on an extensive survey of recent publications. A preliminary survey of the main origins of recently reported natural compound structures was performed by taking a complete volume from a journal that is considered a gold standard in NP (J. Natural Products, 2018, vol 81). Then, three different tables mentioning the species of origin and the type of compounds newly reported in 2017 and 2018 in several journals selected for their specialization in the area (for example, Journal of Natural Products, Planta Medica, Tetrahedron Letters, or Natural Product Communications) were built upon this sampling. Finally, Different Strategies to Benefit From This Diversity offers some strategic and technical proposals based on our interpretation of the trends of the field, supported by recent ad hoc literature.

NATURAL CHEMICAL DIVERSITY

The chemical diversity of NP is known to be extremely wide, and it can be divided into several types, each representing a challenge for the chemist and the biologist.

Diversity of Origins in Natural Compounds

Diversity can be measured and quantified by many methods, some of which have been used for decades to qualify compound library diversity (Langer et al., 2009).
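As a minimal, hedged sketch of two such measures discussed above, the RDKit snippet below computes the number of unique Bemis-Murcko frameworks (as in the GreenPharma scaffold counts) and the mean pairwise Tanimoto dissimilarity on Morgan fingerprints (one common instantiation of "mean pairwise dissimilarity"). The two-compound mini-library, the fingerprint radius, and the bit count are illustrative assumptions.

```python
# Two common library-diversity measures: unique Bemis-Murcko frameworks and
# mean pairwise Tanimoto dissimilarity on Morgan fingerprints; requires RDKit.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from rdkit.Chem.Scaffolds import MurckoScaffold

def library_diversity(smiles_list):
    mols = [Chem.MolFromSmiles(s) for s in smiles_list]
    # Unique Murcko frameworks, canonicalized as SMILES.
    scaffolds = {Chem.MolToSmiles(MurckoScaffold.GetScaffoldForMol(m))
                 for m in mols}
    fps = [AllChem.GetMorganFingerprintAsBitVect(m, 2, nBits=2048)
           for m in mols]
    dissimilarities = [1.0 - DataStructs.TanimotoSimilarity(fps[i], fps[j])
                       for i in range(len(fps))
                       for j in range(i + 1, len(fps))]
    return len(scaffolds), sum(dissimilarities) / len(dissimilarities)

# Illustrative mini-library: caffeine and a flavone core.
library = ["CN1C=NC2=C1C(=O)N(C)C(=O)N2C",
           "O=C1C=C(c2ccccc2)Oc2ccccc21"]
n_scaffolds, mean_dissimilarity = library_diversity(library)
print(n_scaffolds, round(mean_dissimilarity, 2))
```

On a real collection, this is the kind of calculation that supports statements such as "18.3% Murcko frameworks" quoted above.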
Our first survey showed the main origins of the molecules reported as natural compounds in 2018: they were classified between those coming from animals (2.6%), from fungi and lichens (9.3%), from microorganisms (12.9%), from marine organisms in the broadest sense of the term (13.2%), and from plants (44.1%), while the 19 papers dealing with partial or total syntheses of natural compounds and about 20 others describing various methodologies of analysis were put aside. As shown, in 2018 plants were still the main source of "new" natural compounds. Then, by sampling the recent ad hoc literature, three different tables were constructed gathering the origins of new compounds reported in 2017 and 2018 in several journals selected for their specialization in the area. Although plants were our main focus, we thought it might be useful to also consider sources of diversity from other living kingdoms, as they also constitute very interesting sources to explore and because some of them are somehow related to plants: it is especially the case for compounds whose functions in plants are related to defense (Bednarek and Osbourn, 2009) and other ecological interactions, as compounds "optimized by evolution" according to Gunatilaka in his review on natural products from plant-associated microorganisms (Gunatilaka, 2006). Among the 240 papers that were reviewed, a handful described compounds isolated from venoms and toxins originating from marine organisms and animals (insects, snakes, and the like). Then, a few other papers described compounds coming from microorganisms, as can be seen in Table 2. Microorganisms, such as bacteria and yeasts, were traditionally important sources of antibiotics and are still a source of new peptides (Xue et al., 2018), but here compounds other than peptides can also be found that are close to the secondary metabolites naturally synthesized in those cells. New peptides are more often described in journals specialized in peptide chemistry and pharmacology, even if their source is a living organism. Table 3 gives examples of compounds isolated from lichens, fungi, and sponges. The chemical diversity of these compounds is interesting but, as of today, growing lichens or sponges is not much reported and might turn out to be difficult on a larger scale. On the other hand, secondary metabolites synthesized by microorganisms growing in extreme conditions might also be difficult to obtain in large quantities. It would therefore be interesting to dig deeper in this area, as has been done elsewhere for other purposes: biotechnological (Mokashe et al., 2018), industrial (Sarmiento et al., 2015), pharmaceutical (Irwin, 2010; Karker et al., 2016; Patel, 2016; Oliveira et al., 2018), or ecological (Casillo et al., 2018; Orellana et al., 2018). Finally, Table 4 gives many examples of compounds isolated from different plant species and different tissues of these species. As plants are one of the major sources of "new" compounds, the following sections will deal with their diversity.

Diversity of Plant Species

As mentioned, the compounds reported from plants (Table 4 of our survey) originated from 165 different plant species, from trees to ornamental plants, harvested in very diverse areas from Australia to Antarctica. In fact, a total of 146 different genera were represented from 82 different botanical families, only six of which are not Angiosperms (two ferns, two liverworts, a moss, and a spikemoss). Most of them, around 93%, are thus from the flowering plant group (Figure 2).
Furthermore, it is interesting to observe that the remaining 76 families from which these new compounds were reported represent a small part of the total of 416 botanical families defined in the last 2016 update of the Angiosperm Phylogeny Group (The Angiosperm Phylogeny Group et al., 2016). As illustrated, most of the new compounds of our survey were isolated from the superasterids and superrosids, major clades of the eudicots. Indeed, this small survey illustrates how a large proportion of plant species still remains underexplored, not only in the Angiosperms but across all taxa of the plant kingdom. Furthermore, it is important to consider that new species of plants are discovered quite often: for example, 2,034 new plant species were recorded in 2014, including at least one tree species (Mancuso, 2018). So far, 310,000 plant species have already been described, of which authors estimate that only 6% have been investigated pharmacologically and 15% phytochemically (Atanasov et al., 2015). This clearly indicates that even more compounds remain to be found in plants. It is also interesting to observe that reviews such as Solyomváry et al. (2017) teach us that plants from different taxa can also synthesize identical, although rather complex, compounds. Gene duplication and neo-functionalization leading to the extension of existing metabolic pathways are both part of the mechanisms that have been identified in plants as responsible for the diversification of secondary metabolites, together with the influence of ecological factors: for example, it has been suggested that, from a small group of precursors, plants would rapidly synthesize a full, changing range of highly diverse compounds that are "screened" for their biological activity afterwards, as mechanisms of adaptation that would help plants cope better with biotic and abiotic pressures (Moore et al., 2014). This ecological understanding of plant secondary metabolite diversification also contributes to the anticipation that the more plant species are described, the more diversity is to be found in the end products of these pathways, along with possibly valuable "new" compounds.

[Table 2 fragment: Thermoactinomyces vulgaris, Thermoactinoamide A (Teta et al., 2017); Trichoderma sp., Neomacrophorin X (Kusakabe et al., 2017).]

A recent interesting study (Henz Ryen and Backlund, 2019) discussed the chemical diversity of Angiosperm natural products and some of these evolutionary mechanisms leading to the diversification of chemical structures as a function of the group of secondary metabolites (flavonoids, tropane alkaloids, sesquiterpene lactones, and betalains). They use ChemGPS-NP, developed previously by their group (Rosén et al., 2009b), as a tool to localize compounds in the chemical property space, thereby "measuring" chemical diversity. In other words, the preservation of ecological niches and plant biodiversity will also serve our interest in terms of chemical diversity for possible applications. And, interestingly, at higher trophic levels, it will also contribute to biodiversity (Schuman et al., 2016), creating in turn other biotic pressures on plants, which may adapt by synthesizing other compounds!

Chemical Diversity in Plants Related to Space and Time

As plants are multicellular organisms with organs specialized for different functions, it seems logical to think that some biosynthetic pathways could be turned on or off depending on the part of the plant studied and that a certain level of diversity can exist within the plant tissues.
The main parts of the plants that are classically separated are roots, twigs (stems), leaves, flowers, fruits, and seeds. Of course, constraints of collection make some of these parts more suitable than others. Our survey in Table 4 details, for each one of the 165 reported plant species, the plant part from which each compound was isolated. Figure 3A summarizes the proportion of new compounds reported in the different plant parts in our survey. For comparison, Figure 3B similarly shows the organs/parts of origin of the active compounds reported from the 49 plants described in Ross's work (Ross, 1999; Ross, 2001). It seems that, in both cases, most of the studies are done on leaves. On the other hand, it is not clear, on a systematic basis, whether what was found in the other organs can be found as well, even in small quantities, in the leaves. Nevertheless, leaves are the most accessible part of any plant, and they are renewable, which allows preserving the whole plant. Obviously, the same is valid for fruits or flowers, but their availability may be subject to seasonality. Indeed, often enough, a compound is described from one part of a given plant. It seems interesting to emphasize that this does not necessarily mean that it is the main component or a major one. [Rows spilled here from Table 4: Aconitum apetalum, apetaldines; Allanblackia floribunda, xanthones (Mountessou et al., 2018); Alnus viridis, hydroxyalphitolic acid derivatives (Novakovic et al., 2017); Althaea officinalis (*) (Sendker et al., 2017); Amorpha fruticosa, amorphispironones.] In his books (Ross, 1999; Ross, 2001), the author described the way plants are used in traditional medicine. The different parts of a given genus can be used alone or in mixtures, in preparations ranging from infusion, maceration, decoction, juice, and dried powder to fresh organs (fruits, leaves) ingested as such. Astonishingly, those recipes were collected from scattered geographic places. This last observation suggests that traditional medicines around the world independently found similar remedies to similar diseases, sometimes with the use of related plant species. Another interesting source of intraspecific chemical diversity is the environment of the plant cells biosynthesizing the chemical compounds of interest: it is well known that compounds produced by the same plant species can vary in nature or quantity depending on the environment (localization or time of the year) or the part of the plant from which they have been extracted. Moore et al. (2014) pointed out in particular plant ontogeny, but also genetic and environmental variations, as major sources of diversity for plant secondary metabolites. The existence of chemotypes is another example of this intraspecific chemo-diversity, well described for example in plants producing essential oils. Factors like moisture, salinity, temperature, or nutrition levels are known to influence essential oil production (Sangwan et al., 2001), and the genotype can also significantly influence the chemotype, as was shown recently for Valeriana jatamansi Jones. The biosynthesis of natural products can also differ among different individuals from the same population.
This is the case, for example, when these compounds are related to antimicrobial activity, as summarized by Bednarek and Osbourn (2009) in their perspective article on chemical diversity linked with plant defense: these compounds can be synthesized constitutively as part of normal plant development, and stored in specialized tissues, or synthesized in response to pathogenic challenges through the activation of the transcription of specific genes of the corresponding biosynthetic pathways.

Diversity of Chemical Skeletons and Structures
Based on the number of genes, it has been estimated that the plant kingdom contains more than 200,000 different metabolites, with values for single species ranging between 5,000 and 15,000 (Trethewey, 2004; Fernie, 2007), values that are significantly greater than those of microorganisms (∼1,500) and animals (∼2,500) (Oksman-Caldentey and Saito, 2005). [Rows and legend spilled here from Table 4: Zizyphus jujuba, epicatechinoceanothic acids (Kang et al., 2017a); compound names in bold characters are those exemplified in Figure 1; compounds in green were extracted from stems or seeds, in red from roots, in blue from flowers or wood, and in black from leaves, whole aerial parts, rhizomes, fruit or bark; (*) indicates papers describing many different compounds in those plant parts; lines overlaid in gray exemplify similar genera expressing different compounds.] But it is not only the global absolute number of chemically diverse compounds that is interesting. Such diversity and such dynamicity are indeed a wonderful wealth of chemical structures, a source of inspiration for medicinal chemists, once the structure is carefully identified and the related pharmacological activity is screened. But it can also become a source of complexity for the phytochemist working on the structure or on the structure–activity relationship. Classically, within this chemical diversity of plant secondary metabolites, the nomenclature used by pharmacologists to classify the several families of natural compounds, such as polyketides, phenylpropanoids, terpenoids, steroids, or alkaloids, is based more on their biogenesis and the pathway they originate from (acetate, shikimate, mevalonate or methylerythritol phosphate pathways), or combinations thereof, than on the structures themselves. And even within a defined group, the diversity can be impressive: for example, the terpenoid family is suspected to contain at least 50,000 different molecules (Kirby and Keasling, 2009), while at least 12,000 flavonoids have been described (Henz Ryen and Backlund, 2019). Certainly, how "different" these molecules are could be further discussed but, as commented below, apparently minor differences (for example a methyl or a hydroxyl moiety) might dramatically change a molecule's pharmacological properties. In the data gathered in Table 4, as previously commented, it can be observed that even in the same plant species and the same organ, different compounds have been characterized, like jozilebomines and dioncophyllines in Ancistrocladus ileboensis leaves (J. Li et al., 2017d; J. Li et al., 2017c) or xanthohumol and α-acid derivatives such as humulones in Humulus lupulus flowers. Nevertheless, such compounds might only be slightly different from each other in terms of skeletons, as illustrated in the following example: the compounds whose structures are shown in Figure 4 were recently isolated from six different species of Euphorbia. They are all diterpenoids that slightly differ in their structures: ent-abietane derivatives (structures Z1 and Z2),
gaditanone (structure AA) (Flores-Giubi et al., 2017), ingenane derivatives (structure AB), premyrsinane and tigliane derivatives (structure AC), dideoxyphorbol esters (structure AD), sooneuphoramine (structure AE) (Gao and Aisa, 2017), a jatrophane analog (structure AF) (Rédei et al., 2018), and another abietane derivative (structure AG). These compounds all originate from the same biosynthetic pathways, where the cyclization reactions of the precursor geranylgeranyl diphosphate and several rearrangements allow many structural variants of diterpenoids to be produced. It is interesting to notice that, despite a homology in terms of basic scaffold (a phorbol ring system with some rearrangements), all the compounds are extremely different from each other from a chemical point of view (Table 4). This last statement requires underlining that the difference between two chemicals can be dramatic concerning their biological activities while minimal concerning their chemistry. For example, the chemical difference between testosterone and estrone is a saturation of the A ring of the cholesterol backbone. These minor chemical differences leading to massive differences in pharmacological potential have been the source of a never-ending debate among screeners in the pharmaceutical industry on what should populate chemical libraries: should minor variations of basic skeletons be included or not (in the primary screening), knowing that a missing methyl could lead to a loss of activity and vice-versa? In our view, it seems important to gather as many compounds as possible inside a chemical series, even if the diversity seems futile, because, by setting the minimum acceptable result at a weak but significant level, such as 1 to 10 µM (depending on the molecular target), hits can be found, and even minor differences in structures can lead to new leads. In this sense, working with phytochemical diversity as shown in Figure 5 becomes meaningful. Furthermore, most of these compounds would be difficult to obtain by conventional organic chemistry methods, not only because of the presence of several intramolecular bridges, but also because of the stereochemistry of the final product [see for example the discussion on isoprenoids (Bouvier et al., 2005)]. Secondary metabolites are the results of multienzymatic pathways, and all these enzymes have a strict stereo-specificity. These multiple possibilities in terms of spatial arrangement contribute to the wide range of pharmacophores lying in natural products. It seems interesting not only to mention the plant chemical diversity but also to show it with some interesting and diverse structures: Figure 5 already presented about 20 different chemical skeletons of NP that are reported in Table 4 (compounds appearing in bold). More diversity is presented in Figure 6. It should be noted that compounds issued from microorganisms are often, but not always, peptide-derived structures, often macrocyclic compounds (Newman and Cragg, 2015), such as the families of antibiotics found in Penicillium and the like, and thus are not the main purpose here. However, it is a fact that the structures presented in Figure 6, even if they are almost randomly chosen, are different from what the current state of the art in medicinal chemistry provides. This is not really surprising when considering that 83% of core ring scaffolds found in natural products cannot be found in libraries of synthetic compounds (Harvey et al., 2015).
It should be recalled that part of the interest in "discovering" such structures, if they have any biological activities, resides in our capacity to use medicinal chemistry to translate such complex structures into simpler molecules amenable to industrial production. At the same time, those structures might bear activity towards proteins whose inhibition has not yet been achieved with our current access to chemical synthons. This has been reported and discussed from two points of view: (a) the natural compound-derived fragments (Rodrigues et al., 2016) and (b) the list of compounds issued from natural skeletons: what Newman and Cragg (2016) called NP derivatives, among which 268 out of 1,328 new drugs can be found. Among the classical examples are vincristine and vinblastine as part of the vinca alkaloids (Zhou and Rahmani, 1992), statins (Sirtori, 2014), gliflozins (Burson and Moran, 2015), and ingenol mebutate (Alchin, 2014). It should also be noted here that the current process is moving more from "simple" hits, from whatever origin, to more complex molecules by means of medicinal chemistry decoration of these synthons. The process of lead hopping, as defined by Krueger et al. (2009) and Chakka et al. (2017), should theoretically permit mimicking one complex structure by another, simpler one. All this body of techniques and theories should be put to work together in order to gain new powerful compounds from new, NP-based approaches (Yñigez-Gutierrez and Bachmann, 2019). In fact, a survey of the data indicates two important features of NP, on the basis of this selection: (1) compounds are very diverse even if they include some common features (like, for instance, the particular cycloheptanic structure found in some of the main compounds from Euphorbia; see Figure 4), and (2) the high number of asymmetric carbons renders their synthesis by standard organic chemistry difficult if not impossible. For instance, some examples of those numbers are given in Table 5. Considering that the number of theoretically possible isomers is 2^n, n being the number of asymmetric carbons, in some cases the total number of possible isomers is in the range of several thousand (see numbers in Table 5; a one-line numerical check of this bound is given at the end of this section). Nevertheless, a series of impressive chemical papers reported complete syntheses of such compounds by "standard" synthetic organic chemistry, even if the up-scaling of such tours de force remains to be addressed (Kuttruff et al., 2014).

DIFFERENT STRATEGIES TO BENEFIT FROM THIS DIVERSITY
As previously mentioned, a deep interest lies in searching and exploring the immense plant chemical diversity for drug discovery purposes, but the strategies to do so need to be reevaluated. Indeed, most of the natural secondary metabolites mentioned herein are not (so far, and by far) easily synthesized. It is still through harvesting that we can use plants for discovery and development purposes or for industrial-scale production. Compounds with a superior pharmacological activity must be isolated from plant extracts in which they lie in minor amounts. Certainly, as previously discussed, this is a main bottleneck for many applications, as it is time-consuming, the "superior activity" can vanish in the process for several reasons, or a compound of already known activity can be rediscovered at the end of the process.
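As the check promised above, the short sketch below computes the 2^n upper bound for the two stereocenter counts quoted later in this essay for (+)-dimericbiscognienyne A and namenamicin. It only illustrates the formula; the bound ignores reductions from meso forms or ring constraints, and any other entries one might add would be hypothetical.

```python
# Theoretical upper bound on the number of stereoisomers: 2**n,
# where n is the number of asymmetric (stereogenic) carbons.
# The two example counts are the ones quoted in the text.

def max_stereoisomers(n_stereocenters: int) -> int:
    """Upper bound 2**n; real counts can be lower (meso forms, ring fusion constraints)."""
    return 2 ** n_stereocenters

examples = {
    "(+)-dimericbiscognienyne A": 12,  # 12 asymmetric carbons -> up to 4,096 isomers
    "namenamicin": 11,                 # 11 asymmetric carbons -> up to 2,048 isomers
}

for name, n in examples.items():
    print(f"{name}: {n} stereocenters -> <= {max_stereoisomers(n):,} stereoisomers")
```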
"Research and Discovery": In Vitro Culture Regarding plant cell culture, several recent reviews bring a new light to in vitro culture, for investigation purposes as well as for its uses as a valuable platform for high-value metabolite production (Wilson and Roberts, 2012;Moscatiello et al., 2013;Ochoa-Villarreal et al., 2015;Eibl et al., 2018). Examples are scattered in the literature with in vitro cultures producing compounds of interest either using dedifferentiated cells from callus or undifferentiated cells from meristematic cambial cells. From the calli, systems of plant suspension cell cultures can be generated like for example for acteoside production from Scrophularia stiata (Khanpour-Ardestani et al., 2015), rosmarinic acid from Satureja khuzistanica (Sahraroo et al., 2014), or carotenoid from Tagetes erecta (Benítez-García et al., 2014). On the other hand, tissue cultures (i.e. hairy roots) can also be developed from already differentiated cells. All these types of culture are generally developed for a particular purpose, often very restricted to a given compound in a given plant, such as for camptothecin production by Ophiorrhiza species (Asano et al., 2009), Schisandra chinensis lignans production (Szopa et al., 2017;Szopa et al., 2018) or boeravinone Y by Abronia nana . Several examples of cultures at a commercial scale have also been described validating its feasibility and scalability from lab-scale to large-scale (paclitaxel form Taxus spp. cultures, rosmarinic acid from Coleus blumei cultures, scopolamine from Duboisia spp. cultures (Wilson and Roberts, 2012) to name only but a few). Nevertheless, if the literature provides us with some very welldescribed examples of such tasks, it remains to be seen if these techniques are universal. In other words, what has been described to be possible to obtain a "large scale" plant cell culture is not necessarily applicable to the next cell culture of a different plant, and a fortiori, of a different organ of the same plant species. What has been considered as a promising approach remains in some cases a challenge, as no experimental process has been developed-or published-with a general usage purpose. Thus, the perfectly described process to obtain stem-derived callus or leaf-derived callus producing anticancer phenolic compounds from Fagonia indica (Khan et al., 2016) is only partially similar to the process to obtain callus from fruit pulp of varieties of apple producing high triterpenic acids (Verardo et al., 2017) or the process to obtain callus originating from seeds of Abronia nana (Kim et al., 2014), a desert plant found in North America to produce massive amount of boeravinone Y or even callus from Scrophularia striata for the production of acteoside (Khanpour-Ardestani et al., 2015). As seen above, many examples can be found in the literature, but the remaining question is how much the methodology varied from one example to the next in order to obtain such callus and then to obtain such cell suspensions producing the desired compound. Some general procedures for the establishment of dedifferentiated plant cell suspension cultures exist (Mustafa et al., 2011;Eibl et al., 2018) but with many specific adaptations in the function of species, organs of origins, and culture conditions. Probably because the authors concentrated mainly on the productivity and yield of the targeted compound and not on a larger picture applicable to a more general view of accessing and testing the chemical diversity of plants. 
If one considers the natural diversity lying in the plant species of our environment as a source of "new" chemicals, it would be wonderful to rely on a methodology universal enough to collect the biological material just once (or very few times) and then to rationalize the culture of the cells originating from the specific plant organ (leaves, stems, roots, etc.). The interest of these cell cultures is that the cells would be able to biosynthesize a large variety of compounds in quantities large enough to isolate and identify compounds with pharmacological activity and completely characterize them. At that stage, the culture size can be customized by expanding from several liters to several tens of liters of cell culture (hundreds if the initial results are promising and more biomass is needed to go on with testing). Finally, even modifications of the culture conditions could enhance chemical diversity (Jozwiak et al., 2013), or at least induce variations in the proportions of different secondary metabolites (Jozwiak et al., 2013; Akbari et al., 2018; Saad et al., 2018). The originality of our approach thus resides in using plant tissue and cell culture not for production at scale, for which some limitations exist, but to attempt to facilitate access to plant chemical diversity. With these perspectives, a repository associated with plant cell culture would be a valuable tool. And the versatility in terms of scaling the in vitro cultures would also allow bridging the gap between drug discovery and the first stages of development.

At the Development Stage: Systematic Inventories of Natural Products and Their Sources
Classically, the approaches used for drug discovery, detailed above for some of them, are quite specific: trying to find a compound in a plant organ that has some specific activity against a particular enzyme, receptor, or pathway. On the contrary, a systematic "inventory" of the NP existing in living organisms, in plant parts for example, would be of great interest both for the drug discovery aspects and the development aspects: for the discovery aspects, because it would allow a better use of the known natural chemical diversity, and for the drug development aspects, because it would allow changing the sourcing of the NP, keeping in mind how important the supply of a compound of natural origin is for a company. As shown in Tables 2–4, which consist of a sampling of recently reported works on natural compounds, this "systematism" is already used by a few groups who catalogued the chemical compounds in a given organ of a given plant (Batista et al., 2017; Sendker et al., 2017; Ma et al., 2018; Sharma et al., 2018), in a fungus (Chang et al., 2017), or in a microorganism (McMullin et al., 2017; Verastegui-Omana et al., 2017). In such cases, an idea of the possible diversity of those sources is given. A compendium of some plant compositions has been produced by I.A. Ross (Ross, 1999; Ross, 2001). Such an inventory, organized in a global database, would greatly facilitate access to the diversity of secondary metabolites of plants, for example. It would also somewhat ease any strategy based on chemotaxonomy, by better describing the filiations/relationships between the biosynthetic pathways in different genera and by facilitating access to some types of chemical skeletons in renewable, naturally producing sources such as leaves or fruits.
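To make the inventory idea concrete, here is a minimal sketch of what one record of such a database could look like and how it could be queried by plant part. The field names and example rows are illustrative assumptions, drawing on compounds mentioned in this survey, not an existing database schema.

```python
from dataclasses import dataclass

@dataclass
class InventoryRecord:
    species: str      # binomial name
    family: str       # botanical family
    organ: str        # plant part the compound was isolated from
    compound: str     # trivial or systematic name
    reference: str    # literature citation

# Hypothetical entries mirroring the kind of rows compiled in Table 4.
records = [
    InventoryRecord("Humulus lupulus", "Cannabaceae", "flower", "xanthohumol", "see Table 4"),
    InventoryRecord("Ancistrocladus ileboensis", "Ancistrocladaceae", "leaf", "jozilebomine A", "Li et al., 2017"),
]

# A chemotaxonomy-friendly query: all compounds reported from renewable organs.
renewable = [r for r in records if r.organ in {"leaf", "flower", "fruit"}]
for r in renewable:
    print(f"{r.compound} ({r.species}, {r.organ}) - {r.reference}")
```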
Furthermore, Table 4 indicates, in our view, the way the literature can be compiled from all the available sources to build a database based only on published articles describing one or several compounds from plant parts. In this line, the remarkable paper of Solyomváry et al. (2017) reviews the available literature on the compounds of the dibenzylbutyrolactone lignan family and describes some 91 compounds of this chemical family along with their origins in terms of plant species and plant parts. For many reasons, such a systematic inventory of plant chemical components, related to the tissues and the species from which they have been extracted, would be of great use, but it could be difficult to complete, even with modern and fast analytical tools. Indeed, it is the completion of such a compendium that is the real challenge, and the experience of the few already existing NP databases exemplifies that challenge. For example, the NAPRALERT experience (https://napralert.org), gathering data from more than 200,000 scientific papers, is very informative: comprehensive coverage is claimed from 1975 to 2004, while only 20% of the global data is covered from 2005 onwards due to budgetary constraints. Organisms, compounds, activities, or authors can be searched. In fact, the last decade has seen the development of several databases providing systematic collections of information that focus on the natural compounds themselves, offering the possibility of searching the structure, source, and mechanisms of action of the searched compounds. For example, DEREP-NP is a database that compiles structural data (Zani and Carroll, 2017). An interesting review from Xie et al. (2015) allows the comparison of fourteen of these databases focusing on NP, balancing their advantages and disadvantages. Among them, the updated version of a 2006 database, SuperNatural II, is a public resource (http://bioinformatics.charite.de/supernatural) with more than 325,500 natural compounds, offering 3D structures and conformers (Banerjee et al., 2015), which seems to outperform many others (Harvey et al., 2015; Xie et al., 2015). Another source of natural compounds is the Greenpharma collection (www.greenpharma.com/products/compound-librairies/#GPNDB) (Do et al., 2015; Gally et al., 2017).

Industrial Scale Production: Synthetic Biology and Organic Syntheses
The use of plant cell and organ culture for the production at the industrial scale of compounds with superior added value has been reported in various reviews (Wilson and Roberts, 2012; Imseng et al., 2014; Eibl et al., 2018). These reviews cite a large range of applications, from the pharmaceutical area (suspension cells of Pacific yew in 75 m³ stirred bioreactors delivering 500 kg/year of paclitaxel) to the cosmetic and food industries, like cell cultures of Malus domestica grown in 50 to 100 L production bioreactors. But still, it is acknowledged that some limitations exist: mainly the fact that time-consuming processes are involved, with possibly low titers, and the possibility of somaclonal variations appearing in the selected top-producing cell lines. Several solutions to avoid these kinds of limitation can be assessed (Trosset and Carbonell, 2015), but in our opinion, general strategies should consider other alternatives for production at the large industrial scale, depending on the kind of application.
For some specific NP of pharmacological interest, like podophyllotoxin, artemisinin, or plumbagin for example, a whole set of biotechnological approaches has been developed and described for the production of these valuable compounds at a larger scale (Lautié et al., 2010; Kayani et al., 2018; Roy and Bharadvaja, 2018). But once more, these strategies are quite specific, driven by only one identified compound and its specific biosynthetic pathway.

Synthetic Biology
As previously mentioned, one of the major requirements of this approach is to understand the pathways through which a particular compound is biosynthesized, thanks to the activity of a series of enzymes involved in a particular plant part. Within this last approach, the focus in plants is more on their capacity to synthesize unique scaffolds than on the end-products themselves. Indeed, the use of the recent integrative approaches based on "omic" analyses (metabolomics, proteomics, transcriptomics, and genomics) can be of great value: knowing the precursors and intermediates through the biochemical status of a tissue, identifying the key enzymes and the limiting steps of the pathways, and indirectly monitoring the function of the genes involved in these pathways and their regulation will contribute to deciphering the biosynthetic routes in planta (Cheallaigh et al., 2018; Scossa et al., 2018). The relative ease with which one can now obtain large-scale data has facilitated analyses at the level of the whole metabolic network (Paddon and Keasling, 2014; Ikram and Simonsen, 2017). For example, large amounts of transcriptomic data are now easier to access, as stated by Owen et al. (2017), making possible the identification of multistep pathways by coexpression analyses or untargeted metabolomics (a toy sketch of this coexpression idea is given at the end of this passage). Furthermore, the discovery that genes linked to biosynthetic pathways are organized in clusters has opened new opportunities, by adapting to plants methodologies initially developed for microorganisms, like systematic cluster mining algorithms (Owen et al., 2017). Scossa et al. (2018) recently reviewed the progress made in the understanding of plant biosynthetic pathways with the integration of metabolomics and next-generation sequencing, based on various families of compounds: for example, benzylisoquinoline and monoterpenoid indole alkaloids, cannabinoids, ginsenosides, or withanolides. They also emphasized the new insight that this area can bring to the field of synthesis of NP. They mention, for example, the intriguing case of caffeine biosynthesis, which evolved independently in several orders of eudicots: at least three metabolic pathways evolved separately, co-opting genes from different gene families, illustrating how biosynthetic pathways can evolve with land plant diversification (Scossa et al., 2018). After the genes involved have been identified, the reconstitution of the biosynthetic pathways of interest can be realized thanks to novel DNA construction technologies. It can be realized in a foreign host, which enables increased product yields. The choice of this host organism is key, as the goal is to develop an efficient platform for heterologous gene expression. Microbial hosts are generally considered more amenable than plants to fermentation processes (Atanasov et al., 2015). Among them, classical workhorses like E. coli or S. cerevisiae, or newcomers like Bacillus subtilis and Pseudomonas putida (Nikel et al., 2014; Loeschcke and Thies, 2015; Choi and Lee, 2020), can be cited.
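Before returning to host organisms, here is the toy coexpression sketch promised above: genes whose expression profiles correlate strongly across conditions are flagged as candidate members of the same multistep pathway. The expression matrix below is randomly generated placeholder data, not a real transcriptomic dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy expression matrix: rows = genes, columns = conditions/tissues.
# Genes 0-2 share a profile (a putative pathway); genes 3-4 are unrelated noise.
base = rng.normal(size=12)
expr = np.vstack([base + 0.1 * rng.normal(size=12) for _ in range(3)]
                 + [rng.normal(size=12) for _ in range(2)])

corr = np.corrcoef(expr)                            # pairwise Pearson correlation
candidates = np.argwhere(np.triu(corr > 0.9, k=1))  # strongly coexpressed pairs

print("Coexpressed gene pairs (candidate pathway members):")
for i, j in candidates:
    print(f"  gene{i} ~ gene{j} (r = {corr[i, j]:.2f})")
```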
Another interesting example is the recent high-cell-density fermentation strategies developed for heterologous production in Pichia pastoris. Then, basically, the strategy consists in cloning the genes of the enzymes of the pathway that have been identified; constructing a large plasmid (or family of plasmids) encoding those enzymes; transfecting a microorganism with the plasmid and growing it afterwards; and purifying the product. Several recent reviews detail how the technical advances in synthetic biology and multiplexed genome engineering allow for optimizing the design and synthesis of the pathways involved in NP production (Awan et al., 2016; Breitling and Takano, 2016; Carbonell et al., 2016; Smanski et al., 2016; Moses et al., 2017). Many such examples can be found, such as the reconstitution of curcumin synthesis in E. coli (Kang et al., 2018), polyunsaturated fatty acid production in the fungus Ashbya gossypii (Ledesma-Amaro et al., 2018), α-amyrin, lupulone or ginsenoside synthesis in S. cerevisiae (Dai et al., 2014; Yu et al., 2018; Guo et al., 2019), or the diversification of the carotenoid biosynthetic pathways (Umeno et al., 2005). But probably the best example of an economically feasible process is reported for the production of artemisinin at an industrial scale (Paddon and Keasling, 2014; Ikram and Simonsen, 2017). Alternatively, the developments in plant transformation and transfection technology, offering rapid and scalable biosynthesis, increasingly allow considering the use of plant-based expression platforms like Nicotiana or Arabidopsis spp. (Fuentes et al., 2016; Lu et al., 2017; Reed et al., 2017; Appelhagen et al., 2018). Indeed, they are considered genetically more flexible than the native plant sources and in some cases offer several advantages even over microbial hosts, which can lack the endogenous biosynthetic precursors of these NP or intracellular compartments, such as the endoplasmic reticulum, required for the functioning of enzymes like cytochrome P450s (Appelhagen et al., 2018). These advances in plant synthetic biology will increase the access to NP through new synthetic routes (Reed et al., 2017) but will also allow the synthesis of new-to-nature molecules and so expand the natural plant chemical diversity.

Organic Syntheses
A way to analyze the total syntheses that have recently appeared in the literature is to use simple criteria in order to evaluate the feasibility of such an approach in case similar compounds find their way to the clinic. We evaluated a set of recent publications dealing with the "total synthesis" of NP according to simple criteria: the number of cycles in the compounds, the number of carbons and heteroatoms (including sulfur) in those cycles, and the number of asymmetric carbons in these structures (Table 6). In this nonexhaustive set of publications, it was decided not to consider peptides and peptide-derived macrocycles (about a dozen structures). The next observation was that there was a surprisingly high number of bacteria-derived compounds, a feature that we did not notice in our previous surveys (Tables 2–4). Another parameter that allows judging the feasibility and scalability of the processes is the yield and the number of steps. In that sense, most of those works are exquisitely delicate enterprises.
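As a minimal sketch of how these three criteria (number of cycles, atoms in those cycles, asymmetric carbons) could be tallied automatically, assuming the open-source RDKit toolkit is available; the SMILES string is a placeholder for a quinine-like scaffold (stereochemistry omitted), not one of the Table 6 compounds.

```python
# Requires RDKit (https://www.rdkit.org). The SMILES below is a placeholder
# (a quinine-like scaffold), not one of the Table 6 compounds.
from rdkit import Chem
from rdkit.Chem import rdMolDescriptors

def synthesis_criteria(smiles: str) -> dict:
    mol = Chem.MolFromSmiles(smiles)
    assert mol is not None, "invalid SMILES"
    ring_atoms = {a for ring in mol.GetRingInfo().AtomRings() for a in ring}
    # includeUnassigned=True also counts stereocenters with unspecified configuration
    stereocenters = Chem.FindMolChiralCenters(mol, includeUnassigned=True)
    return {
        "rings": rdMolDescriptors.CalcNumRings(mol),
        "ring_atoms": len(ring_atoms),
        "asymmetric_carbons": len(stereocenters),
        "max_stereoisomers": 2 ** len(stereocenters),
    }

print(synthesis_criteria("COc1ccc2nccc(C(O)C3CC4CCN3CC4C=C)c2c1"))
```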
The success of those publications in terms of tour de force is obvious, but they also allow emphasizing the necessity of obtaining such general synthetic routes, as most of them were then used to provide analogs of the desired NP in each publication. Some of those compounds are devoid of asymmetric carbons, rendering the synthesis "easier", but still a challenge requiring several steps, with an overall poor yield. At the other end of the spectrum are compounds with a considerable number of asymmetric carbons, such as (+)-dimericbiscognienyne A with 12 asymmetric carbons, or namenamicin with 11 (Nicolaou et al., 2018a), and/or with a high number of cycles, even if they are not always fused with each other. Indeed, a series of three furan-based cycles (Samala et al., 2018) separated by alkyl carbon chains would not be particularly difficult to synthesize, depending on the decorations of those cycles, which introduce asymmetries and thus synthetic difficulties. Those data, when compared to the ones gathered in Table 5, show that, in these particular cases, access by chemistry to all the possible optical isomers would simply be impossible. These observations cast doubt on the possibility of using those synthetic routes at the industrial scale. On the other hand, the mastering of some steps, particularly the stereo-controlled ones, is key in cases where alternative hemi-synthesis solutions are adopted, starting from a more abundant (natural) intermediate compound. Finally, another point is certainly the growing number of synthetic routes that are explored, assessed, and validated to access some "common" features of those natural compounds. A review of this literature can be found in Li L. et al. (2018). Even if partial by nature, it shows the considerable number of routes that have been set up and that permit access to some of the main fused cycles found in compounds coming from different natural sources. [Rows spilled here from Table 6, listing compound, source, and the counting criteria (numeric columns garbled): suillusin, Suillus granulatus (mushroom); conosilane A (Yuan et al., 2018), Conocybe siliginea (mushroom); Mitrephora glabra (tree); (+)-chamuvarinin (Samala et al., 2018), Uvaria chamae (plant); (±)-aspidofractinine (Saya et al., 2018), Aspidosperma cylindrocarpon (tree); (+)-leucomidine A.] Another review summarized the ways spiroacetals, another common feature of many natural compounds, can be accessed. This last point strongly emphasizes the common nature, throughout the living world, of the basic enzymatic systems aiming at producing secondary metabolites from the same fundamental bricks, such as mevalonate or other isoprenoids. Nevertheless, it is also clear from this survey that chemistry is not, at the present time, the solution to the problem of scalability of NP production to an industrial level, even if these compounds were extremely active on a given disease and even if a large panel of examples of complete syntheses of NP has been presented (Kuttruff et al., 2014). As pointed out earlier in the present essay, at the research level, and even at the level of exemplification of chemical analogs of a given active compound, these approaches are necessary and important. Indeed, deciphering the various routes to some of those compounds might help design and simplify the overall structures, as is the case in standard, organic chemistry-based medicinal chemistry.
CONCLUSIONS
For decades, the interplay between the search for "new" drugs and NP has been strong, to the point where some fear that the destruction of native forests, leading to a reduction of plant diversity, would jeopardize our finding of new cures for old and new diseases. The present essay aimed to offer a global overview of the extent of the known chemical diversity, its access, and its use. Several approaches to chemical diversity were also discussed, maximizing, in our view, the possibilities of finding useful compounds for unmet human medical needs. As illustrated, plant natural chemical diversity is indeed immense. And the knowledge we have gathered on plants is only the tip of the iceberg, as exemplified in I.A. Ross's books (Ross, 1999; Ross, 2001), in which he gathered the many different chemical components found in some 40 plants. And this knowledge is certainly scattered all over the world. A unifying work should be done, under a simple format, which could be like the one presented in Table 3, completed by following the example of Solyomváry et al. (2017). [Rows spilled here from Table 6, listing compound, source, and the counting criteria (numeric columns garbled): a compound from Larrea tridentata (plant) (Martin et al., 2018); (±)-exotine B, Murraya exotica (plant); lanceolactone A (Acharyya and Nanda, 2018), Illicium lanceolatum (plant); bussealin E (Twigg et al., 2018), Bussea sakalava (plant); polyflavanostilbene B, Polygonum cuspidatum (plant); an "unnamed alkaloid" (Davison et al., 2018), Isatis indigotica (plant); 2-epi-narciclasine (Borra et al., 2018), Narcissus sp. (plant); (+)-psiguadial B (Chapman et al., 2018), Psidium guajava (plant); parvineostemonine (Gerlinger et al., 2018), Stemona parviflora (plant); arboridinine, Kopsia arborea (plant); adunctin B (Dethe and Dherange, 2018), Piper adunctum (plant); (±)-deguelin, Tephrosia vogelii (plant); englerin A (Hatakeyama, 2018), Phyllanthus engleri (plant); houttuynoid A (Jian et al., 2018), Houttuynia cordata (plant); (−)-mucosin (Nolsoe et al., 2018), Reniera mucosa (sponge); (rac)-renieramycin T (Kimura and Saito, 2018), Reniera sp.] This kind of tool would considerably ease the access to plant natural chemical diversity and should ideally be comprehensive, organized, and include data on plant species worldwide, from past to recent studies. Such a globalized database could furthermore be integrated with other ones, like genomic, phylogenetic, species occurrence, biosynthetic pathway, biological activity, or chemical classification databases (Allen et al., 2019), allowing researchers to mine the resources and correlate the information, hence empowering all kinds of research studies. This trend has been emphasized by several authors in their recent reviews (Atanasov et al., 2015; Harvey et al., 2015), stating that drug discovery from plants requires multidisciplinary approaches. Experiences from the past tell us how important it is, for drug discovery purposes, to access this wide diversity lying in the Plant kingdom, especially because it may be shrinking due to the rapid alterations of the biosphere. In order to fully access the whole chemical diversity without jeopardizing plant biodiversity, alternative ways to collect and store plant tissues can be explored, as for example the use of in vitro culture techniques allowing a renewable and sustainable access to plant chemical diversity.
As the final purpose is giving access to workable quantities of therapeutic compound(s), we suggested that the advances in synthetic biology, coupled with genomics and bioinformatics, can pave the way to possible future strategies for producing the compounds originating from this diversity. But the chemical diversity in the scaffolds of plant natural compounds is so wide that there is still room for different large-scale production strategies: from organic total synthesis for the simpler scaffolds, like ephedrine or metformin, which can be synthesized in a few steps, that is to say at a reasonable cost, to heterologous (plant) production for compounds with more complex scaffolds, like taxanes, and multistep biosynthesis; in between, even hybrid (multi-host) semisynthetic strategies can be imagined and developed.

AUTHOR CONTRIBUTIONS
EL and JB wrote the review with the help of PD (modelization) and OR (chemistry).

ACKNOWLEDGMENTS
We would like to thank Ms. Luana Gessica do Carmo da Silva for her help in preparing Figure 2 and Dr. Natalia Sayuri Muto for her help with the language editing of the manuscript. The Center of Agro-food, Pharmaceutical and Cosmetic Valorization of Amazonian Bioactive Compounds (CVACBA) and Prof. Hervé Rogez are also acknowledged.
2020-04-07T13:12:35.556Z
2020-04-07T00:00:00.000
{ "year": 2020, "sha1": "6cae7a747dffdda0d7b13d22509e465df50900d4", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fphar.2020.00397/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6cae7a747dffdda0d7b13d22509e465df50900d4", "s2fieldsofstudy": [ "Medicine", "Chemistry", "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Medicine", "Computer Science" ] }
271092625
pes2o/s2orc
v3-fos-license
Assessment of eye health programme reach by comparison with rapid assessment of avoidable blindness (RAAB) survey data, Talagang, Pakistan

Background
The purpose of this study was to quantify how much of the burden of visual impairment (VI) and unmet need in Talagang, identified by Rapid Assessment of Avoidable Blindness (RAAB) survey data, has been addressed by Community Eye Health (CEH) programme efforts.

Methods
A RAAB survey was carried out in November 2018, with 2,824 participants in Talagang Tehsil, Punjab, Pakistan, aged 50 and over. Census data were used to extrapolate survey data to the population. Alongside this, a CEH programme was launched, consisting of community eye screening and onward referral to rural health centres, or secondary or tertiary ophthalmological services, as required. This health intervention aimed to address the eye care needs surfaced by the initial survey. From 2018 to 2022, 30,383 people aged 50 or over were screened; 14,054 needed referral to further steps of the treatment pathway and more detailed data collection. Programme data were compared to estimates of population unmet needs. Main outcome measures were prevalence of VI and proportion of need met by the CEH programme, by cause and level of VI.

Results
Among those aged 50 and over, 51.0% had VI in at least one eye. The leading causes were cataract (46.2%) and uncorrected refractive error (URE) (25.0%). In its first four years, the programme reached an estimated 18.3% of the unmet need from cataract and 21.1% of that from URE, equally in both men and women.

Conclusions
Robustly collected survey and programme data can improve eye health planning, monitoring and evaluation, address inequities, and quantify the resources required for improving eye health. This study quantifies the time required to reach eye health needs at the community level.

Muhammad Zahid Jadoon 1, Zahid Awan 2, Muhammad Moin 3, Rizwan Younas 3, Sergio Latorre-Arteaga 4,5, Elanor Watts 4,6*, Marzieh Katibeh 4,7 and Andrew Bastawrous 4,8

Background
Globally, in 2020, it was estimated that 43.3 million people were blind (presenting visual acuity, VA, worse than 3/60 in the better eye), with nearly half of these cases due to cataract (17.0 million) or uncorrected refractive error (URE, 3.7 million) [1]. Cataract and URE combined are also responsible for 241.2 million of the 295.1 million people with moderate-severe visual impairment (VI). In addition, at least 509.7 million people are estimated to have near vision impairment due to uncorrected presbyopia. Cataract surgery and refractive correction are relatively simple interventions which could restore good vision to those affected. Provision of these services can be monitored via indicators including the prevalence of VI, effective cataract surgical coverage (eCSC) and effective refractive error coverage (eREC) [2]. eCSC and eREC have been recommended by a WHO-led expert panel for the assessment of eye care provision and Universal Health Coverage [3]. Standardised reporting allows consistent situational assessment, service planning, and comparison of groups and regions.
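Since eCSC and eREC are central to this study, a minimal sketch of how such effective-coverage indicators are typically computed is given below. The counts are invented placeholders, and the 6/12 "good outcome" threshold is the commonly used operational cut-off rather than a value taken from this paper.

```python
def effective_coverage(effectively_met: int, met_but_poor_outcome: int, unmet: int) -> float:
    """
    Generic effective-coverage indicator (the pattern used for eCSC and eREC):
    people whose need was met WITH a good visual outcome, divided by everyone
    in need (treated with good outcome + treated with poor outcome + untreated).
    """
    return effectively_met / (effectively_met + met_but_poor_outcome + unmet)

# Hypothetical eCSC example: 300 people operated with presenting VA >= 6/12,
# 100 operated with a poorer outcome, 600 with operable cataract left untreated.
print(f"eCSC = {effective_coverage(300, 100, 600):.1%}")  # 30.0%
```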
Relevant data are collected within the Global Vision Database/Vision Atlas [4], a key source of which is Rapid Assessment of Avoidable Blindness (RAAB) surveys [5, 6]. RAAB surveys assess the prevalence and causes of VI in people aged 50 and over, as well as estimating cataract surgical coverage and identifying barriers to cataract treatment, and have been undertaken in over 80 countries. The over-50s age group is prioritised for screening when targeting those with cataract and vision impairment, as approximately 73% of blindness and visual impairment cases are in this group, with cataract being the leading cause [7]. Sequential RAAB surveys have been used to analyse service provision [8], and RAAB data have elsewhere been combined with Ministry of Health data regarding surgeries performed to allow review of eCSC [9]. Community-based eye care programmes can improve access to eye care, especially among low-income and rural patient groups [10]. Community eye health (CEH) programmes have been carried out in Pakistan with Peek Vision, who have previously demonstrated the use of appropriate data management and iterative programme change to allow for continuous programme improvement, including improved adherence to referral [11]. Comparison of data collected during these large-scale vision programmes and by RAAB surveys allows identification of groups which have been successfully reached by the programme, or missed, as well as projection of how much programme activity would be required to eradicate avoidable blindness and visual impairment in the region. Here we compare RAAB survey data from Talagang, Pakistan, with CEH programme data, to assess programme reach.

RAAB survey
During November and December 2018, a RAAB survey was carried out in Talagang Tehsil, Chakwal District, Punjab, Pakistan. As per standard RAAB methodology, clusters were randomly and systematically identified, and then 50 people aged 50 or over were selected from within each cluster via compact segment sampling. This led to an initial sample of 2,889 people. Data were collected via door-to-door fieldwork at participants' homes. Assessment included VA measurement via a tumbling E chart, and examination with torch and direct ophthalmoscope. Pinhole vision testing and ophthalmologist examination were carried out for those with VA < 6/12. Dilated examination was carried out in those for whom a cause of visual impairment could not be found on undilated examination. This allowed estimation of the prevalence of blindness and VI, and their causes. Data included refractive error and presbyopia prevalence and spectacle wear coverage, as well as cataract surgical coverage (CSC), place of surgery, and barriers to cataract surgery.
CEH programme
Alongside the RAAB survey, a CEH programme was launched in November 2018 in the same area, in collaboration between the College of Ophthalmology and Allied Vision Sciences (COAVS), Peek Vision, and Christian Blind Mission (CBM). Communities were first sensitised by social organisers and Lady Health Visitors. Community eye screening was then carried out in Basic Health Units. Where indicated, screening participants were referred to rural health centres for refraction and provision of primary eye care. People with conditions requiring ophthalmological input were referred onwards to secondary or tertiary ophthalmological services. Hospitals were informed of referrals, and patients were sent automated text or voice messages reminding them of their appointments. The Peek software system was used to track these data, including the number of people screened, the screening quality, and the numbers of people referred, treated, and non-attenders/left-behind groups. This allowed continuous improvement of the programme to improve uptake, with a focus on equity and efficient use of the available resources. A more detailed summary of the programme has previously been described elsewhere [11].

Census data
Since the RAAB and CEH programmes were carried out, 2017 census data [12] have been made available for the Talagang population. These have been used in this analysis for extrapolation to the regional magnitude of visual impairment and blindness.

Data security
The storage and transmission of these data were carried out in line with a Data Protection Agreement with the local stakeholders, and followed the European Union General Data Protection Regulation (GDPR).

Data analysis
RAAB7 software and the Peek Capture smartphone application were used to collect data for the survey and community programmes, respectively. Data from RAAB survey automated reports were obtained from the RAAB repository with the permission of the PI of the project (COAVS, Lahore). The population age and gender composition of people living in Talagang Tehsil were obtained from 2017 census data and used to extrapolate RAAB sample results to the population living in the survey area (a toy numerical sketch of this extrapolation step is given at the end of this section). Anonymised programme data and RAAB data were processed using Stata 14.0 (StataCorp, Texas, US), R Studio (R Studio, Massachusetts, US) and Google Sheets (Google, California, US). R Shiny software (R Studio, Massachusetts, US) was used for further data management and figure creation.

Magnitude of visual impairment: RAAB survey
Of the 2,889 people who were invited to participate, 2,824 (response rate = 97.8%) took part in the RAAB survey. A summary of the results of the survey is presented in Fig. 1. Causes of blindness were avoidable in 97.6% of patients; the most common causes of blindness and visual impairment were untreated cataract, uncorrected refractive errors, diabetic retinopathy, glaucoma, posterior segment pathology and corneal opacities. In participants with untreated cataract, the main barriers to uptake of cataract surgery were fear (55.6%) and cost (38.9%). In those who had had cataract surgery, the outcome was good (18.4%) or very good (62.8%) in 81.3% of cases.
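As promised above, a hedged sketch of the extrapolation step: stratum-specific sample prevalences are applied to census stratum counts (direct standardisation). The stratum figures below are invented placeholders, not the 2017 Talagang census data.

```python
# Direct standardisation of sample prevalence to the survey-area population.
# All numbers below are illustrative placeholders, not the 2017 census data.

strata = {
    # (age_band, gender): (cases_in_sample, examined_in_sample, population_count)
    ("50-59", "F"): (60, 400, 12000),
    ("50-59", "M"): (50, 380, 11500),
    ("60+",  "F"): (150, 350, 9000),
    ("60+",  "M"): (140, 330, 8500),
}

estimated_cases = sum(pop * cases / examined for cases, examined, pop in strata.values())
total_pop = sum(pop for *_, pop in strata.values())

print(f"Estimated people with VI aged 50+: {estimated_cases:,.0f} "
      f"({estimated_cases / total_pop:.1%} of {total_pop:,})")
```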
Implemented CEH programme
In Talagang, in the Community Eye Health (CEH) and School Eye Health programmes combined, between 2018 and 2022, 420,147 people were screened; 65,874 were referred on to triage for further assessment; 31,454 people were prescribed spectacles; and 3,482 attended hospital services. Figure 2 shows an overview of the programme participants aged 50 or over seen during the first four years of the Community Eye Health programme. The eye health need surfaced by the RAAB survey and the amount of need that has been reached by the programme are compared in Table 1. Extrapolation of RAAB data estimates that 54.5% (39,233/71,950) of the 50+ population in Talagang had VI in at least one eye, and 41.8% (30,090/71,950) of people had VI in their better-seeing eye. As shown in Table 1, there were an estimated 39,233 people who had any VI or blindness in at least one eye at the beginning of the programme. The RAAB survey and CEH programme combined screened 9,178 (1,381 + 7,797) of those people (23.4%) during the first four years. Figure 3 shows the proportion of VI which was reached by the programme, by visual acuity status. In most groups, the proportion reached was between 20 and 26%, while the programme reached over a third of people with bilateral severe VI. Figure 4 visualises the causes of blindness identified by the RAAB survey and the distribution of eye conditions among the 513 blind people who were reached by the programme. As shown in this figure, untreated cataract was the leading cause of blindness, in 55% of cases in the survey and 61% of cases in the programme. URE was the second most common cause of VI (25.0%) but not a major cause of blindness. As shown in Fig. 4, while cataract was the leading cause of blindness in both settings, the distribution of other causes differed between the programme and the RAAB survey. The proportion of VI secondary to cataract or URE met by the programme is shown in Fig. 5: approximately 18.3% of the cataract surgical need and 21.1% of the need for refractive services in this area were reached during the first 4 years of the programme (an average of 5% per year).
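The headline reach figures can be reproduced with one line of arithmetic each; the sketch below uses only counts stated in the text and Table 1.

```python
# Programme + survey reach, using counts reported in the text.
estimated_vi_any_eye = 39_233   # extrapolated people 50+ with VI in at least one eye
reached = 1_381 + 7_797         # RAAB participants + programme triage attendees with VI

print(f"Overall reach: {reached / estimated_vi_any_eye:.1%}")  # ~23.4%
print(f"Blindness reach: {513 / 2_556:.1%}")                   # ~20.1% (from Conclusions)
```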
Robustly collected survey data can be used to improve CEH programmes at all stages of the programme.Surveys carried out prior to programme initiation allow for better understanding of which conditions are likely to be seen in a region, and their magnitude.In this way, RAAB surveys can be used to help plan service provision [14,15].Comparison of repeated eye health surveys (undertaken before and after CEH programme interventions) has enabled assessment of the effectiveness of the programmes carried out in the intervening period, for example in Nigeria [16] and India [17]. We presented an evidence-based and comparable estimation of the magnitude of eye care needs and causes of vision impairment in a district level population, applying RAAB methodology.This allowed the CEH programme to establish a defined baseline against which to measure its progress towards eradicating avoidable blindness.After introduction of RAAB methodology in 2006, over 300 RAAB studies have been completed in low-and middle-income countries.Previous studies have assessed sequential RAABs alongside health systems/workforce programme data [16], and compared RAAB results with historical cataract surgical rate (CSR) programme data [18].However, to the best of our knowledge, this is the first study that compares baseline RAAB data with subsequent programme reach to demonstrate the need met by an eye health programme in a defined population, and that which still needs to be reached. Based on initial RAAB data, this programme reached almost 20% of previously unmet need due to the main causes of visual impairment, equally in both men and women.Regional variation is large; a previously published RAAB in Pakistan, carried out in Lahore, found VI prevalence to be remarkably low in the area: 1.9%, with a CSC of 84.0% [19].However, even with such good service coverage, there was significant inequity between genders, with CSC of 94.1% for men, and 72.1% for women.Here, VI prevalence was found to be much higher in Talagang.The proportion of cataract and URE need reached by this programme was equitable between men and women, as shown in Fig. 5b. Census data is usually required for extrapolation from survey results to national or regional magnitude.In this case, Pakistan's census had been delayed, such that at the initial time of RAAB survey, the most recent formal national census with available data was from 1998.As the 2017 census data are now available, the estimated need is higher than it was initially.This reduces the estimated proportion of need which has been met by the programme, though they remain encouraging at 18.3% of cataract and 21.1% of URE (see Fig. 5). Since the time of data extraction and analysis, more people may have attended hospital eye services and more of the pending need may have been met.Regardless, focusing on strengthening the last part of the referral system, i.e. hospital attendance following referral (Fig. 2), seems an area with room for further improvement in this context, to close the loop of eye care needs.This was flagged as a priority by ongoing monitoring and evaluation during the programme, and work has already been undertaken to tackle this [11].As a newly implemented programme, approximately one fifth of people in need have been successfully reached so far, which is very encouraging for this early implementation phase.The leading cause of blindness as demonstrated in Fig. 
4 was untreated cataract both in survey and programme data (55% and 61%).There were however discrepancies between other common causes, especially corneal opacities, which were reported with much higher frequency during the RAAB survey than in the CEH programme.Of note, more details of diagnosis were provided in the RAAB, while in programme data more cases were categorised as "other".Corneal causes of VI may have been reported as "other" in some cases, leading to this difference.Although epidemiological data collection is not the primary goal of a programme, this might imply potential for improvement in precision of data collection within the programme, and could represent a limitation in our results.Alternatively, as RAAB surveys are statistically powered for their primary outcome: prevalence of blindness, the RAAB estimates of distribution of causes may be less precise. Study limitations include that our estimates of need were based on point prevalence estimates from the RAAB survey, without modelling of new case incidence or population growth during the study time frame, or reduction in cases due to death or treatment elsewhere.This avoided introduction of an extra layer of uncertainty within our estimates, but means that some new cases may not be included in the estimates. Another interesting result to note is that the prevalences of visual impairment and blindness were similar in the RAAB results to the group which was screened as "not healthy" in the first stage of the programme (Table 1), and attended triage.There are various possible explanations for this.If screening is accurate, with minimal false negatives, then disproportionately more healthy people without visual impairment are attending for screening than people with visual impairment.Screening out many of the healthy people is then returning proportions to the population baseline.This could occur if it is easier for people with good vision to become aware of the programme or to attend (e.g.due to ease of travel).Focusing on community awareness and programme enrolment methods such that people with visual impairment are more likely to participate could increase the effectiveness of the programme in reaching the population in need. Further research is ongoing in the form of data collection within the continued programme.Visual outcome of patients who have undergone cataract surgery or received refractive correction would be beneficial to allow analysis of the effectiveness of interventions: eREC and eCSC.A repeat RAAB later would add helpful information Conclusions In this study, we compared start line RAAB survey estimates directly to programme data, to assess programme reach.The programme reached an estimated 20.1% of all expected cases of blindness in the survey area (n = 513/2,556) in the first four years, 18.3% of the unmet need from cataract, and 21.1% of URE, equally in both men and women.In combination, the survey and programme reached 23.4% of people with VI.Use of survey data in this way allows for improved monitoring and evaluation, highlighting any inequities in who is accessing the programme, as well as enabling calculation of the proportion of need which has been met.This can help planners of eye care programmes to allocate resources and to estimate the required duration of a programme to meet existing backlogs. Fig. 2 Fig. 
Fig. 2 Programme workflow, November 2018 to November 2022, for the population aged ≥ 50 in Talagang Tehsil. The sum of the percentages of patients who have received spectacles, medication, hospital (ophthalmologist) or specialist referrals may exceed 100%, as some patients have multiple outcomes.

Fig. 3 Combined reach to people with visual impairment in Talagang Tehsil, District Chakwal, Pakistan. Percentage of estimated unmet need reached during the RAAB survey and the first four years of the programme.

Fig. 4 Causes of blindness in the RAAB survey, 2018, and among CEH programme participants (triage stage). Programme data refer to the first four years of the programme.

Table 1 Comparison of the magnitude of visual impairment frequency between the RAAB sample and the CEH programme. Column headings: RAAB Sample; Extrapolation to Area Population (adjusted for population demographics); Eye Care Programme* (first four years). Row heading: Presenting Visual Status in the Worse Eye (uni- and bilateral visual impairment). *The programme data presented in this column are based on full visual acuity assessments in the triage centres. A further 12,704 people were identified as healthy during the screening programme, using a pass/fail screening threshold of 20/40 (6/12).
2024-07-10T16:43:48.342Z
2024-07-10T00:00:00.000
{ "year": 2024, "sha1": "3077bf8c655c8a52478677b6bf949c5e728da5d7", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "ScienceParsePlus", "pdf_hash": "3077bf8c655c8a52478677b6bf949c5e728da5d7", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
39198714
pes2o/s2orc
v3-fos-license
Constructing bispectral dual Hahn polynomials

Using the concept of $\mathcal{D}$-operator and the classical discrete family of dual Hahn, we construct orthogonal polynomials $(q_n)_n$ which are also eigenfunctions of higher order difference operators.

The Christoffel transform, that is, multiplying an orthogonality measure by a polynomial $r$ (see the preliminaries below), has a long tradition in the context of orthogonal polynomials: it goes back a century and a half, to when E.B. Christoffel (see [7] and also [44]) studied it for the particular case r(x) = x. The purpose of this paper is to show that this procedure also works for constructing Krall polynomials from the dual Hahn orthogonal polynomials. Dual Hahn polynomials $(R_n^{\alpha,\beta,N})_n$ are eigenfunctions of a second order difference operator acting in the quadratic lattice $\lambda(x) = x(x + \alpha + \beta + 1)$.

The examples of bispectral dual Hahn polynomials constructed in this paper are also interesting for the following reason. As has been shown in [12] and [13], when one applies duality (in the sense of [36]) to Krall-Charlier, Krall-Meixner or Krall-Krawtchouk orthogonal polynomials, exceptional discrete polynomials appear. Exceptional and exceptional discrete orthogonal polynomials $p_n$, $n \in X \subset \mathbb{N}$, are complete orthogonal polynomial systems with respect to a positive measure which, in addition, are eigenfunctions of a second order differential or difference operator, respectively. They extend the classical families of Hermite, Laguerre and Jacobi, or the classical discrete families of Charlier, Meixner and Hahn. The last few years have seen a great deal of activity in the area of exceptional orthogonal polynomials (see, for instance, [12], [13], [20] (where the adjective exceptional for this topic was introduced), [21], [22], [39], [41], [42], [43], and the references therein). The most apparent difference between classical or classical discrete orthogonal polynomials and their exceptional counterparts is that the exceptional families have gaps in their degrees, in the sense that not all degrees are present in the sequence of polynomials (unlike the classical families), although they form a complete orthonormal set of the underlying $L^2$ space defined by the orthogonalizing positive measure. This means in particular that they are not covered by the hypotheses of Bochner's and Lancaster's classification theorems for classical and classical discrete orthogonal polynomials, respectively (see [5] or [35]).

The connection by duality between Krall discrete and exceptional discrete polynomials is remarkable because nothing similar is known to happen for classical polynomials and differential operators. For exceptional Charlier and Meixner polynomials one can construct exceptional Hermite and Laguerre polynomials by passing to the limit in the same way as one goes from Charlier and Meixner to Hermite and Laguerre in the Askey tableau. The relation between Krall and exceptional polynomials at the level of the classical discrete families is rather helpful even at the classical level. For instance, using it one can simplify the difficult problem of finding necessary and sufficient conditions for the existence of a positive measure with respect to which the exceptional Hermite and Laguerre polynomials are orthogonal. In a forthcoming paper [14], we will construct exceptional Hahn polynomials by applying duality to the Krall dual Hahn polynomials constructed in this paper, with some applications to the construction of exceptional Jacobi polynomials.
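For readers coming from the Askey tableau, it may help to keep in mind the standard hypergeometric form of the dual Hahn family on this quadratic lattice (a reminder in the standard Askey-scheme normalization; the paper's own normalization in (4.1) may differ from it by a constant factor, so this display is ours):

$$\lambda(x) = x(x+\alpha+\beta+1), \qquad R_n^{\alpha,\beta,N}\bigl(\lambda(x)\bigr) = {}_3F_2\!\left(\begin{matrix} -n,\ -x,\ x+\alpha+\beta+1 \\ \alpha+1,\ -N \end{matrix}\ ;\, 1\right), \qquad n = 0,1,\dots,N.$$

Since the substitution $x \mapsto -x-\alpha-\beta-1$ exchanges the numerator parameters $-x$ and $x+\alpha+\beta+1$ while leaving $\lambda(x)$ unchanged, the right-hand side is invariant under that substitution, which is why $R_n$ is genuinely a polynomial in $\lambda(x)$; this invariance underlies the subring $P_\lambda$ introduced next.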
For the dual Hahn polynomials (R α,β,N n ) n , we have to work in the subring P λ of P consisting of polynomials in the variable λ, where λ(x) = x(x + α + β + 1): The measure with respect to which our bispectral dual Hahn polynomials are orthogonal, will be defined by applying a Christoffel transform to the dual Hahn measure. More precisely, for real numbers α, β and a positive integer N , we denote by ρ α,β,N the dual Hahn weight (see (4.5) below). Let F = (F 1 , F 2 , F 3 ) be a trio of finite sets of positive integers (the empty set is allowed). Under mild condition on the parameters α, β and N , we will then prove (in a constructive way) that the weight ρ F α,β,N defined by has associated a sequence of orthogonal polynomials and they are eigenfunctions of a higher order difference operator (for an example of orthogonal polynomials constructed from the dual Hahn family and satisfying fourth order difference equations see [45]). In order to prove this result, we use D-operators and the approach developed in [15] for constructing Krall polynomials from the Charlier, Meixner and Krawtchouk families. This approach has three ingredients (which will be considered in Section 3). The first ingredient: D-operators. This is an abstract concept introduced in [11] by the author which has shown to be very useful to generate Krall, Krall discrete and q-Krall families of polynomials (see [11], [1], [15], [16]). To define a D-operator, we need a sequence of polynomials (in λ) (p n ) n , deg λ p n = n, and an algebra of operators A acting in P λ . In addition, we assume that the polynomials p n , n ≥ 0, are eigenfunctions of certain operator D p ∈ A with eigenvalues that are linear in n: that is, we assume that D p (p n ) = np n , n ≥ 0. Observe that no orthogonality conditions are imposed at this stage on the polynomials (p n ) n . Given a sequence of numbers (ε n ) n , a D-operator D associated to the algebra A and the sequence of polynomials (p n ) n is defined by linearity in P from (−1) j+1 ε n · · · ε n−j+1 p n−j , n ≥ 0. We then say that the lowering operator D is a D-operator if D ∈ A. Using D-operators we can construct from the polynomials (p n ) n a huge class of families of polynomials (q n ) n which are also eigenfunctions of operators in the algebra A. Indeed, assume we have m D-operators D 1 , D 2 , . . . , D m (not necessarily different) defined by the sequences (ε h n ) n , h = 1, . . . , m, and that these sequences are also defined for n ∈ Z. We write ξ h x,i , i ∈ Z and h = 1, 2, . . . , m, for the auxiliary functions defined by For m arbitrary polynomials Y 1 , Y 2 , . . . , Y m , we consider the sequence of polynomials (q n ) n defined by . To ensure that at least a finite family of the polynomials q n has degree n we assume that there exists a positive integer M such that the (quasi) Casorati determinant , satisfies that Ω(n) = 0 for n = 0, 1, · · · , M (this is necessary if we want the polynomials q n , 0 ≤ n ≤ M , to be orthogonal). Notice that the dependence in λ in the determinant (1.4) appears only in the first row, and hence q n is a linear combination of m + 1 consecutive p n 's. The magic of D-operators is that, whatever the polynomials Y j 's are, there always exists an operator D q in the algebra A for which the polynomials q n , 0 ≤ n ≤ M , are eigenfunctions. Moreover, the operator D q can be explicitly constructed from the operator D p using the D-operators D j , j = 1, . . . , m. To stress the dependence of the polynomials q n , n ≥ 0, on the polynomials Y j , j = 1, . 
. . , m, we write q n = Cas Y1,...,Ym n . Polynomials defined by other similar forms of Casorati determinants have also a long tradition in the context of orthogonal polynomials and bispectral polynomials. Casorati determinants appear, for instance, to express orthogonal polynomials with respect to the Christoffel or Geronimus transform of a measure. See [44], Th. 2.5 for the Christoffel transform and [18], [19] or [46] (and the references therein) for the Geronimus transform. The Geronimus transform associated to the polynomial q(x) = (x − f 1 ) · · · (x − f k ) is defined as follows: we say thatμ is a Geronimus transform of µ if qμ = µ. Notice that the Geronimus transform is reciprocal of the Christoffel transform (see the preliminaries for more details). In this paper, we construct three different D-operators for dual Hahn polynomials (see Lemma 4.1 in Section 4). One important difference between dual Hahn polynomials and the Charlier, Meixner or Krawtchouk families consider in [15] is that for dual Hahn polynomials one of the sequences ε n vanishes at certain positive integer. For the benefit of the reader, we display here this sequence and the associated D-operator: the sequence given by defines the following D-operator (see (1.3)) for the dual Hahn polynomials: Notice that ε N +1 = 0. The second ingredient. With the second ingredient, orthogonality with respect to a measure enters into the picture (Section 4). Indeed, even if we assume that the polynomials (p n ) n are orthogonal, only for a convenient choice of the polynomials Y j , j = 1, . . . , m, the polynomials (1.4) q n = Cas Y1,...,Ym n , n ≥ 0, are also orthogonal with respect to a measure. When we take the polynomials (p n ) n to be the dual Hahn polynomials, the second ingredient establishes how to chose the polynomials Y j 's such that the polynomials q n = Cas Y1,...,Ym n (1.4) are also orthogonal with respect to a measure. As for the case of Charlier, Meixner and Krawtchouk (studied in [15]), this second ingredient turns into a very nice symmetry between the dual Hahn family and the polynomials Y j 's. Indeed, the polynomials Y j 's can be chosen to be Hahn polynomials, but with a suitable modification of the parameters. More precisely, given a D-operator for the dual Hahn family and a nonnegative integer j we provide a polynomial Y j of degree j such that for any different nonnegative integers g 1 , . . . , g m , the polynomials q n = Cas Yg 1 ,...,Yg m n , n ≥ 0, are orthogonal with respect to a measureρ. For the D-operator display above we have , where by (h α,β,N n ) n we denote the Hahn polynomials (see (4.10) below). The third ingredient. We still need a last ingredient for identifying the measurẽ ρ with respect to which the polynomials q n = Cas In Section 5 we will put together all these ingredients to construct bispectral dual Hahn polynomials. We finish pointing out that, as explained above, the approach of this paper is the same as in [15] for Charlier, Meixner and Krawtchouk polynomials. Since we work here in a quadratic lattice with a trio of finite sets of positive integers (instead of at most two sets as in [15]), and more parameters, the computations are technically more involved. Anyway, we will omit those proofs which are too similar to the corresponding ones in [15]. Preliminaries For a linear operator D : P → P and a polynomial P (x) = k j=0 a j x j , the operator P (D) is defined in the usual way P (D) = k j=0 a j D j . 
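A small worked instance of this convention (our own illustration): if $P(x)=x^2-3x$ then $P(D)=D^2-3D$, and more generally

$$P(x)=\sum_{j=0}^{k}a_jx^j \;\Longrightarrow\; P(D)=\sum_{j=0}^{k}a_jD^j, \qquad D\,p_n=\theta_n p_n \;\Longrightarrow\; P(D)\,p_n=P(\theta_n)\,p_n .$$

This elementary observation is what is used later when the eigenvalues of the new operator are read off as $\lambda_n = P_S(n)$ in Theorem 3.1.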
For a moment functional µ on the real line, that is, a linear mapping µ : P → R, the n-th moment of µ is defined by µ n = µ, x n . It is well-known that any moment functional on the real line can be represented by integrating with respect to a Borel measure (positive or not) on the real line (this representation is not unique [9]). If we also denote this measure by µ, we have µ, p = p(x)dµ(x) for all polynomial p ∈ P. Taking this into account, we will conveniently use along this paper one or other terminology (orthogonality with respect to a moment functional or with respect to a measure). We say that a sequence of polynomials (p n ) n , p n of degree n, n ≥ 0, is orthogonal with respect to the moment functional µ if µ, p n p m = 0, for n = m and µ, p 2 n = 0. Since the most important examples consider in this paper are orthogonal polynomials with respect to a degenerate measure (when N is a positive integer the dual Hahn polynomials R α,β,N n are orthogonal with respect to a finite combination of deltas), we will stress this property of non-vanishing norm when necessary. As we wrote in the introduction, for the dual Hahn polynomials (R α,β,N n ) n , we have to work in the subring P λ defined by (1.1). Hence for a moment functional µ on the real line, we also denote by µ the corresponding moment functional in P λ defined by µ, p(λ) = µ, p(λ(x)) . Favard's Theorem establishes that a sequence (p n ) 0≤n≤M (where M is a positive integer or infinity) of polynomials, p n of degree n, is orthogonal (with non null norm) with respect to a measure if and only if it satisfies a three term recurrence relation of the form (p −1 = 0) where (a n ) n , (b n ) n and (c n ) n are sequences of real numbers with a n c n = 0, 1 ≤ n ≤ M . If, in addition, a n c n > 0, 1 ≤ n ≤ M , then the polynomials (p n ) 0≤n≤M are orthogonal with respect to a positive measure, and the reciprocal is also true. If M = ∞, this measure will have infinitely many points in its support, otherwise the support might be formed by finitely many points. As we wrote in the Introduction, the kind of transformation which consists in multiplying a moment functional µ by a polynomial r is called a Christoffel transform. The new moment functional rµ is defined by rµ, p = µ, rp . Its reciprocal is the Geronimus transformμ which satisfies rμ = µ. Notice that the Geronimus transform of the moment functional µ is not uniquely defined. Indeed, write a i , i = 1, . . . , u, for the different real roots of the polynomial r, each one with multiplicity b i , respectively. It is easy to see that ifμ is a Geronimus transform of µ then the moment functionalμ + ai is also a Geronimus transform of µ, where α i,j are real numbers. These numbers are usually called the free parameters of the Geronimus transform. In the literature, Geronimus transform is sometimes called Darboux transform with parameters while Christoffel transform is called Darboux transform without parameters. The reason is the following. The three term recurrence relation (2.1) for the orthogonal polynomials with respect to µ can be rewritten as λp n = J(p n ), where J is the second order difference operator J = a n+1 s 1 + b n s 0 + c n s −1 and s l the shift operator (acting on the discrete variable n): s l (x n ) = x n+l . For any f ∈ C, decompose J into J = AB + f I whenever it is possible, where A = α n s 0 + β n s 1 and B = δ n s −1 + γ n s 0 . We then callJ = BA + f I a Darboux transform of J with parameter f . 
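A one-line consequence of this factorization, easily checked from the definitions just given (our remark): the factors intertwine $J$ with its Darboux transform,

$$\tilde{J}B = (BA+fI)B = B(AB+fI) = BJ, \qquad A\tilde{J} = JA,$$

so $B$ maps an eigenvector of $J$ with eigenvalue $\mu$ to an eigenvector of $\tilde{J}$ with the same eigenvalue, and $A$ maps back. This is the mechanism by which Darboux transforms relate the eigenfunctions of $J$ and $\tilde{J}$.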
It turns out that the second order difference operatorJ associated to a Geronimus transformμ of µ can be obtained by applying a sequence of k Darboux transforms (with parameters f i , i = 1, . . . , k) to the operator J associated to the measure µ. This kind of Darboux transform has been used by Grünbaum, Haine, Hozorov, Yakimov and Iliev to construct Krall and q-Krall polynomials. For the particular cases of Laguerre, Jacobi or Askey-Wilson polynomials, one can found Casorati determinants similar to (1.4) in [25], [26], [27], [28] or [29]. The family of measures ρ F α,β,N in the Introduction is defined by applying a Christoffel transform to the dual Hahn weight. But, it turns out that they can also be defined by using the Geronimus transform. This Geronimus transform is however defined by a different polynomial. We also have to make a suitable choice of the free parameters of this Geronimus transform and apply it to a dual Hahn weight but maybe with different parameters and affected by a shift in the variable. The following example will clarify this point. Consider From the definition of the dual Hahn weight ρ α,β,N , we have after a simple computation where w * ;α,β,N (x) denotes the mass at x of ρ α,β,N (see (4.5) below). This shows that where the free parameters (associated to the roots λ = 0, λ = λ(N ) and λ = λ(−1− α)) have to be necessarily chosen equal to p(0)w * ;α,β,N (0), p(λ(N ))w * ;α,β,N (N ) and 0, respectively. Along this paper, we use the following notation: given a finite set of positive inside of a matrix or a determinant will mean the submatrix defined by  The main ingredients 3.1. D-operators. The concept of D-operator was introduced by the author in the paper [11]. In [11], [15], [16] and [1], it has been showed that D-operators turn out to be an extremely useful tool of a unified method to generate families of polynomials which are eigenfunctions of higher order differential, difference or q-difference operators. Hence, we start by reminding the concept of D-operator. As we wrote in the Introduction, for the dual Hahn polynomials (R α,β,N n ) n , we have to work in the subring P λ of P consisting of polynomials in the variable λ, The subring P λ can be easily characterized as follows. Consider the involution I : P → P defined by Clearly we have I(λ) = λ. Hence every polynomial in P λ is invariant under the action of I. And conversely, if p ∈ P is invariant under I, then p ∈ P λ . We consider the shift operators in P λ acting on x: s j (p(λ)) = p(λ(x + j)). To stress that we will sometimes write s x,j instead of s j . Notice that for p ∈ P λ , s x,j (p) does not belong, in general, to P λ . We consider difference operators T in P λ of the form . . , r, s ≤ r, and where Q[x] denotes the linear space of rational functions. We denote by A λ the algebra formed by all the operators T of the form (3.2) which maps P λ into itself: The second order difference operator for the dual Hahn polynomials (4.3) belongs then to A λ . The starting point to define a D-operator is a sequence of polynomials (in λ) (p n ) n , deg λ p n = n, and a subalgebra of operators A of the algebra A λ (hence acting in the subring P λ and mapping it into itself). In addition, we assume that the polynomials p n , n ≥ 0, are eigenfunctions of certain operator D p ∈ A. We write (θ n ) n for the corresponding eigenvalues, so that D p (p n ) = θ n p n , n ≥ 0. Since we are interested in the dual Hahn polynomials, we only consider here the case when the sequence of eigenvalues (θ n ) n is linear in n. 
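For readability, we also record the full defining sum of the lowering operator introduced in the next paragraph; its summand appears in the Introduction as (1.3), and the complete display below follows the author's earlier D-operator papers (a restatement under that assumption, not a new definition):

$$\mathcal{D}(p_n) \;=\; \sum_{j=1}^{n} (-1)^{j+1}\,\varepsilon_n\varepsilon_{n-1}\cdots\varepsilon_{n-j+1}\; p_{n-j}, \qquad n\ge 0,$$

extended by linearity to $P_\lambda$; the lowering operator $\mathcal{D}$ is then called a $\mathcal{D}$-operator when it belongs to the algebra $\mathcal{A}$.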
Given a sequence of numbers (ε n ) n , a D-operator associated to the algebra A and the sequence of polynomials (p n ) n is defined as follows. We first consider the operator D : P → P defined by linearity from We then say that the lowering operator D is a D-operator if D ∈ A. The following Theorem was proved in [15] and shows how to use D-operators to construct new sequences of polynomials (q n ) n such that there exists an operator D q ∈ A for which they are eigenfunctions. We use m arbitrary polynomials (in x) Y 1 , Y 2 , . . . , Y m and m D-operators D 1 , D 2 , . . . , D m (not necessarily different) defined by the sequences (ε h n ) n , h = 1, . . . , m: We will assume that for h = 1, 2, . . . , m, the sequence (ε h n ) n is a rational function in n (actually that is the case for the three D-operators we will construct in Section 4 for the dual Hahn polynomials). We write ξ h x,i , i ∈ Z and h = 1, 2, . . . , m, for the auxiliary functions defined by We will consider the m × m (quasi) Casorati determinant defined by . The next Theorem is a slight modified version of Theorem 3.2 of [15] (adapted to the particularities of dual Hahn polynomials). This Theorem shows how to use D-operators to construct new sequences of polynomials (q n ) n such that there exists an operator D q in A for which they are eigenfunctions Theorem 3.1 (Theorem 3.2 of [15]). Let A and (p n ) n be, respectively, a subalgebra of operators A of the algebra A λ (hence acting in the subring P λ and mapping it into itself ), and a sequence of polynomials (in λ) (p n ) n , deg λ p n = n. We assume that (p n ) n are eigenfunctions of an operator D p ∈ A with eigenvalues equal to n, that is, D p (p n ) = np n , n ≥ 0. We also have m sequences of numbers (ε 1 n ) n , . . . , (ε m n ) n , which define m D-operators D 1 , . . . , D m (not necessarily different) for (p n ) n and A (see (3.5))) and assume that for h = 1, 2, . . . , m, each sequence (ε h n ) n is a rational function in n. Let where M is certain positive integer or infinity and Ω is the Casorati determinant defined by (3.7). Consider the sequence of polynomials (q n ) n defined by . For a rational function S(x) and h = 1, . . . , m, we define the function M h (x) by If we assume that the functions S(x)Ω(x) and M h (x), h = 1, . . . , m, are polynomials in x then there exists an operator D q,S ∈ A such that D q,S (q n ) = λ n q n , 0 ≤ n ≤ M. Moreover, an explicit expression of this operator can be displayed. Indeed, write P S for the polynomial defined by Then the operator D q,S is defined by where D p ∈ A is the operator for which the polynomials (p n ) n are eigenfunctions. Moreover λ n = P S (n). Notice that the dependence in λ of the polynomials (3.8) appears only in the first row, and hence q n is a linear combination of m + 1 consecutive p n 's. In Section 4, we will apply Theorem 3.1 to the dual Hahn polynomials. We will see there that the degree of the polynomial P S (see (3.10)) will give the order of the difference operator D q,S (3.11) with respect to which the new polynomials (q n ) n are eigenfunctions. This will be a consequence of the following Lemma (which we will prove in Section 6). Lemma 3.2. With the same notation as in the previous Theorem, write and Ω h g , h = 1, · · · , m, g ∈ N, for the particular case of Ω when Y h (x) = x g . Assume that Ψ h j , h, j = 1, · · · , m, are polynomials in x and writed = max{deg Ψ h j : h, j = 1, · · · , m}. Then M h and SΩ h g , h = 1, · · · , m, g ∈ N, are also polynomials in x. 
If, in addition, we assume that the degree of S( To compute the degree of the polynomial P S (see (3.10)) we will use the following Lemma (which we will also prove in Section 6). For a complex number u ∈ C, write s u j , j = 0, 1, 2, · · · , for the polynomial Given a trio U = (U 1 , U 2 , U 3 ) of finite sets of nonnegative integers, we write m j for the number of elements of U j , j = 1, 2, 3, m = m 1 + m 2 + m 3 and (3.13) For real numbers N, α, β, consider the rational function P defined by (3.14) P where p is the polynomial The determinant (3.14) should be understood in the way explained in the Preliminaries (see (2.2)). If with leading coefficient given by 3.2. When are the polynomials (Cas Y1,...,Ym n ) n orthogonal? Only for a convenient choice of the polynomials Y j , j = 1, . . . , m, the polynomials (q n ) n (3.8) are also orthogonal with respect to a measure. In [15], Sect. 4, a method to check the orthogonality of the polynomials (q n ) n (3.8) was provided. This tool assumed that the sequences (ε h n ) n∈Z , h = 1, . . . , m, do not vanishes for any n. This is not the case for the dual Hahn polynomials, but, as we next show, that method can be modified to include also that case. The measure µ might be degenerate, in which case for some n 0 we might have a n0 c n0 = 0. Define the auxiliary numbers ξ h n,i , i ≥ 0, n ∈ Z and h = 1, . . . , m, by We then define the sequence of polynomials (q G n ) n by . Notice that if for each h = 1, . . . , m, Y h (n) = Z h g h (n), is a polynomial in n and for some M ∈ N, Ω G (n) = 0 for 0 ≤ n ≤ M , then the polynomials q G n (3.22) fit into the definition of the polynomials (3.8) in Theorem 3.1, and hence they are eigenfunctions of an operator in the algebra A. The key to prove that the polynomials (q G n ) n are orthogonal with respect to a measureρ are the following formulas. Assume that ε h n = 0, n ≤ 0, h = 1, · · · , m and that there exists a constant c G = 0 such that , (3.25) whereg h are the m different numbers (3.21) and pG(x) = m j=1 (x −g i ). We then have the following version of Lemma 4.2 of [15]. Lemma 3.4. Assume that ε h n = 0, n ≤ 0, h = 1, · · · , m, and that there exist M, N (each of them can be either a positive integer or infinity) such that a n c n = 0 for 1 ≤ n ≤ N and Ω G (n) = 0 for 0 ≤ n ≤ M . Assume also that (3.23), (3.24) and (3.25) hold. Then the polynomials q G n , 0 ≤ n ≤ min{M − 1, N + m}, are orthogonal with respect toρ and have non-null norm. The proof is completely analogous to that of Lemma 4.2 of [15] and it is omitted. Finite sets of positive integers. We still need a last ingredient for identifying the measureρ with respect to which the polynomials (q G n ) n (3.22) are orthogonal. The measures ρ F α,β,N (1.2) in the Introduction depends on certain finite sets F 1 , F 2 and F 3 while the polynomials (q G n ) n depend on the finite set G (the degrees of the polynomials Z's). The relationship between the sets F 's and G will be given by the following transforms of finite sets of positive integers. Consider the sets Υ and Υ 0 formed by all finite sets of positive or nonnegative integers, respectively: Υ = {F : F is a finite set of positive integers}, Υ 0 = {F : F is a finite set of nonnegative integers}. We consider an involution I in Υ, and a family J h , h ≥ 1, of transforms from Υ into Υ 0 . For F ∈ Υ write F = {f 1 , . . . , f k } with f i < f i+1 , so that f k = max F . Then I(F ) and J h (F ), h ≥ 1, are defined by Something similar happens for the transform J h with respect to {0, 1, . . . 
, f k +h−1}. Notice that and if n F denotes the cardinal of F , we also have For a trio F = (F 1 , F 2 , F 3 ) of finite sets of positive integers, we will write (the use of, for instance, f 2 j to describe elements of F 2 is confusing because it looks like a square, this is the reason why we use the notation f 2⌉ j ). Dual-Hahn polynomials and their D-operators We start with some basic definitions and facts about dual Hahn and Hahn polynomials, which we will need later. Dual Hahn polynomials are eigenfunctions of the second order difference operator where (to simplify the notation we remove the parameters in some formulas). As in Section 3, the shift operators in P λ act in x: s x,j (p) = p(λ(x+j)). In particular, this implies that Γ ∈ A λ , where A λ is the algebra defined by (3.3). Dual Hahn polynomials satisfy the three term recurrence formula (R −1 = 0) where a n = n(n + α), Hence, when N is not a positive integer and α, −β − N − 1 = −1, −2, · · · , they are always orthogonal with respect to a moment functional ρ α,β,N . When N is a positive integer and α, β = −1, −2, · · · − N , α + β = −1, · · · , −2N − 1, we have Notice that R α,β,N n , R α,β,N n = 0 only for 0 ≤ n ≤ N . The moment functional ρ α,β,N can be represented by either a positive or a negative measure only when N is a positive integer and either −1 < α, β or α, β < −N , respectively. Dual Hahn polynomials satisfy the following identities (−n) j (n + α + β + 1) j (−N + j) n−j (α + j + 1) n−j (−x) j j! (we have taken a slightly different normalization from the one used in [33], pp, 234-7). Notice that h α,β,N n is always a polynomial of degree n. A straightforward computation shows the hypergeometric representation Hahn polynomials satisfy the following second order difference equation In the following Lemma (which will be proved in Section 6), we include the three D-operators we have found for dual Hahn polynomials. define three D-operators (see (3.5)) for the dual Hahn polynomials and the algebra A λ of operators defined by (3.3). More precisely We can apply Theorem 3.1 to produce from arbitrary polynomials Y j , j ≥ 0, a large class of sequences of polynomials (q n ) n satisfying higher order difference equations. But only for a convenient choice of the polynomials Y j , j ≥ 0, these polynomials (q n ) n are also orthogonal with respect to a measure. As we wrote in the Introduction, when the sequence (p n ) n is the dual Hahn polynomials a very nice symmetry between the family (p n ) n and the polynomials Y j 's appears. Indeed, the polynomials Y j can be chosen as Hahn polynomials with parameters depending on the D-operator D h . This symmetry is given by the recurrence relation (3.19), Then they satisfy the recurrence (3.19), where (a n ) n∈Z , (b n ) n∈Z , (c n ) n∈Z are the sequences of coefficients in the three term recurrence relation for the dual Hahn polynomials (R α,β,N n ) n (4.4) and respectively. Proof. We only prove the first case. The recurrence relation (3.19) is then Bispectral dual Hahn polynomials In this section we put together all the ingredients showed in the previous Sections to construct bispectral dual Hahn polynomials. Along this section we assume that N is a positive integer. This condition in necessary for the existence of a positive weight for the dual Hahn polynomials, and only in this case we have an explicit expression of that weight. 
However, this condition is not needed in our construction and hence the results in this Section are also valid when N is not a positive integer (once one has adapted the constrains on the parameters α, β and N ). Since we have three D-operators for dual Hahn polynomials, we make a partition of the indices in Theorem 3.1 and take In particular, the auxiliary sequences of numbers ξ h x,i , h = 1, · · · , m, i ∈ Z, (see (3.6)) are then the following rational functions of x For i ∈ Z, we finally write Z i = {j ∈ Z : j ≤ i}. We are now ready to establish the main Theorem of this paper. Theorem 5.1. Let F = (F 1 , F 2 , F 3 ) be a trio of finite sets of positive integers (the empty set is allowed, in which case we take max F = −1). For h 1 , h 3 ≥ 1, consider the trio U = (U 1 , U 2 , U 3 ) whose elements are the transformed sets J hj (F j ) = U j = {u j⌉ i : i ∈ U j }, j = 1, 3, and I(F 2 ) = U 2 = {u 2⌉ i : i ∈ U 2 }, where the involution I and the transform J h are defined by (3.26) and (3.27), respectively. Define m = m 1 + m 2 + m 3 . Let α and β be real numbers satisfying where we denote by f i,M the maximum element in F i , i = 1, 2, 3. In addition, we assume that Consider the dual Hahn and Hahn polynomials (R α,β,N n ) n (4.1) and (h α,β,N n ) n (4.10), respectively. Assume that Ω U α,β,N (n) = 0 for 0 ≤ n ≤ N + m 1 + m 2 + 1 where the m × m Casorati determinant Ω U α,β,N is defined by We then define the sequence of polynomials q n , n ≥ 0, by Then (1) The polynomials q n , 0 ≤ n ≤ N + m 1 + m 2 , are orthogonal and have non-null norm with respect to the measurẽ (2) The polynomials q n , 0 ≤ n ≤ N + m 1 + m 2 , are eigenfunctions of a higher order difference operator of the form (3.2) with (which can be explicitly constructed using Theorem 3.1). Proof. First of all, notice that we have performed a straightforward normalization of the polynomials q n , 0 ≤ n ≤ N + m 1 + m 2 (with respect to (3.8)). Notice that the assumption (5.4) on the parameters α and β implies that α,β = −1, · · · , −Ñ ,α +β = −1, · · · , −2Ñ − 1, and hence the dual Hahn weight ρα ,β,Ñ (x + f 2,M + 1) is well defined and its support is {−f 2,M − 1, · · · , N + f 1,M + h 1 }. Using the assumptions (5.4) on the parameters α and β, we deduce that the support of the measureρ F ,h1,h3 Notice that the support is formed by Taking into account that U 1 = J h1 (F 1 ), U 2 = I(F 2 ) and (3.28) we get Before going on with the proof we comment on the assumption that Ω U α,β,N (n) = 0 for 0 ≤ n ≤ N + m 1 + m 2 + 1. If F 3 = 0, since the sequence ε h n , h ∈ U 3 , vanish for n = N +1, it is no difficult to see that Ω U α,β,N (n) = 0 for N +m 1 +m 2 +2 ≤ n ≤ N +m (the proof is similar to that of the first part of Lemma 3.3). If F 3 = ∅, the situation is different, and, except for exceptional values of the parameters α, β and N , we have Ω U α,β,N (n) = 0 for all n ≥ 0. In this cases, the polynomials (q n ) n are defined for all n ≥ 0 and always have degree n (in λ). However, it is not difficult to see that for n ≥ N + m 1 + m 2 + 1, the polynomial q n (λ(x)) vanishes in the support ofρ F ,h1,h3 α,β,N . Hence it is still orthogonal with respect to this measures but has null norm. This is completely analogous to the situation with the dual Hahn polynomials R n (λ), which are defined for all n ≥ 0 (except when α = −1, −2, · · · ) and always have degree n. But if n ≥ N + 1, they vanish in the support of its weight (see (4.2)). To prove (1) of the Theorem, we use the strategy of the Section 3.2. We need some notation. 
Write Z h j , h = 1, . . . , m, j ≥ 0, for the polynomials It is easy to see that pG has simple roots if and only if These constrains follow easily from the assumptions (5.4) on the parameters α and β. Hence, pG has simple roots. Proceeding as in the proof of Theorem 1.1 in [15], one can prove that 3). From the recurrence relation for the dual Hahn polynomials, we get that a n c n = 0 for 0 ≤ n ≤ N , but c N +1 = 0. It is also easy to check that ε h n = 0, h = 1, · · · , m, when n is a negative integer. Since we assume that Ω U α,β,N (n) = 0 for 0 ≤ n ≤ N +m 1 +m 2 +1, the orthogonality of the polynomials q n , 0 ≤ n ≤ N +m 1 +m 2 , with respect toρ U α,β,N is now a consequence of the Lemmas 4.2, 3.4 and the identities (5.2) and (5.3). They have also non-null norm. We now prove (2) of the Theorem. Using (5.3), it is straightforward to see that , and Z l j , l = 1, · · · , m, j ≥ 0, and G = {g 1 , · · · , g m } are defined by (5.9) and (5.10), respectively. Consider the particular case of the polynomial P (3.14) in Lemma 3.3 for Y l (x) = Z l g l (x) (and denote it again by P ), and write S for the rational function where p is the polynomial (3.15) in Lemma 3.3. A simple computation shows that S(x)Ω U α,β,N (x) = P (x). Since the sequences (ε h n ) n (5.2) generate the D-operators in Lemma 4.1 for the dual Hahn polynomials, we get, as a direct consequence of Theorem 3.1, that the polynomials q n , 0 ≤ n ≤ N +m 1 +m 2 , are eigenfunctions of a higher order difference operator D q,S in the algebra A λ (3.3), explicitly given by (3.11). We now compute the order of D q,S . Since SΩ U α,β,N = P , Lemma 3.3 gives that the degree of mi 2 (notice that the assumption (3.16) in Lemma 3.3 are just (5.12) above). Hence the polynomial P S defined by P S (x) − P S (x − 1) = S(x)Ω U α,β,N (x) has degree d + 1. Taking into account that the m-tuple G (5.10) is formed by the sets J h1 (F 1 ), I(F 2 ) and J h3 (F 3 ), the definitions of the involution I (3.26) and the transform J h (3.27) give That is, P S is a polynomial of degree r. Consider the coefficients B and D of s x,1 and s x,−1 in the second order difference operator Γ for the Dual Hahn polynomials (4.3). We then deduce that the operator P S (Γ) has the form r l=−rh and u 1 denotes the leading coefficient of the polynomial P S . Using (4.3), we deduce that bothh −r and h r are rational functions whose numerators are polynomials of degree 3r and whose denominators are polynomials of degree 2r. Consider now the coefficientsB h andD h of ∆ x,1 and ∇ x,−1 in any of the Doperators D h for the Dual Hahn polynomials (see Lemma 4.1). Using Lemmas 3.2 and 3.3, we can conclude that the polynomials M h (3.9) have degree at most v h = r − g h . Since Y h has degree g h , this shows that the operator M h (Γ)D h Y h (Γ) has the form r l=−rĥ As before, we deduce that bothĥ −r andĥ r are rational functions whose numerators are polynomials of degree 3r − 1 and whose denominators are polynomials of degree 2r. To complete the proof of (2) of the Theorem it is enough to take into account the expression of D q,S given by (3.11). Notice that we can generate more higher order difference operators with respect to which the polynomials (q n ) n are eigenfunctions by choosing a polynomial p, considering the rational function S p = pS, where S is defined by (5.16), and proceeding as in the proof of (2) of the previous Theorem. 
We guess that using this approach one can generate the whole algebra of difference operators having the polynomials (q n ) n as eigenfunctions (except for some exceptional values of the parameters α, β and N ). be a trio of finite sets of positive integers (the empty set is allowed, in which case we take max F = −1). Let α and β be real numbers satisfying Consider the weight ρ F α,β,N defined by where ρ α,β,N is the dual Hahn weight. Assume that Ω U α+f2,M +f3,M +2,β+f2,M −f3,M ,N −f1,M −f2,M −2 (n) = 0, 0 ≤ n ≤ N + m 1 + m 2 + 1, where U = (I(F 1 ), I(F 2 ), I(F 3 )). Then the measure ρ F α,β,N has associated a sequence of orthogonal polynomials and they are eigenfunctions of a higher order difference operator of the form (3.2) with (which can be explicitly constructed using Theorem 3.1). The hypothesis on Ω U (n) = 0, for 0 ≤ n ≤ N + m 1 + m 2 + 1, in the previous Theorem and Corollary is then sufficient for the existence of a sequence of orthogonal polynomials with respect to the (possible signed) measureρ F ,h1,h3 α,β,N . We guess that this hypothesis is also necessary for the existence of such sequence of orthogonal polynomials. Notice that there are different sets F 1 and F 2 for which the measures ρ F α,β,N (5.17) are equal. Each of these possibilities provides a different representation for the orthogonal polynomials with respect to ρ F α,β,N in the form (5.7) and a different higher order difference operator with respect to which they are eigenfunctions. It is not difficult to see that only one of these possibilities satisfies the condition f 1,M , f 2,M < N/2. This is the more interesting choice because it minimizes the order 2r of the associated higher order difference operator. This fact will be clear with an example. Take N = 100 and the measure µ = (x − 1)(x − 5)(x − 68)ρ α,β,N . There are eight couples of different sets F 1 and F 2 for which the measures µ and ρ F α,β,N (5.17) coincide (except for a sign). They are the following Only one of these couples satisfies the assumption f 1,M , f 2,M < N/2: Actually, it is easy to check that this couple minimizes the number Hence, it also minimizes de order 2r of the difference operator with respect to which the polynomials (q n ) n are eigenfunctions. Proofs of the Lemmas In this Section, we include the proofs of Lemmas 3.2, 3.3 in Section 3 and Lemma 4.1 in Section 4. Proof of the Lemma 3.2. To simplify the notation write d g = deg(S(x)Ω h g ). By hypothesis, we have d g ≤ g + d 0 . We can also write On the one hand, from the definition of Ψ h j , one has On the other hand, by expanding the (quasi) Casorati determinant Ω h g by its h-row, we get This shows that both M h and SΩ h g are polynomials in x. Moreover, the degree of M h is at mostd. Hence, ifd ≤ d 0 , the proof is finished. We then assume that d > d 0 . Using that we get for SΩ h g the expansion We now prove by induction on g that (−1) j j g a h,j i = 0, for d 0 + g < i ≤d. Indeed, for g = 0, the polynomial in the left hand side of (6.3) has degree d 0 , and the polynomial in the right hand side has degree at mostd. The particular caso of (6.4) for g = 0 then follows from the expansion (6.1). Assume now that (6.4) holds for any nonnegative number 0, 1, · · · , g − 1. Take now a number i with d 0 + g < i ≤d. The induction hypothesis shows that for v = 1, · · · , g, then m j=1 (−1) j j g−v a h,j i = 0. Hence the addends in the right hand side of (6.3) corresponding to v = 1, · · · , g, have degree at most d 0 + g. 
Since the polynomial in the left hand side of (6.3) has degree at most d 0 + g as well, one can deduce that also the first addend (v = 0) in the right hand side of (6.3) has degree at most d 0 + g. Using again (6.1), we get that also m j=1 (−1) j j g a h,j i = 0 for d 0 + g < i ≤d. To finish the proof it is enough to insert in (6.2) the expansion of Ψ h j and use where s u j (x) is the polynomials defined by (3.12). In order to prove that P is a polynomial, it is enough to prove that We now prove the claim. Write φ U for the special case of the determinant det Q(x) when We first prove that the claim follows if we prove it for φ U . Indeed, for arbitrary polynomials Y i with leading coefficient equal to r i , we can write where the sum is taken over all the trios V = (V 1 , V 2 , V 3 ) satisfying that 0 ≤ v j⌉ i ≤ u j⌉ i , for i ∈ U j , j = 1, 2, 3, and at least for some j and i 0 with i 0 ∈ U j , v j⌉ i0 < u j⌉ i0 . The claim for det Q now follows easily. Proof of the Lemma 4.1. We prove the Lemma only for D 1 (the proof for the other D-operators is similar and it is omitted). But this last identity can be checked easily from the power expansion of the hypergeometric function.
2014-07-25T17:08:45.000Z
2014-07-25T00:00:00.000
{ "year": 2015, "sha1": "cb8743521359f815f8500ba8627e2af2219e2687", "oa_license": "elsevier-specific: oa user license", "oa_url": "https://doi.org/10.1016/j.jat.2014.09.004", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "cb8743521359f815f8500ba8627e2af2219e2687", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Computer Science" ] }
244938713
pes2o/s2orc
v3-fos-license
Potential of Quinine Sulfate for COVID-19 Treatment and Its Safety Profile: Review Abstract The coronavirus disease 2019 (COVID-19) pandemic is currently the largest and most serious health crisis in the world. There is no definitive treatment for COVID-19. Vaccine administration has begun in various countries, but no vaccine is 100% effective. Some people are not protected after vaccination, and there are some groups of people who cannot be vaccinated therefore, research on COVID-19 treatment still needs to be done. Of the several drugs under study, chloroquine (CQ) and hydroxychloroquine (HCQ) are quite controversial, although they have good activity against SARS-CoV-2, both drugs have serious side effects. Indonesia with its wealth of natural ingredients has one potential compound, quinine sulfate (QS), which has the same structure and activity as CQ and HCQ and a better safety profile. The aim of this article was to review the potential of QS against the SARS-Cov-2 virus and outline its safety profile. We conclude that QS has the potential to be developed as a COVID-19 treatment with a better safety profile than that of CQ and HCQ. Introduction The coronavirus disease 2019 (COVID-19) is currently a very serious health problem in the world. The World Health Organization (WHO) categorized the disease as a pandemic on March 11, 2020. The infection, which is caused by the Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), is spreading rapidly to various countries. 1,2 Currently, there is no definitive treatment for COVID-19. Various efforts are being made on a global and national scale to deal with the spread of this infection. Research on drugs that can be used to treat COVID-19 continues, among which are Chloroquine (CQ) and Hydroxychloroquine (HCQ). However, the use of these two drugs caused serious side effects such that on November 13, 2020, the Indonesia Food and Drug Administration issued a notification letter regarding the revocation of the Emergency Use Authorization (EUA) for CQ and HCQ. Previously, the United States Food and Drug Administration (US-FDA) had revoked the EUA for CQ and HCQ on June 15, 2020. Subsequently, the WHO stopped the clinical trial (Solidarity Trial) of HCQ because the drug's potential benefits for such use do not outweigh its known and potential risks. 3 Indonesia, with its wealth of natural ingredients, has one natural compound, Quinine Sulfate (QS), which is an active compound from quinine extract used to treat malaria subjects. This compound has been used for 80 years under the category of a limited over-the-counter drug so that the profile of these drugs is well understood. 4 QS compounds have a structure and mechanism similar to those of CQ and HCQ. These three compounds can bind to the Lys353 residue in the peptide domain of the Angiotensin Converting Enzyme 2 (ACE-2) receptor to prevent binding between the virus and the ACE-2 receptor in humans and prevent viral fusion with cells; thus, they have the potential to treat COVID-19. 5 Based on its characteristic which is the main component of natural compounds, it is hoped that QS can be an alternative therapy for COVID-19 that has effectiveness rivaling that of CQ and HCQ, but with a better safety profile. A recent study by Große et al indicated that QS is a potential treatment option for SARS-CoV-2 infection, which is well tolerated with a toxicological profile that is predictable and significantly better than that of HCQ and CQ. 
6 However, it is undeniable that QS also has several side effects that affect hematology, kidney function, liver function and cardiovascular function. 7 Hematological disorders that are often experienced with the use of QS include: thrombocytopenia, microangiopathic hemolytic anemia, neutropenia, disseminated intravascular coagulation, eosinophilia, autoimmune hemolytic anemia, lymphopenia, and methemoglobinemia, as well as impaired kidney function, characterized by increased creatinine levels. Liver disorders are characterized by elevated serum transaminases. Meanwhile, the cardiovascular system usually manifests chest pain, T wave inversion and pericarditis. 8 Apart from these side effects, quinine has been used for more than 70 years and shows a good safety profile as long as it is used as prescribed, and the therapeutic dose is not exceeded. The side effects that occur due to the use of quinine are also reversible, can be cured and are overcome by discontinuing quinine. 9 Method Scientific data searches were carried out online on open access articles. This review includes research articles from the PubMed, Science Direct and Google Scholar journal databases for the period 2011-2021 with the keywords "Quinine Sulfate (QS) for COVID-19", "QS safety profile", "chloroquine and hydroxychloroquine for COVID-19", "Chincona Bark for COVID-19" and "traditional herbal medicine for COVID-19". The inclusion criteria for the articles were that they had to be in English; they had to include activity against SARS-Cov-2 and Angiotensin Converting Enzyme 2 (ACE2) receptor; and they had to be clinical trials, meta-analysis, randomized controlled trials, reviews or systematic reviews. Articles were not used if the language of publication was not English, or if the article did not contain the desired keywords. The search results from the total database yielded 104 articles. Eighty-four of these articles were excluded because they did not meet the inclusion criteria or met at least one of the exclusion criteria. Finally, 20 articles were available for review discussing the potential of QS for COVID-19 treatment and its safety profile. Quinine Sulfate Quinine is an alkaloid compound contained in Cinchona bark. 4 The Cinchona plant belongs to the Plantae kingdom, the Magnoliophyta division, the Magnoliopsida class, the Gentianales order, the Rubiaceae family, and the Cinchonoideae subfamily. 10 The most common compounds from the Cinchona plant are quinine, quinidine, cinchonine and cinchonidine. Over the years, the active ingredient in Cinchona bark has been used to treat fevers. Quinine was isolated and identified as the oldest effective drug for the treatment of malaria to date. In 1934, a chemist from Germany, Hans Andersag, synthesized the CQ molecule from quinine for the first time. 11 HCQ was synthesized in 1946 and proposed as a safer alternative to CQ in 1955. 12 Until now, the quinine compound, which is the main component of secondary metabolites in the Cinchona plant, is still used as a potent malaria drug. 4 Several studies have revealed that the alkaloids in quinine have other potential activities, such as antiobesity, anticancer, antioxidant, antiinflammatory, antimicrobial, and antiviral. 13,14 Potential of QS for COVID-19 Treatment CQ, HCQ and QS are antimalarial drugs derived from the alkaloid quinoline. 
15 Quinoline is a heterocyclic aromatic organic compound with the molecular formula C9H7N and has a double ring structure containing a benzene ring that fuses with pyridine on two adjacent carbon atoms. 16 CQ, HCQ and QS have a similar structure with a specific benzene ring ( Figure 1). 17,18 CQ, HCQ and QS inhibit quinone reductase-2 (hQR2) in human red blood cells. 19 The inhibitory activity of quinoline on hQR2 has also been demonstrated through other in-vitro studies. 20 Inhibition of the hQR2 enzyme involved in the biosynthesis of sialic acid (a cell transmembrane protein acid monosaccharide required for ligand recognition) makes CQ a broad antiviral agent. It should be noted that human coronavirus HCoV-O43 and orthomyxoviruses use sialic acid groups as receptors. 21 226 Since HCQ and CQ are quinine derivatives and similar chemical structures, it is thought that they can be used as a therapeutic agent in the treatment or prevention of COVID-19. 23 Thus, QS, like CQ, has potential antiviral activity. QS, CQ and HCQ are weak bases. 24 This property is known to increase the pH of acidic intracellular organelles, such as endosome and lysosomes. This is essential for membrane fusion so that it interferes with the fusion process of SARS-CoV-2 in cells, thus SARS-CoV-2 cannot reproduce itself. 22,25 SARS-Cov-2 uses the ACE-2 receptor to infect human cells. 26 A recent in-silico study by Lestari et al aimed to compare the affinity of CQ, HCQ and QS bonds to ACE-2 receptors using molecular docking studies. The results showed that all compounds (CQ, HCQ and QS) can interact with amino acid residues in the ACE-2 receptor peptidase domain. Of the three compounds, quinine showed the strongest affinity for the ACE-2 receptor with the value of free energy bonding (ΔG = −4.89 kcal/mol) followed by HCQ (ΔG = −3.87 kcal/mol), and CQ (ΔG = −3.17 kcal/ mol). 5 The more negative the ΔG value, the stronger the ligand-receptor complex bonds. 27 In addition, these three compounds also form hydrogen bonds with Lys353. From this study, it can be concluded that QS has a higher binding affinity than 2 other drugs currently used as COVID-19 therapy so that it has the potential to be used as a therapeutic agent for COVID-19 by binding to the ACE-2 receptor. 5 Another study analyzed 9 antimalarial phytochemical compounds in-silico using Chimera software, the ligands that have the most affinity with the ACE2 receptor are CQ, quinine, artemisinin and febrifugine. These four compounds form hydrogen bonds with Thr371, Glu406, Arg518, Asp368 28 . An in-silico study has also been carried out to see if the interaction between quinine and doxycycline was targeted against non-structural protein (nsp 12), which plays a vital role in replication and transcription of the corona viral genome. The compounds doxycycline and quinine were found to have good binding affinity with the corona viral non-structural protein. 29 An in-silico study of 13 compounds with the best binding affinity towards SARS-CoV-2 protease was carried out. The ligands were subjected to molecular docking using Autodock Vina. Of the 13 traditional herbal compounds, including quinine, all had good binding affinity for the SARS-CoV-2 protease, Epicatechin and apoquinine showed the highest binding affinity. 30 Roza et al, 2020, determine the interaction between SARS-CoV-2 and quinine derivative compounds. The results showed that from the 10 tested compounds against SARS-CoV-2 virus cells, all of them have the ability to act as an antivirus, including quinine. 
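To put the docking energies quoted above into perspective, a binding free energy translates into an association constant roughly as K ~ exp(-ΔG/RT), so a more negative ΔG means exponentially stronger binding. The sketch below is illustrative only: the ΔG values are those quoted from the in-silico study above, the temperature of 298 K is our assumption, and docking scores are at best a rough proxy for true affinities.

```python
import math

# Convert docking free energies (kcal/mol) into relative association constants
# via K ~ exp(-dG / RT). Values are those quoted in the text; T is assumed.
R_KCAL = 1.987e-3   # gas constant, kcal/(mol*K)
T = 298.0           # assumed temperature, K

dg = {"quinine": -4.89, "hydroxychloroquine": -3.87, "chloroquine": -3.17}

k_rel = {name: math.exp(-g / (R_KCAL * T)) for name, g in dg.items()}
baseline = k_rel["chloroquine"]
for name, k in sorted(k_rel.items(), key=lambda item: -item[1]):
    print(f"{name:>20}: ~{k / baseline:.1f}x chloroquine")
```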
31 Trina et al, 2020, conducted a molecular docking study of 13 plant bioactive compounds against SARS-CoV-2 Main Protease (MPro) and Spike (S) Glycoprotein Inhibitors, including quinine. Quinine has a better binding affinity than CQ and HCQ (as standard). 32 A summary of several in silico studies of Quinine Sulfate against SARS-CoV-2 is shown in Table 1. A study conducted by Grobe et al, 2020, investigated the effect of QS, CQ and HCQ on the inhibition of SARS-CoV-2 cell replication in Vero B4 cells. Three days post infection, cell culture supernatants were harvested and virus production analyzed by Western blot. The results showed that 10 μM QS could reduce the replication of the strong SARS-CoV-2 virus where virion progeny production was almost completely blocked. The inhibition of SARS-CoV-2 virus replication by QS was better than that by CQ and HCQ, with 10 μM QS, virus replication was reduced by 90%, whereas, with HCQ, it was only reduced by 50%. It can be concluded that QS has a stronger activity than CQ and HCQ. 6 The study also measured the To facilitate detection and analysis of infected cells, as well as to measure the spread of the virus in living cells, an infectious clone of SARS-CoV-2 expressing mNeonGreen reporter gene was used. 33 Caco-2 cells were infected on 96-well plates with a relatively high multiplicity of infection (MOI) of 3, for strong fluorescence readings. At 48 hpi (hour post infection) the cells were fixed, and the nucleus was stained with Hoechst to determine the level of relative infection (mNeonGreen +/Hoechst + cells) and potential toxic effects of treatment and infection. This analysis confirmed the results obtained from Vero cells and showed that QS concentrations up to 100 μM were non-toxic to Caco-2 cells. Furthermore, the 50 μM and 100 μM QS treatments could almost completely inhibit SARS-CoV-2 infection at high MOI. Therefore, QS has the potential as a treatment option that can be tolerated and widely used for SARS-CoV-2 infection, with a predictable toxicological profile and is significantly better when compared with CQ or HCQ. 6 In Vero cells, quinine inhibited SARS-CoV-2 infection more effectively than CQ, and HCQ and was less toxic. In human Caco-2 colon epithelial cells, as well as the lung cell-line A549 stably expressing ACE2 and TMPRSS2, quinine also showed antiviral activity. In consistency with Vero cells, quinine was less toxic in A549 as compared to CQ and HCQ. This study also confirms that in Calu-3 lung cells, expressing ACE2 and TMPRSS2 endogenously. In Calu-3, infections with high titers of SARS-CoV-2 were completely blocked by quinine, CQ, and HCQ in concentrations above 50 μM. The estimated IC 50 were ~25 μM in Calu-3, while overall, the inhibitors exhibit 228 IC 50 values between ~3.7 to ~50 μM, dependent on the cell line and multiplicity of infection (MOI). Conclusively, these data indicate that quinine could have the potential as a treatment option for SARS-CoV-2, as the toxicological and pharmacological profile seems more favorable when compared to its progeny drugs HCQ or CQ. 34 Another in-vitro study using Vero B6 cells infected with the SARS-CoV-2 strain (IHUMI-3). Quinine showed medium antiviral in-vitro activity with EC 50 of 10.7 ± 2.0 uM and EC 50 of 38.8 ± 34 uM. 35 A 600 mg single dose of QS led to blood Cmax around 3.5 mg/L (around 8.5 uM). 36 In rat, after intravenous dose of 10 mg/kg of quinine, the observed concentration lung/blood ratio was 246. 
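As a rough back-of-the-envelope check on the exposure figures in this passage (a sketch only: the molar mass used is that of quinine free base, which is our assumption; the ~8.5 µM quoted above for 3.5 mg/L presumably reflects a salt-adjusted conversion):

```python
# Back-of-the-envelope unit check for the exposure figures quoted above (illustrative).
# Molar mass is for quinine free base; treat the exact micromolar value as approximate.

MW_QUININE_BASE = 324.4   # g/mol, assumption for this sketch

def mg_per_l_to_micromolar(conc_mg_per_l: float, mw_g_per_mol: float) -> float:
    """Convert a mass concentration in mg/L to micromol/L."""
    return conc_mg_per_l / mw_g_per_mol * 1_000

cmax_blood_um = mg_per_l_to_micromolar(3.5, MW_QUININE_BASE)   # ~10.8 uM (text quotes ~8.5 uM)
lung_um = cmax_blood_um * 246                                   # applying the rat lung/blood ratio
print(f"Blood Cmax ~ {cmax_blood_um:.1f} uM; lung (rat ratio) ~ {lung_um:.0f} uM")
# Compare with the in-vitro EC50 of ~10.7 uM reported above for Vero B6 cells.
```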
37 An in-vitro effective concentration in the lungs to cure SARS-CoV-2 is achievable in human. If its clinical efficacy in human is confirmed, quinine could be administered intravenously in patients before the cytokine storm. 35 A summary of several in vitro studies of Quinine Sulfate against SARS-CoV-2 is shown in Table 2. Research on the effect of quinine as an antiviral was first reported by Seeler et al, 1946. This study concluded that quinine has an antiviral effect against influenza viruses by consistently slowing the course of infection with the influenza virus. 38 Quinine's antiviral activity has also been demonstrated in several other viruses, such as H1N1 39 , Influenza A virus (IAV), Human Immunodeficiency Virus (HIV), Zika virus, Ebola and dengue. 40,41 In addition, the antiviral effect of quinine was described by Baroni at al, who evaluated QS in HaCat HSV-1 infected cells. The results of these studies indicate that quinine has the effect of inhibiting viral infection through indirect pathways, such as activating the protein heat shock response, interfering with several viral replication pathways, and inhibiting Nuclear Factor kB (NF-kB) by blocking gene expression. 42 Antiviral mechanisms of action of quinine include the inhibition of cytokine production (management of cytokine storm), and T cell release of IL-1,2,6,18, TNFα and INFγ, reduce levels of chemokines CCL2 and CXCL10, inhibition of micro-RNA expression, decreased TH17related cytokines, decreased DNA, RNA and protein synthesis in thymocytes. 43,44 Besides the direct antiviral activity, it is also possible to act by suppressing the synthesis of cytokines and especially pro-inflammatory factors with its anti-inflammatory effect. In-vitro data show that quinine, CQ and HCQ inhibit SARS-CoV-2 replication. 23 Quinine is a good candidate for the development of an effective drug to treat SARS-CoV-2 because of its DNA- intercalating properties. 45,46 As fever is one of the most common side effects of COVID-19 and quinine has prominent antipyretic effects, these alkaloids could be introduced as a treatment for handling this complication of COVID-19. 35,47 A recent study demonstrated that the antiviral mechanism of quinine is indirect killing of the virus. The study was conducted to investigate the effects of QS on dengue virus-infected cells. Due to the relative similarity of the structure of the dengue virus and SARS-CoV-2, it is possible for SARS-CoV-2 to use several relatively similar methods to infect cells and trigger cytokines in fighting viruses. 48 Host cells infected with the virus can initiate viral RNA release and interfere with normal protein synthesis. However, the expression of the Pathogen Recognition Receptor (PRR) known as Retinoic acid-Inducible Gene I (RIG-I) in infected host cells increases slowly to promote the Interferon-I (IFN-I) signaling pathway and increases the expression of IFN-stimulated genes (RNase L, PKR), which can inhibit protein synthesis, thereby inhibiting viral replication. 49 The RNase L pathway can remove ssRNA in virus-infected cells, meanwhile PKR blocks translation, and affects signal transduction. 50 The targets of quinine action are inhibition of genomic replication and translation of infected host cells and increased expression of RIG-I and IFN-α ( Figure 2). 51 It has been shown that IFN-α is a cytokine secreted by host cells to fight viruses. 
48 The cytokine storm that is currently being discussed may not appear after quinine administration for COVID-19, because available data indicate that the release of TNFα, the most important cytokine in determining the severity of COVID-19 symptoms, is inhibited by quinine. 52,53 Because quinine blocks TNF-α expression at the mRNA transcription level (as shown by Northern blot analysis), it should reduce the inflammatory reaction in infected individuals rather than promote the inflammatory process. 52,54,55 For example, research on IBD patients in relation to SARS-CoV-2 shows possible protective effects of anti-TNFα antibodies in Crohn's patients. 56 A summary of several potentials of Quinine Sulfate against SARS-CoV-2 is shown in Table 3 (among the potentials listed there: anti-SARS-CoV-2 inhibitory potential demonstrated by molecular docking analysis using the COVID-19 protease 6LU7 as a target [62]; antiviral activity through increased synthesis of RIG-I and IFN-α, which block viral mRNA translation via PKR activation and degrade viral mRNA via RNase L, thereby inhibiting protein synthesis [51]; antiviral activity via DNA intercalation [46]; and antipyretic activity together with antiviral activity against SARS-CoV-2).

Currently, clinical trials of QS to determine its efficacy and safety are being carried out in Indonesia, where QS has been used to treat patients with malaria. At present there are no clinical research data to suggest a dose of QS for COVID-19. The dosage form currently available in Indonesia is QS 200 mg. Therefore, based on the recommendations of the COVID-19 Treatment Protocol, 2nd Edition, regarding the dose of HCQ, and on the available QS dosage forms, and following the drug-repurposing concept, this study uses the same dose as HCQ, so QS remains within its currently used therapeutic dose range. In addition, the HCQ dose and the QS dose used for malaria are the same, so the QS dose used for COVID-19 refers to the HCQ dose previously used for COVID-19. Moreover, in vitro studies showed that the toxicity profile of QS was better than that of HCQ in Vero B6 cells, with a CC50 > 100 μM for QS versus 20.4 ± 1.4 μM for HCQ. 35 The plasma concentration of QS that can cause toxicity ranges from 5 to 17.8 mg/L, whereas the plasma concentration after a 100 mg dose of QS is about 0.5 mg/L, so the dose of QS used for COVID-19 is still within safe limits. 57

It can be concluded that QS has both immunostimulating and immunosuppressant activity in fighting viral infections. When quinine effectively intensifies the production of IFN-α, it functions as an immunostimulator to inhibit viruses. In contrast, quinine inhibits TNF-α release and thus has an immunosuppressant effect. These two different activities may have beneficial effects in people who are infected with COVID-19.

Safety Profile of QS

Some investigators consider quinine to be substantially safe at therapeutic doses, but doses above the therapeutic range may cause serious side effects. Accidental or intentional overdoses have been linked to serious and fatal cardiac arrhythmias. 58,59 The most common side effect is a symptom complex called "cinchonism syndrome". Mild cinchonism includes headache, vasodilation and sweating, nausea, tinnitus, hearing loss, vertigo or dizziness, blurred vision, and color perception disorders. More severe cinchonism includes vomiting, diarrhea, abdominal pain, deafness, blindness, and disturbances in heart rhythm or conduction. Most cinchonism symptoms are reversible and resolve when quinine is discontinued.
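Because the dosing argument above rests on comparing expected plasma levels against the reported toxic range, a small dose-proportionality check can make the margin explicit. The linear scaling from the quoted 0.5 mg/L per 100 mg figure and the example doses are our own illustrative assumptions; real quinine kinetics are not strictly linear.

```python
# Rough therapeutic-margin check for the repurposed QS dosing discussed above.
# Assumption (ours): plasma concentration scales linearly with dose from the
# quoted ~0.5 mg/L after a 100 mg QS dose; actual kinetics may deviate.

CONC_PER_100_MG = 0.5            # mg/L plasma level quoted after 100 mg QS
TOXIC_FLOOR_MG_PER_L = 5.0       # lower end of the quoted toxic range (5-17.8)

def predicted_plasma_mg_per_l(dose_mg: float) -> float:
    """Linearly scaled plasma concentration for a single oral dose."""
    return CONC_PER_100_MG * dose_mg / 100.0

for dose_mg in (200, 400, 600):  # hypothetical single doses for illustration
    level = predicted_plasma_mg_per_l(dose_mg)
    margin = TOXIC_FLOOR_MG_PER_L / level
    print(f"{dose_mg} mg -> ~{level:.1f} mg/L, ~{margin:.1f}x below the toxic floor")
```

Even at the largest of these illustrative doses the predicted level stays below the 5 mg/L floor of the quoted toxic range, which is consistent with the authors' claim that the repurposed dose remains within safe limits.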
60,61 Apart from these side effects, quinine has been used for more than 70 years and shows a good safety profile as long as it is used as prescribed and the therapeutic dose is not exceeded. Besides its use as a malaria drug, several studies have revealed that quinine, as a Cinchona alkaloid, has other potential activities, such as antiobesity, anticancer, antioxidant, anti-inflammatory, antimicrobial, and antiviral activities. The side effects that occur with the use of quinine are also reversible and can be overcome by discontinuing quinine. 60,61

Conclusion

QS is an alkaloid compound contained in Cinchona bark. The potential of QS for COVID-19 treatment rests, among other things, on the following: it shares the same basic structure, quinoline, with CQ and HCQ, which can inhibit viral fusion; it is weakly alkaline, so it can increase the pH of cell organelles; it has a higher binding affinity to SARS-CoV-2 than CQ and HCQ; it has antiviral activity against SARS-CoV-2 in vitro; and it has other antiviral activities and acts as an immunomodulator. It is undeniable that QS also has some side effects, but these are reversible and occur mainly with long-term use and large doses. In-vitro studies also indicate that the toxicity profile of QS is better than that of both CQ and HCQ. We conclude that QS has the potential to be developed as a COVID-19 treatment with a better safety profile than that of CQ and HCQ.
Relationship between aqueous humor cytokine level changes and retinal vascular changes after intravitreal aflibercept for diabetic macular edema The aim of this work was to investigate the changes in aqueous humor cytokine levels after intravitreal injection of aflibercept in diabetic macular edema (DME) and to evaluate the relationship between cytokines modifications and central macular thickness (CMT) and retinal/choroidal vascular changes using structural and functional optical coherence tomography (OCT). Aqueous concentrations of 38 cytokines were measured via multiplex bead assay. In addition, spectral domain OCT and OCT angiography with SSADA software (XR Avanti® AngioVue) were performed at baseline and after intravitreal injections. VEGF, IL-6, IL-5, IL-1β, Eotaxin, GRO, IL-12p40, IL-12p70, IL-1RA, Flt-3L and IP-10 showed a statistically significant decrease through the follow-up (p < 0.05; p < 0.001), while Fraktalkine and GM-CSF significantly increased (p < 0.05). Best corrected visual acuity significantly increased and CMT significantly decreased during follow-up (p < 0.001 and p = 0.013). Superficial capillary plexus and deep capillary plexus density significantly increased (p < 0.001 and p = 0.014). A positive relation was found between GRO, VEGF, Fraktalkine, IP-10, IL-12p70 aqueous humor levels and CMT (p < 0.05; p < 0.001). Aflibercept is a primary anti-VEGF treatment producing a decrease of DME due to the reduction of vascular permeability, nevertheless other inflammatory cytokines showed modification after aflibercept intravitreal injections probably related to edema modification or to an interaction of aflibercept with other inflammatory cytokines. Many studies focused on the analysis of aqueous humor cytokines changes after anti-VEGF intravitreal treatment to better understand response and targets of therapy and prognostic factors for a successful management of the disease 10-12 . Aflibercept is one of the primary anti-VEGF treatment for DME. It decreases retinal vascular permeability due to its strong binding with VEGF factor that plays a key role in DME pathogenesis [3][4][5][6][7][8] . The aim of the current study was to investigate changes in aqueous cytokine levels after intravitreal injection (IVI) of aflibercept in patients suffering from DME and to evaluate the relationship between cytokines modifications and functional and anatomical parameters. Results 20 eyes of 20 patients affected by type 2 diabetes with non-proliferative DR (NPDR) and DME were examined. 20 eyes of 20 healthy subjects were evaluated as controls. The demographic and clinical characteristics of the DME group are summarized in Table 1. Anatomical and functional parameters. Central macular thickness (CMT) significantly decreased during the whole follow-up (p = 0.003), from 429.5 µm at baseline to 256.0 µm at 150 days postoperatively (Table 2). On parallel, best-corrected visual acuity (BCVA) significantly increased during 5-month follow-up (p < 0.001), particularly at 60 days post IVI compared to the previous time point. Parafoveal superficial capillary plexus density (SCPD) and deep capillary plexus density (DCPD) showed a significant increase (p < 0.001 and p = 0.019; Table 2), while choriocapillaris density (CCD) did not significantly modify during the whole study. Two out of the 20 eyes developed posterior vitreous detachment (PVD) during follow-up, one eye after the first injection and the second eye after the third injection. Aqueous cytokine levels. 
A total of 38 cytokines were analyzed from aqueous humor samples of the 20 eyes undergoing IVI and of the 20 control eyes undergoing cataract surgery. Only samples with measurable cytokine concentrations at baseline, at 4 weeks after the second injection and after the fourth injection were included in the analysis. Cytokine concentrations at the different follow-up times are reported in Table 3.

Table 3. Aqueous cytokine levels at baseline and during the follow-up. Cytokine levels are expressed as median and interquartile range (IQR). * Friedman test evaluating variation in cytokine levels over time; † p < 0.05, ‡ p < 0.01, pairwise post-hoc analysis vs the previous measurement. Bolded p-values were significant after FDR correction. # Baseline vs controls.

For some cytokines, the concentration values were at the lowest test sensitivity threshold; therefore exact concentration values are not presented. The excluded cytokines were IL-1α, IL-2, IL-9, TGFα, IFNγ, sCD40L and TNFα. A statistically significant difference between controls and subjects with diabetes at baseline was observed in aqueous humor levels of the following cytokines: IL-10, IL-12p70, IL-12p40, IL-15, EGF, FGF-2, VEGF, MIP-1β, MCP-3 and MDC (p < 0.05; p < 0.001; Table 3). VEGF levels showed a significant reduction from baseline to the last postoperative control (120 days) (p = 0.027; Table 3).

A positive and significant relationship was found between the absolute variation of GRO, VEGF, Fraktalkine, IP-10 and IL-12p70 aqueous humor levels at 120 days and the absolute variation of CMT (p < 0.05; p < 0.001; Table 4). A negative and significant relationship was found between the absolute variation of VEGF aqueous humor levels and the absolute variation of BCVA (p = 0.041; Table 4), while a positive and significant relationship between the absolute variation of IL-1RA and BCVA was detected (p = 0.025; Table 4). No statistically significant relationships were found between the absolute variation of cytokine levels and the absolute variation of foveal and parafoveal DCPD (data not shown). Aqueous levels of VEGF, IL-1β and IP-10 were significantly and negatively related to parafoveal SCPD (p < 0.05, p < 0.001; Table 4). On the contrary, IL-6 and IL-5 were significantly and positively related to parafoveal SCPD (p < 0.05; Table 4).

Table 4. Univariate linear mixed model analyses between the absolute variation of aqueous cytokine levels and anatomical and functional parameters. Abbreviations: BCVA = best corrected visual acuity; CMT = central macular thickness; SCPD = superficial capillary plexus density; b = regression coefficient adjusted for age, gender and duration of diabetes; SE = standard error. Bolded p-values were significant after FDR correction.

Discussion

Aflibercept is considered a first-line therapy for central-involved DME and has been demonstrated to produce a higher gain in visual acuity at 1-year follow-up than bevacizumab and ranibizumab 13 . It specifically targets retinal endothelial cells (REC), decreasing their permeability through its blockade of VEGF-A, which has been reported to be higher in DR patients at all stages 14 . However, some patients do not respond to anti-VEGF treatment, suggesting that other inflammatory factors are probably involved in DME pathogenesis 15 . The current study evaluated the influence of anti-angiogenic therapy using aflibercept on different cytokines in the aqueous humor of DME patients.
VEGF and several inflammatory cytokines have been related to DME development, although the real mechanism of each molecule is still unclear 15 . Several cytokines showed modifications after IVI of anti-VEGF, probably related to interaction of the drug with other possible contributors to DME. The VEGF-induced effects are supposed to be influenced and mediated by other cytokines with a cascade mechanism 15 . It has already been reported that in vitro VEGF-induced REC proliferation can be prevented from anti-VEGF therapies. Aflibercept can also bind placental growth factor (PIGF) as well, conversely to bevacizumab and ranibizumab, with a higher capability to inhibit proliferation and migration processes of REC. Moreover, the restoration of REC barrier was obtained with lower concentrations of aflibercept compared to ranibizumab 4 . Funatsu et al. 4 reported that VEGF, ICAM-1, IL-6 and MCP-1 in the vitreous fluid were statistically significantly higher in patients suffering from diabetes than those without diabetes and found higher levels in severe DME, defined as hyperfluorescent, than in mild fluorescent DME characterized by a reduced fluorescein leakage at the macula. In addition, all these molecules showed a significant overall correlation with CMT but only VEGF and ICAM-1 were significant associated with the level of DME severity. Aqueous levels of VEGF have been previously reported to be significantly correlated with IL-6 concentration in aqueous humor 4 . IL-6 is an inflammatory cytokine involved in enhancement of vascular permeability in DME. Aqueous levels of IL-6 have been found significantly higher than those in plasma 4 . However, it has not been found any significant association between IL-6 and severity of DME 4 . In our study, VEGF showed a significant reduction after aflibercept treatment with decreasing values after subsequent injections. Our data described a significant reduction of IL-6 after treatment and no relationship between IL-6 and CMT as well. Moreover, a significant relationship between IL-6 and SCPD in the parafoveal area was reported supporting its role in retinal microvascular damages 4 . Higher level of inflammation-induced molecules such as IP-10 and MCP-1 were observed in subjects with diabetic maculopathy 16 . IP-10 is a chemokine secreted by monocytes, endothelial cells and fibroblasts, which enhances the T-helper type 1 immune reactivity and has been reported to inhibit angiogenesis 17 . IP-10 has been positively related with increased VEGF levels in patients with DR 18 . MCP-1 has been identified as an inducer of endothelial cell chemotaxis in vitro and as a mediator of inflammatory angiogenesis in vivo 16 . A significant decrease of IP-10 and MCP-1 were not found after anti-VEGF injections, but only after corticosteroids as Yu's et al. reported, suggesting different targets and pharmacokinetics of the two therapies 16 . On the contrary, Shiraya et al. found significant reduction of IP-10 after ranibizumab intravitreal injection for DME, correlating with the decrease of central retinal thickness 19 . In our study IP-10 decreased significantly during follow-up and was positively related to CMT. IL-1β is a pro-inflammatory and pro-neovascularization molecule that has previously been described to be significantly involved in retinal microvascular damages of diabetic maculopathy disease 20 . A significant reduction of IL-1β during the treatment period was found in our sample. 
Moreover, our study reported significant changes of Eotaxin and GM-CSF levels in aqueous humor after anti-VEGF therapy. The reduction of Eotaxin concentration has already been observed in Shiraya et al. 's 19 study after ranibizumab injections. Lower levels of Eotaxin have been considered as a prognostic factor for therapy response 19 . GRO is a chemokine belonging to the CXC family, which recruits neutrophils and basophils and is involved in the action of inflammation and angiogenesis 21 . Increased values of GRO level were found in the plasma of patients with diabetes compared to the controls and in the vitreous of patients with proliferative diabetic retinopathy (PDR) 22 . In our study GRO significantly reduced during treatment and was directly related to CMT. Fractalkine, the sole member of the CX3C chemokine family, is an angiogenic mediator in vitro and in vivo and has been found elevated in patients with PDR 23 . In our study Fractalkine was significantly related to CMT, but significantly increased during follow-up after repeated aflibercept injections. The significance of this result should be better explored. IL-5 is a B-cell-proliferating factor. Concentration of IL-5 has been reported higher in DR patients compared to patients suffering from diabetes without retinopathy and among different stages of DR IL-5 has been found higher in PDR than NPDR 24 . This finding has suggested a role of IL-5 in the development of retinal neovascularization in patients with DR. In our cases the IL-5 resulted reduced after treatment with continuous decrease after repeated injections. This is in agreement with previous studies that demonstrated a reduction of IL-5 after IVI of anti VEGF, such as ranibizumab and bevacizumab related to reduction of retinal thickness 25 . IL-12p40 has been found increased in PDR patients compared to normal controls and has been hypothesized to have a role in the inhibition of retinal neovascularization 26 . In our study IL-12p40 continuously decreased during follow-up after aflibercept injections. It has been shown that intraocular cytokine concentration may change after intravitreal injection if a posterior vitreous detachment occurs 27,28 . Our series showed the occurrence of posterior vitreous detachment in 2 out of 20 eyes during follow-up that could have partially caused the detection of lower levels of cytokines concentration. In our work retinal vessel density changes were also studied to analyze the action of aflibercept on anatomical parameters of the patients. A significant increase of superficial and deep vessel density was observed during follow-up. Retinal capillary vessels increase in the superficial and deep plexuses was probably due to a vascular rearrangement after edema reduction or disappearance as shown by central retinal thickness decrease. A contribution to vessel density changes was also probably due to vessel caliber changes related to anti-VEGF treatment 29 . Recently, a prospective cohort study reported the significant association between the level of VEGF at baseline and the anatomic response in terms of center-involving DME improvement after the intravitreal treatment 12 . Correlations analyses between cytokines and vessel density showed that reduction of aqueous levels of VEGF, IL-1β and IP-10 during follow-up was significantly related to an increase of parafoveal superficial vessel density possibly due to a reduction of macular edema after anti-VEGF treatment or to a direct action of anti VEGF on vessel caliber. 
On the contrary, the reduction of aqueous level of IL-6 and IL-5 during follow-up were significantly related to a reduction of superficial parafoveal vessel density. The latter result is not easily understandable considering the reduction of macular edema; it can be hypothesized that an interaction among cytokines and particularly between these two cytokines and VEGF could lead to a direct action on vessel caliber causing variation of vessel density. During repeated injections there were cytokines (VEGF, IL-5, IL-1β, Eotaxin, IL-12p70, IL-12p40, IP-10 and IL-1RA) decreasing after the first injections (first 2 injections) that continued to decrease significantly through the follow-up, cytokines (IL-6, GRO) that slightly increased after the first 2 injections and decreased significantly thereafter through the follow-up compared to baseline values, while other cytokines (Fraktalkine and GM-CSF) increasing after the first 2 injections that significantly increased with subsequent injections. The behavior of aqueous humor cytokines was consistent with the continuous central retinal thickness reduction and retinal vessel density increase during follow-up. Roh et al. demonstrated the continuous VEGF reduction after consecutive anti-VEGF injections in age related macular degeneration that paralleled the anatomical and functional changes 30 . The association between continuous variation of cytokines and continuous anatomical variation during treatment confirms the efficacy of repeated injections in retinal pathologies such as DME and seems to be essential to prevent possible relapses. This study has some limits, such as the small sample size and a short follow-up suggesting that a wider period would be needed to investigate long term changes of cytokines in aqueous of patients suffering from DME and to better understand the timing of therapy efficacy. In addition, this work did not investigate samples of proliferative diabetic retinopathy cases but restricted the analysis only in patients with NPDR and DME. In conclusion our study showed that in DME patients morphological parameters such as retinal thickness and retinal vessel density significantly improved after aflibercept injections during a 5-month follow-up period. Functional improvement in terms of BCVA increase was also observed during follow-up, although visual acuity improvement did not reflect the anatomical improvement in terms of central macular thickness at the end of follow-up. This could be probably related to the morphological status of the central retina. It has been found that integrity of the ellipsoid band and the preservation of the external limiting membrane are predictors of functional improvement after treatment 31 . Of interest among different correlations between cytokines and anatomical and functional results was the positive relationship between absolute variation of VEGF and absolute variation of central macular thickness and the negative relationship between absolute variation of VEGF levels and absolute variation of BCVA. Moreover, superficial vessel density was significantly inversely related to VEGF. Subjects and design. Twenty eyes of 20 type 2 diabetes mellitus and diabetic retinopathy patients with NPDR, according to the simplified version of the ETDRS classification and complicated by macular edema (DME group) (11 males; 9 females; mean age of 63.4 ± 7.3 years) were enrolled in the study. If both eyes of a patient met the inclusion/exclusion criteria, the eye with higher CMT was selected as the study eye. 
Twenty eyes of 20 healthy subjects without any ocular disease except cataract, and about to undergo cataract surgery, were considered as controls (control group) (12 males; 8 females; mean age of 60.9 ± 7.3 years). The enrollment period for both groups was between June 2016 and January 2017 at the University "G. d'Annunzio", Chieti-Pescara, Italy. All patients of the DME group were treatment naïve. They underwent 5 consecutive IVI of 2 mg aflibercept (Eylea, Regeneron Pharmaceuticals), 30 days apart from each other. All subjects enrolled in the study were assessed for DR using color fundus photography, fluorescein angiography (FA) and spectral domain optical coherence tomography (SD-OCT), and were evaluated with a comprehensive ophthalmologic examination. This prospective study adhered to the tenets of the Declaration of Helsinki and was approved by the Ethics Committee of the Department of Medicine and Science of Aging, University "G. d'Annunzio", Chieti-Pescara, Italy (LED, n° 05, March 2016). Written informed consent was obtained from the subjects after explanation of the nature and possible consequences of the study. Criteria for inclusion were: (1) age >18 years; (2) BCVA greater than 0.5 LogMAR in the study eye at the baseline examination; (3) presence of recent DME; (4) CMT > 300 µm as measured using SD-OCT at the baseline examination.

SD-OCT angiography with XR Avanti and vascular layer segmentation. SD-OCT (XR Avanti®; Optovue, Inc., Fremont, CA, USA) and OCT angiography (OCTA) with SSADA software (XR Avanti® AngioVue) were performed in all participants at baseline and 4 weeks after each injection. OCTA scans were acquired following a standardized protocol based on the SSADA algorithm (version 2017.1.0.144), as previously described 32 . Vascular retinal layers were visualized and segmented, as previously described, into the superficial capillary plexus (SCP), deep capillary plexus (DCP) and choriocapillaris (CC) 33 .

Quantitative vessel analysis. Objective quantification of vessel density was carried out for each eye using the SSADA software. A quantitative analysis was performed on the OCTA en-face images for each eye using AngioVue software, as previously described 32 .

Surgical procedure and sample collection. The DME group underwent IVI of 2 mg (0.05 mL) of aflibercept using the standard injection procedure in the operating room. For each patient an aqueous sample was collected at baseline before the first injection (T0), at 60 days (T1) before the third IVI and at 120 days (T2) before the fifth IVI, and during the cataract surgery procedure for the control group, by aspirating 0.05-0.1 mL of aqueous using a sterile syringe with a 30-gauge needle at the temporal limbus. Aqueous humor samples were rapidly frozen at −80 °C until assayed.

Aqueous humor sampling and analysis of cytokines. Aqueous humor samples were used to quantify the production of 38 cytokines using the Milliplex Human Cytokine/Chemokine Magnetic Bead Panel (HCYTMAG-60K-PX38, Millipore, Billerica, MA) according to the manufacturer's protocols. This approach allowed for the simultaneous measurement of the following human analytes: EGF, FGF-2, Eotaxin, TGF-α, G-CSF, Flt-3L, GM-CSF, Fractalkine, IFNα2, IFNγ, GRO, IL-10, MCP-3, IL-12p40, MDC, IL-12p70, IL-13, IL-15, sCD40L, IL-17A, IL-1RA, IL-1α, IL-9, IL-1β, IL-2, IL-3, IL-4, IL-5, IL-6, IL-7, IL-8, IP-10, MCP-1, MIP-1α, MIP-1β, TNFα, TNFβ, VEGF.
Briefly, undiluted aqueous humor samples (25 µl neat per well) were thawed and mixed well by vortexing before being added to 25 μl of Assay Buffer. Then, 25 μl of magnetic beads coated with specific antibodies were added to this solution and incubated overnight at 4 °C with shaking. At the end of the incubation, the plate was washed twice with Wash Buffer and incubated for 1 hour with 25 μl of biotinylated Detector Antibody at RT. Then, the plate was incubated for 30 minutes with Streptavidin-Phycoerythrin at RT, washed twice, and incubated with 150 μl of Sheath Fluid for 5 minutes at RT. The plate was run immediately on a Luminex® 100™/200™ platform (Luminex Corporation, Austin, TX) with xPONENT 3.1 software. Standard curves for each analyte (in duplicate) were generated using the reference standards supplied with the kit. Analyte concentrations in each sample were determined with a 5-parameter logistic curve. Final concentrations were calculated from the mean fluorescence intensity and expressed in pg/mL. The assay was performed in a 96-well plate, using all the assay components provided in the kit. All incubation steps were performed at room temperature and in the dark to protect the beads from light. Sample values below the lower limit of detection were treated as out-of-range (OOR); when OOR values exceeded 30% of the total samples for an analyte, that analyte was excluded.

Main outcome measures. BCVA, CMT, and SCPD, DCPD and CCD in the foveal and parafoveal areas were evaluated at baseline and 30 days after each IVI, for a total follow-up period of 150 days, in all patients. Aqueous humor levels of 38 cytokines were analysed at baseline (before the first IVI), at 60 days (before the third IVI) and at 120 days (before the fifth IVI) for the DME group, and during the cataract surgery procedure for the control group. Relationships between the variation of morphological and functional parameters and aqueous humor cytokine levels were evaluated in all patients.

Statistical analysis. A Shapiro-Wilk test was performed to evaluate departures from normality for each variable. A Mann-Whitney U test was performed to evaluate differences in cytokine levels between controls and cases at baseline. The non-parametric Friedman test was performed to evaluate differences in each parameter over time, from baseline to the 150-day measurement. Pairwise post-hoc analysis was then performed using a Wilcoxon-Nemenyi-McDonald-Thompson symmetry test. The relationships between the absolute variation of aqueous cytokine levels and the absolute variation of morphological and functional parameters that were significant at univariate analysis were estimated by linear mixed regression models, adjusted for age, gender and history of diabetes. The number of type I errors was controlled by applying the Gavrilov-Benjamini-Sarkar procedure to bound the false discovery rate (FDR) at ≤ 0.05. Statistical analysis was performed using IBM® SPSS Statistics v 20.0 software (SPSS Inc, Chicago, Illinois, USA).
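The step above where bead fluorescence is converted into pg/mL hinges on the 5-parameter logistic (5PL) standard curve. The sketch below shows one common 5PL parameterisation fitted with SciPy and then inverted to read off a sample concentration; the standard-curve numbers, starting values and the sample MFI are synthetic placeholders, not data from this assay.

```python
# Minimal sketch of a 5-parameter logistic (5PL) standard curve of the kind
# used to turn median fluorescence intensity (MFI) into pg/mL. All numbers
# below are synthetic placeholders, not values from this study.
import numpy as np
from scipy.optimize import curve_fit

def five_pl(x, a, b, c, d, g):
    """5PL response: d + (a - d) / (1 + (x / c)**b)**g, with x = concentration."""
    return d + (a - d) / (1.0 + (x / c) ** b) ** g

# Synthetic standards: known concentrations (pg/mL) and measured MFI.
conc = np.array([3.2, 16.0, 80.0, 400.0, 2000.0, 10000.0])
mfi = np.array([39.0, 115.0, 481.0, 2017.0, 6010.0, 10003.0])

params, _ = curve_fit(five_pl, conc, mfi,
                      p0=[20.0, 1.0, 1000.0, 12000.0, 1.0], maxfev=20000)

def mfi_to_conc(y, a, b, c, d, g):
    """Invert the fitted 5PL to interpolate a sample concentration."""
    return c * (((a - d) / (y - d)) ** (1.0 / g) - 1.0) ** (1.0 / b)

print(f"sample at MFI 1200 -> ~{mfi_to_conc(1200.0, *params):.0f} pg/mL")
```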
Targeting Interleukin-6 (IL-6) Sensitizes Anti-PD-L1 Treatment in a Colorectal Cancer Preclinical Model Background Limited efficacy of immune checkpoint blockades was observed in clinical trials in colorectal (CRC) patients, especially in the microsatellite-stable patients. Interleukin-6 (IL-6) is critical in modeling immune responses in cancers. However, the effects of targeting IL-6 in combination with immune checkpoint blockades is unknown in CRC. Material/Methods In the present study, we investigated the profile of IL-6 expression in tumor tissues of CRC patient and we established CRC mouse models with various IL-6 expression levels using CT26 cells and MC38 cells. Effects of anti-IL-6 and anti-PD-L1 combination treatment were tested in these models. Results A total of 105 CRC patients were included in this study, with 41 (39%) females and 64 (61%) males. Sixty patients showed IL-6 high expression and 45 patients showed IL-6 low expression. The patients with IL-6 high expression tended to have shorter survival (median survival time of 25.5 months) than the patients with IL-6 low expression (median survival time of 46 months, P value=0.013). In the CRC mouse models, tumors with IL-6 overexpression tended to grow faster than the tumors with IL-6 knockout. The numbers of CD8+ T cells and CD4+ T cells were decreased in IL-6 overexpressed tumors. On the contrary, myeloid-derived suppressor cells and regulatory/suppressor T cells were more numerous in tumors with IL-6 overexpression. PD-L1 expression was upregulated in the tumors with IL-6 overexpression. Importantly, an IL-6 blockade reversed the anti-PD-L1 resistance and prolonged tumor-bearing mouse survival. Conclusions Our study indicates that IL-6 induces strong immunosuppression in the CRC microenvironment by recruiting immunosuppression cells and impairing T cell infiltration. Inhibition of IL-6 enhanced the efficacy of anti-PD-L1 in CRC, providing a novel strategy to overcome anti-PD-L1 resistance in CRC. Background Tumor-specific adaptive immunity, such as cytotoxic T lymphocyte (CTL) response, can result in promising anti-tumor effects in human malignancies [1,2]. Immune checkpoints are either inhibitory or stimulatory pathways hardwired into the immune system for maintaining self-tolerance and modulating the duration and amplitude of immune responses [3,4]. In cancers, the inhibitory immune checkpoints, such as cytotoxic T-lymphocyte-associated protein 4 (CTLA-4) and programmed death 1 (PD-1) limit the anti-tumor immune response, thereby facilitating tumor progression [3,4]. Immune checkpoint blockades (ICBs) blocking the immune inhibitory pathways have shown promising effects in certain late-stage cancers [5,6]. However, in colorectal cancer (CRC), especially in the microsatellite-stable CRC subtype, their efficacy is limited [7,8]. IL-6 is a pleiotropic cytokine that is involved in tumor growth, invasion, and metastasis [9][10][11]. It is known that IL-6 is a pivotal modulator in the initiation of prostate tumorigenesis, tumor growth, metastasis, and resistance to chemotherapy [12]. In pancreatic cancer, IL-6 is necessary for pancreatic intraepithelial neoplasia (PanIN) maintenance and progression [13]. In CRC patients, the serum levels of IL-6 are elevated in CRC patients with advanced tumors and poor prognosis, suggesting that IL-6 facilitates CRC development [14,15]. However, the immunoregulatory roles of IL-6 and potential therapeutic effects of blocking IL-6 are unknown in CRC. 
In the present study, we aimed to validate the prognostic value of IL-6 in a Chinese CRC patients cohort and to elucidate the immunomodulatory effects of IL-6 in CRC. More importantly, we sought determine if blocking IL-6 would sensitize CRC tumors to the existing ICBs, thereby reducing drug resistance in CRC patients. Patient samples We collected a total of 105 formalin-fixed and paraffin-embedded (FFPE) CRC tissue samples. All patients were diagnosed from April 2011 to June 2012 at the Cancer Hospital of China Medical University. The samples were collected during the surgery. The follow-up started from the date of the surgery and ended in June 2016. Survival time was defined as the interval between the date of diagnosis and the date of death or the end of follow-up. This study was approved by the local Ethics Committee of Cancer Hospital of China Medical University. All patients signed the informed consent before surgery. Immunohistochemistry Expression of IL-6 in the FFPE tumor tissue sections was evaluated by standard immunohistochemistry (IHC). Briefly, the slides were deparaffinized with xylene and rehydrated with ethanol. Then, they were submerged in the 10 Mm citric acid buffer (Sigma-Aldrich, MO, USA) and heated in a microwave oven for 20 min for antigen retrieval. After cooling to room temperature, the slides were incubated with 3% hydrogen peroxide for 20 min and then incubated with 5% bovine serum albumin buffer (these 2 steps were both performed at room temperature). The primary antibody of IL-6 (Santa Cruz, CA, USA) was diluted by 1: 100 and added to the slides for incubation overnight at 4°C. Horseradish-peroxidase-conjugated secondary antibody (Santa Cruz, CA, USA) was diluted to 1: 1000 and added to the slides for incubation for 1 h at room temperature. The stain was developed using DAB (Abcam, MA, USA). Counterstaining was performed using hematoxylin. PBS replaced the primary antibody in the negative control. All the slides were observed under a microscope by 2 pathologists independently and without knowing the aim of this study and the clinical features of the patients. The slides with more than 30% IL-6 positive cells were defined as having IL-6 high expression. Overexpression and knockout Expression of IL-6 was regulated in CT26 cells and MC38 cells. Lentivirus IL-6 overexpression vector (Il6 Mouse ORF Clone Lenti Vector, Origene) and CRISPR vector (Il6 -mouse gene knockout kit via CRISPR, Origene) were used. We used the corresponding empty vector as a transfection control. HEK293T cells were transfected with vectors for amplification of the lentivirus. Then, CT26 and MC38 cells were infected with the harvested lentivirus. A standard lentivirus packaging and infection protocol was followed (from Origene). IL-6 expression was confirmed by flow cytometry. Animal model BALB/c mice and C57BL/6J mice (6-week-old, 20-22 g, male) were used to establish CT26 and MC38 syngeneic mouse models, respectively. All the mice were obtained from Shanghai SLAC Laboratory Animal Center of the Chinese Academy of Sciences, China. Before inoculation, cells were harvest from cell culture flasks with medium removed and counted after trypan blue staining. Then, the cells were washed with PBS and resuspended by Matrigel to a concentration of 1×10 6 cells/100 μl. Afterwards, a 100-μl cell suspension was slowly injected into the right flank of each mouse. Treatment started 1 week after the inoculation. 
We administered 10 mg/kg anti-PD-1 ligand 1 (PD-L1) antibody and 5 mg/kg anti-IL-6 antibody intraperitoneally twice per week. Tumor volume and survival status were measured and recorded regularly. Tumor length and width were measured weekly using calipers, and tumor volume was calculated as: tumor volume = length × width² × π/6. Survival time was defined as the interval between the date of tumor cell inoculation and the date of death or the observation endpoint. Mice with extreme tumor volume (more than 2000 mm³), obvious weight loss (more than 30%), excessive ascites, or any other distress were sacrificed and counted as deaths. All mice were raised in specific-pathogen-free conditions with free access to standard food and clean water.

Flow cytometry

Flow cytometry was performed to classify the tumor-infiltrating immune cells in the CRC mouse models: CD4+ T cells (CD3+CD19−CD4+), CD8+ T cells (CD3+CD19−CD8+), myeloid-derived suppressor cells (MDSCs, CD11b+Gr-1+), and regulatory/suppressor T cells (Tregs, CD3+CD19−CD4+CD25+Foxp3+). All the fluorescence-labeled antibodies were purchased from Biolegend. Standard flow cytometry procedures were followed. Briefly, fresh tumor tissues harvested from the mouse models were chopped into small pieces and digested into single cells with collagenases type I and type IV. Cells were first stained for cell membrane markers and then processed with fixation and permeabilization buffer for cytoplasmic staining. Samples were analyzed using a FACSCanto II machine (Becton Dickinson, NJ, USA). The data were visualized using FlowJo software.

Statistical analysis

GraphPad (CA, USA) and SPSS 17.0 software (Chicago, IL, USA) were used to conduct all the statistical analyses in this study. One-way ANOVA and the t test were performed to evaluate differences between experimental groups. Bonferroni's pairwise comparisons were used to analyze the difference between 2 individual groups after one-way ANOVA. Kaplan-Meier survival analysis and the log-rank test were used to evaluate the difference in survival between colon cancer patients with different IL-6 expression levels. Two-tailed P<0.05 was considered statistically significant.

IL-6 high expression was related to poor survival of CRC patients

To evaluate the role of IL-6 in CRC development, we collected tumor tissues from 105 CRC patients, of whom 41 (39%) were females and 64 (61%) were males. Sixty patients with obvious IL-6 expression were considered IL-6 high. The average age of the patients with IL-6 high expression was 56.4 years and that of the patients with IL-6 low expression was 58.3 years. At the time of CRC diagnosis, 57.8% of the high-IL-6 patients were at an advanced stage vs. 51.7% of the low-IL-6 patients, but the difference was not significant. Other clinicopathological parameters such as tumor size and lymph node metastasis were not correlated with IL-6 expression. The survival analysis indicated that patients with IL-6 high expression tended to have poorer survival than the patients with IL-6 low expression (log-rank test, P value=0.013, Figure 1).
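A minimal sketch of the caliper-based volume formula and the humane-endpoint rule described in the methods above; the measurements in the example are invented for illustration and are not data from the study.

```python
# Caliper tumor volume as defined above: volume = length * width^2 * pi / 6,
# plus the humane endpoints (volume > 2000 mm^3 or > 30% body-weight loss).
# The example numbers are illustrative only.
import math

def tumor_volume_mm3(length_mm: float, width_mm: float) -> float:
    """Ellipsoid approximation commonly used for subcutaneous tumors."""
    return length_mm * width_mm ** 2 * math.pi / 6.0

def endpoint_reached(volume_mm3: float, weight_now_g: float, weight_start_g: float) -> bool:
    """Apply the volume and weight-loss criteria listed in the methods."""
    weight_loss_fraction = 1.0 - weight_now_g / weight_start_g
    return volume_mm3 > 2000.0 or weight_loss_fraction > 0.30

volume = tumor_volume_mm3(length_mm=14.0, width_mm=9.0)  # hypothetical caliper reads
print(f"volume ~{volume:.0f} mm^3, endpoint reached: {endpoint_reached(volume, 19.0, 21.0)}")
```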
The patients with IL-6 high expression had a median survival time of 25.5 months, while the patients with IL-6 low expression had a median survival time of 46 months (P=0.013).

IL-6 overexpression promoted PD-L1 expression in the CRC mouse model

To examine the role of IL-6 in CRC development, we established CRC mouse models using CT26 and MC38 cell lines with altered IL-6 expression: wild-type cell lines (WT), cells transfected with empty vector (IL-6-EV), cells transfected with IL-6 CRISPR (IL-6−/−), and cells transfected with IL-6 overexpression vector (IL-6-OE) (Figure 2A-2C). Tumor growth in these mouse models was recorded (Figure 2D, 2E). The data indicated that the tumors with IL-6 overexpression tended to grow faster than the tumors with IL-6 knockout, but the difference was not significant (Figure 2D, 2E).

IL-6 expression suppressed immune response in the CRC mouse model

We evaluated the numbers of tumor-infiltrating immune cells in CT26 and MC38 tumors, aiming to characterize the immune signature in CRC models with different IL-6 expression levels. Interestingly, we found that the numbers of CD8+ T cells and CD4+ T cells were decreased in IL-6-overexpressing tumors but were increased in the IL-6-deficient tumors (Figure 3). However, the numbers of MDSCs and Tregs were increased in the tumors with IL-6 overexpression and decreased in the tumors with IL-6 deficiency (Figure 4). PD-L1 is a potent inhibitory regulator of anti-tumor immunity. We found that PD-L1 expression was significantly increased in the tumors with IL-6 overexpression (Figure 4F, 4G). Taken together, our results show that IL-6 expression causes compromised anti-tumor immunity in CRC.

Figure 2 legend (fragment): Tumors with IL-6 deficiency (IL-6−/−) grew slightly slower than the tumors with IL-6 overexpression (IL-6-OE) and the control groups, but no significant difference was observed. *** P value less than 0.001; WT - wild-type; EV - empty vector; OE - overexpression; MFI - mean fluorescence intensity; sample size of every experimental group was 10.

Figure 3 legend (fragment): (B, C) The ratio of CD8+ T cells was significantly increased in the tumor tissues with IL-6 deficiency but was decreased in the tumors with IL-6 overexpression. (D, E) The number of CD4+ T cells in tumors with IL-6 deficiency was significantly enhanced but was decreased in tumors with IL-6 overexpression. ** P value less than 0.01; *** P value less than 0.001; **** P value less than 0.0001; WT - wild-type; EV - empty vector; OE - overexpression; sample size of every experimental group was 10 mice.

Anti-IL-6 treatment acted synergistically with anti-PD-L1 treatment in the CRC mouse model

Because IL-6 overexpression promoted PD-L1 expression and inhibited anti-tumor immune activity in CRC, we explored the effect of anti-IL-6 treatment in combination with anti-PD-L1 treatment in both the CT26 mouse model and the MC38 mouse model. Importantly, the results showed that anti-IL-6 treatment or anti-PD-L1 treatment alone did not have an obvious effect in these mouse models (Figure 5). However, combined anti-IL-6 and anti-PD-L1 treatment significantly suppressed tumor growth (Figure 5). Therefore, we believe that anti-IL-6 and anti-PD-L1 act synergistically in inhibiting colon cancer growth.

Discussion

Accumulating evidence from preclinical and clinical studies suggests that inflammatory cytokines and chemokines have critical roles in cancer immunotherapy [16]. In normal conditions, IL-6 is secreted by T-helper cells and macrophages to stimulate immune responses and maintain chronic inflammation [17].
IL-6 is one of the major cytokines in the tumor microenvironment and is an important factor involved in multiple processes of tumor development, such as apoptosis, survival, proliferation, angiogenesis, invasiveness, and metastasis, and, most importantly, the immune response [18]. In this study, we focused on the effects of IL-6 expression on immune regulation of CRC. To understand the roles of IL-6 in CRC, we first correlated IL-6 expression with CRC patients' survival. Patients with IL-6 high expressions had an obvious shorter survival time than the patients with IL-6 low expression. This data agrees with previous reports [15] and suggests that IL-6 high expression has an adverse effects on CRC patients. We further investigated the mechanisms by which IL-6 overexpression can promote CRC progression. Interestingly, overexpression of IL-6 in mouse CRC cell lines did not increase tumor growth dramatically, which made us think that IL-6 produced by CRC tumor cells may function in the tumor microenvironment to promote tumor progression. Anti-tumor immune cells are major components of the tumor microenvironment and control the most vulnerable point of cancer [19]. Intensive infiltration of immune cells, such as T cells and NK cells, is correlated with better survival of CRC patients [20,21]. Using flow cytometry, we found that CD4 and CD8 T cell populations were suppressed by IL-6 overproduction in the tumor tissue. However, the immunosuppressive cells, such as MDSCs and Tregs, were more numerous in IL-6-overexpressed tumors. PD-L1 is a transmembrane protein speculated to play a major role in suppressing the immune system during particular events, such as cancer. High expression of PD-L1 in tumor tissue has potent effects on inhibiting anti-tumor immunity [22]. In our animal model, we showed that IL-6 overexpression upregulated PD-L1 levels in tumor tissues. These data clearly indicate that IL-6 overexpression in CRC promotes tumor progression by inducing immunosuppression. Immunotherapy targeting the immune checkpoints has been approved for the treatment of an expanding list of cancers, including microsatellite instable (MSI) CRC [23]. Clinical trials of PD-L1/PD-1 antibodies in MSI CRC patients have shown favorable results, but only about 15% of CRC patients are MSI subtype [24]. The rest of the microsatellite-stable (MSS) CRC patients showed a very low responsive rate [23]. These clinical observations underscore the compelling need to reverse the non-responsive CRC tumors for better therapeutic responses to immune checkpoint blockades. Previous studies have clearly shown that immune-suppressive factors in the tumor microenvironment are one of the major factors causing immunotherapy failure [25]. Each treatment can only reduce some of these suppressive factors. Thus, releasing more immunosuppressive factors by combining different treatments that target different immunosuppressive mechanisms will be a promising strategy to achieve better responses in immunotherapies. Since we showed that high IL-6 expression could induce a suppressive immune phenotype, we further investigated the feasibility of combining anti-IL-6 with anti-PD-L1 in CRC. Importantly, we noticed that anti-IL-6 treatment was able to reverse anti-PD-L1 resistance in CRC mouse models. Our results and those of previous studies [25]
Multiple myeloma--a case-control study. A total of 399 patients with multiple myeloma and an equal number of match controls were interviewed about factors possibly related to the causes of their disease. Factors studied included occupation, chemical exposure, radiation exposure, prior diseases, immunizations, chronic infections and markers for defects in immune regulation. A strong risk associated with agriculture/food processing was observed (RR = 1.8, P = 0.002). The risk could not be restricted to those exposed to animals or meat products, or those exposed to pesticides. Significant excesses were also noted for reported exposures to chemicals and gases/fumes, but no specific agent or group of agents could be identified. Cases had fewer tonsillectomies above the age of 10 (P = 0.01). A large excess of shingles (herpes zoster) was observed in cases (P less than 0.001), but most of the excess cases occurred within 10 years of diagnosis, suggesting this was a preclinical manifestation of disease rather than a cause of it. Multiple myeloma is one of the more common haematopoietic malignancies, accounting for 20% of all haematopoietic malignancies, but only 1% of all malignancies in England and Wales in 1982. Its reported incidence has been increasing rapidly in most parts of the world (Cuzick et al., 1983) and attempts have been made to determine to what extent these changes reflect more complete ascertainment or increases in the true incidence of disease (Linos et al., 1981;Velez et al., 1982;Turesson et al., 1984). Little is known about the aetiology of myeloma. Several reports have indicated an increased risk associated with farming and agriculture (Milham, 1971;Burmeister et al., 1983;Gallagher et al., 1983;Pearce et al., 1986). Radiation has also been linked to myeloma (Ichimaru et al., 1982;Cuzick, 1981), but this is unlikely to explain very many cases in the general population because of low exposure levels. Various chemicals have been suggested as increasing the risk of myeloma, including asbestos, arsenic, cutting oils, heavy metals, petrochemicals, and materials associated with plastics and rubber manufacture, but none of these observations is secure (see Blattner, 1982, for a review). Increased risks in leather workers (Dorken & Vollmer, 1968;Decoufle et al., 1977) and woodworkers (Brinton et al., 1976;Milham, 1976) have also been reported. Failure of immune regulation is postulated to be important in myeloma, possibly resulting from the effects of chronic antigenic stimulation. It has also been suggested that certain drugs and chemicals known to increase the risk of other non-Hodgkins lymphomas might be relevant to the aetiology of myeloma (see Greene, 1982, for a review). Faced with this wide range of possible causative agents for a disease with few known causes but increasing reported frequency, we have undertaken a broadly based exploratory case-control study. Methods Cases and controls were obtained from six different parts of England and Wales between May 1978 and December 1984. Cases were identified at major referral centres in these areas and the diagnostic criteria were the same as for the then current Medical Research Council's therapeutic trial, namely at least two of the following: (i) Plasma-cell infiltration in marrow smears or sections. (iii) Monoclonal immunoglobulin in serum or urine. Two controls were sought for each case matched for age (±3 years) and sex. 
One control was selected from the general surgical and medical wards of the same hospital as the case, excluding patients with other cancers and other long-standing medical conditions (Hospital Control). A second control was selected at random from the same general practitioner as the case (GP control). The recruitment of GP controls proved unwieldy in London and was abandoned there. The distribution of cases and controls in the different areas is detailed in Table I. After obtaining consent, the interviewer administered a detailed questionnaire, which required 45-60 min to complete, and obtained a small blood sample. Details of previous medical history were confirmed from medical records where possible. As very little is known about the aetiology of this disease, the questionnaire was far ranging and probed into previous occupations, chemical and radiation exposures, prior diseases, immunizations, family history, chronic infections and defects in immune regulation.

Statistical methods

The main methods of analysis were matched and unmatched logistic regression (Breslow & Day, 1981). The results were usually quite similar and the results reported here are based on a matched analysis unless otherwise stated. All significance levels are based on two-sided tests.

Results

A total of 409 cases were interviewed; 399 matched case-hospital control pairs and 260 matched case-GP control pairs were available for analysis. No important differences were found between the analysis of case-hospital control pairs and case-GP control pairs, although the latter were often less significant because of the reduced sample size, even when the relative risk estimates were similar. The age at diagnosis and sex distribution of the cases matched to hospital controls are given in Table II, and the broad diagnostic categories of the hospital controls are shown in Table III. No differences could be found between cases and controls in terms of marital status (Table IV) or social class (Table V). Other social class indicators (age at leaving school and type of present accommodation) were also very similar (data not shown).

Risk according to employment of one year or greater duration in specific industries is shown in Table VI. There is a clear excess in the food processing/agricultural industries (relative risk = 1.8, P = 0.002). Marginal and generally non-significant excesses are observed in the chemical industry (P = 0.03) and amongst individuals involved in asbestos insulation, photography, petroleum and painting. The type of occupation within agriculture/food processing is broken down further in Table VII (when more than one subtype was stated, all were counted). The excess among butchers is of interest, but the numbers are small and otherwise the subgroups do not appear remarkable. When individuals in the food processing/agriculture industry were classified as to whether or not they worked with animals or carcasses, only a very slightly greater relative risk was found in the exposed group (7.8% cases vs. 3.8% hospital controls exposed; 10.0% vs. 6.3% unexposed).
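The matched analysis referred to above reduces, in its simplest 1:1 form, to using only the case-control pairs that are discordant for exposure. The sketch below shows that calculation on invented pair counts; it is not the authors' data and it omits the covariate adjustment that full conditional logistic regression provides.

```python
# Matched-pair (1:1) case-control analysis in its simplest form: the odds
# ratio is the ratio of exposure-discordant pairs, with an exact McNemar-type
# binomial test. The pair counts below are invented for illustration only.
from scipy.stats import binomtest

pairs_case_exposed_only = 36     # case exposed, its matched control not
pairs_control_exposed_only = 20  # control exposed, its matched case not

odds_ratio = pairs_case_exposed_only / pairs_control_exposed_only
n_discordant = pairs_case_exposed_only + pairs_control_exposed_only
p_value = binomtest(pairs_case_exposed_only, n_discordant, 0.5).pvalue

print(f"matched odds ratio = {odds_ratio:.2f}, two-sided P = {p_value:.3f}")
```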
An excess was found with asbestos exposure when hospital controls were used, but this disappeared when GP controls were used (data not shown). Some differences were found in the number of cases who had radiotherapy at least one year before diagnosis of disease, or in a corresponding interval for controls (Table IX). Although the numbers were small, the excess appears only in those treated for previous malignancies. Of the 10 irradiated cases, 8 had their first radiotherapy at least ten years before diagnosis of myeloma, compared to 1 of 2 controls. No important differences could be found in the number of X-rays received, either overall or to specific parts of the body (Table X).

No significant differences between cases and controls could be found in terms of immunization history, as measured by recall of immunizations other than those received in HM services (Tables XI and XII). Further analyses allowing for the number of postings of each individual also showed no differences. Common childhood viral illnesses were also similar between cases and controls, with the exception of shingles and infectious mononucleosis above the age of 20 years (Table XIII). Occurrences of these diseases were ignored if they occurred in the year before diagnosis, but there was some evidence that shingles (herpes zoster) heralded the development of myeloma several years before diagnosis (Table XIV). Because of small numbers, the situation is less clear for infectious mononucleosis, as 6 out of 9 cases occurred at ages greater than 20 years, but only 2 of these occurred within 10 years of diagnosis. Cases had fewer tonsillectomy operations than controls above the age of 10 (P=0.01), but no differences were found before the age of 10 (Table XV). A number of diseases thought possibly to be associated with immune function, including diabetes mellitus, malaria, peptic ulcer, psoriasis, rheumatoid arthritis, rheumatic fever, thyroiditis, and tuberculosis, were recorded, but none showed any relationship with myeloma. A history of asthma, eczema, or allergies was also obtained for subjects and their first degree relatives (Table XVI). No significant differences were found either in well-

Table XIV. Numbers (%) of cases and controls with shingles (herpes zoster) above age 20 according to interval before myeloma diagnosis (or corresponding interval for controls).

Discussion

The most clear cut finding in this study was an approximately twofold risk of multiple myeloma amongst individuals working in agriculture and food processing. This confirms observations related to farming and agriculture based on death certificates by Milham (1971) in Washington, and Burmeister et al. (1983) in Iowa, and two previous smaller case-control studies in British Columbia (Gallagher et al., 1983) and New Zealand (Pearce et al., 1986). Analysis of death certificates in England and Wales has not shown an excess of multiple myeloma in this group of occupations, but it is of interest that an excess of lymphoma and myeloma combined in farmers, farm managers and market gardeners, and an excess of anaemia in food processors, have been observed (Registrar General, 1978). A non-significant excess of myeloma in the food industry reported earlier (Adelstein, 1972) was not apparent in the more recent report. The risk appeared similar in farming/agriculture and food processing and could not be attributed solely to those exposed to animals or meat.
Thus neither the use of pesticides for farming nor exposure to some virus or antigen associated with meat alone can explain these observations. To account for these data, either some other common exposure is needed, multiple factors must be entertained, or the excess in some subgroup(s) must be attributed to chance. Exposure to chemicals was also significantly associated with risk when cases were compared to hospital controls. However, no individual chemicals or groups of chemicals appeared to be specifically implicated, and excess risks were not found amongst individuals exposed to metals or metal dust, resin or glues, solvents, dyes or paints, or oil. Some suggestion of a relationship with asbestos exposure was seen, but it was only significant when hospital controls were used. There were too few exposures to radiotherapy or occupational radiation to be able to discount moderate risks, but it is clear that these exposures were too rare to account for very many cases of myeloma. The excess of cases irradiated for malignant conditions more than 10 years before diagnosis (3 of which were for cervix cancer) parallels the excess seen after 10 years in a large international study of patients irradiated for cervix cancer (Day & Boice, 1983, summary chapter). There was no indication that diagnostic X-rays had any effect on the development of myeloma. Answers to a wide range of questions probing into chronic antigenic stimulation and defective immune response were generally negative and suggest that further research in this area will have to concentrate on more specific features based on prospective biochemical measurements. The finding of an excess of shingles in the 10 years before the diagnosis of myeloma is more likely to be an early preclinical manifestation of myeloma than a cause of it. The deficit of tonsillectomies above the age of 10 in the myeloma patients is more difficult to explain and needs to be confirmed in further studies. In this study two groups of controls were obtained: hospital controls and GP controls. All interviews were conducted in private and the same interviewer always questioned each member of a matched case-control triple. There was some evidence that GP controls reported higher levels of immunizations, childhood illnesses, asthma and allergies than hospital controls, but these differences were marginal and would not modify our conclusions. The similarity of the results when comparing cases to either control group is reassuring with regard to possible selection and recall bias.
Mapping of the Covid-19 Vaccine Uptake Determinants From Mining Twitter Data
Opinion polls on vaccine uptake clearly show that Covid-19 vaccine hesitancy is increasing worldwide. Thus, reaching herd immunity depends not only on the efficacy of the vaccine itself, but also on overcoming this hesitancy in the population. In this study, we revealed the determinants regarding vaccination directly from people's opinions on Twitter, based on the framework of the 6As taxonomy. Covid-19 vaccine acceptance depends mostly on the characteristics of new vaccines (i.e. their safety, side effects, effectiveness, etc.) and the national vaccination strategy (i.e. immunization schedules, quantities of vaccination points and their localization, etc.), which should focus on increasing citizens' awareness, among various other factors. The results of this study point to areas for potentially improving mass campaigns of Covid-19 immunization to increase vaccine uptake and its coverage, and also provide insight into possible directions of future research.
I. INTRODUCTION
According to current knowledge, mass vaccination is the only way to contain the spread of the SARS-CoV-2 virus, the cause of the Covid-19 pandemic. To bring this pandemic to an end, a large proportion of the world needs to be immune to the SARS-CoV-2 virus. Herd immunity is a key concept for pandemic control and its extinction [9]. However, to achieve herd immunity and cut the transmission chain, using a vaccine with a claimed 95% efficacy, we need to vaccinate at least 63% to 76% of the population [7]. This required vaccine coverage is certainly very high, and may not be easily attained for many reasons. This is a huge challenge not only for pharmaceutical companies and finite healthcare resources, but also for government agencies and regulatory authorities [8], [9], [31]. Reference [10] highlighted the role of vaccination programmes, which must be effective and widely adopted. The observed poor uptake of vaccines in the population makes it difficult to limit the negative impact of Covid-19 on health worldwide. Statistics show that the percentage of citizens who have received at least one dose of the vaccine in the European Union (EU) is around 50% [6]. Some countries exceed this average, such as Germany (53%) and Finland (almost 60%); however, vaccination rates are significantly off target. While, previously, the biggest problem with the vaccination program was low supply, today it is low demand. Many people do not want to be vaccinated. Despite the fact that governments are taking a wide range of measures in response to the Covid-19 outbreak, effective ways to encourage citizens to vaccinate are hard to find. To achieve the goals of the vaccination policy, in addition to overcoming the logistical and supply challenges, it is extremely important to counteract the reluctance to vaccinate, which is steadily growing. Vaccine hesitancy is a complex issue driven by a mix of demographic, social and behavioral factors. Determinants concerning vaccine uptake are complex and context-specific, as they vary according to the time, place and severity of the disease and the vaccine characteristics [5].
Many reviews have focused on the classification of possible determinants of vaccine aversion and the wider uptake of different vaccines, for example, the uptake of the influenza vaccine by older people [3], the tetanus/diphtheria/polio vaccine for children [4] or childhood vaccines up to the age of 7 [5]. In the face of the current Covid-19 pandemic, a pragmatic methodology (beyond questionnaire experiments) is needed to reach the main determinants of Covid-19 vaccine acceptance, and such a methodology is lacking in the literature. For that reason, this study aims to fill this gap. The study is based on text data obtained from Twitter regarding vaccines in Poland. By applying (i) the 5As taxonomy model and (ii) a bottom-up approach during data analysis, the mining of tweets from the public discussion provided the topics; after further analysis, a set of key determinants of vaccine uptake was obtained, and the model was expanded with another dimension, labeled Assurance, thus forming the 6As. The proposed approach (i) examines the main determinants of vaccine uptake, (ii) identifies possible root causes of non-vaccination, (iii) outlines the relevance of the determinants for citizens' perceptions, and (iv) can support the subsequent design of robust and evidence-based interventions by governments. Reaching the main determinants of vaccine uptake can help with designing and targeting vaccination strategies, in order to gain extensive acceptance in the population. This is a key path to ensuring a fast exit from the Covid-19 pandemic. The main contributions of this paper are (i) the identification of an additional sixth dimension in the 5As taxonomy, labeled Assurance; (ii) a preliminary proof-of-concept of the 6As; (iii) a validation of the usability of textual data from public discussions in identifying and classifying the determinants of vaccine uptake; and (iv) the development of a bottom-up methodology for the examined issues. The remainder of this paper is structured as follows. First, we review the background and relevant literature in Section 2. Section 3 introduces the research methodology. Section 4 presents the empirical results obtained in the study, with a discussion of the findings and implications. Finally, Section 5 concludes the study.
II. THEORETICAL BACKGROUND AND RELATED STUDIES
Vaccine 'hesitancy' is an emerging term in the scientific literature and public discourse (i.a. social media) on vaccine decision-making and the determinants of vaccine uptake. The reasons behind decisions to refuse or delay vaccination are varied and context-specific; thus there is no single form that vaccine hesitancy takes [11]. According to [2], the acceptance of and adherence to public health recommendations by the population depend largely on the way people perceive a threat. The study of [12] revealed a comprehensive list of concerns related to the Covid-19 immunization of people who do not wish to be vaccinated. Respondents most frequently reported: lack of proper testing of vaccines (74.1%), vaccine adverse effects (65.1%), lack of vaccine effectiveness (44.9%) and improper transport/storage of vaccines (14%). However, the results of campaigns to encourage vaccination are not only dependent on vaccine efficacy and safety.
Effective communication campaigns are needed, based on transparency and focusing on restoring trust in authorities, the government and medical professionals [14]. According to [13], vaccine acceptance among the general public and healthcare workers plays a crucial role in the successful control of the pandemic. We can consider immunization programs to be effective when there are high rates of coverage and acceptance in the population [15]. To achieve this, detecting the determinants of Covid-19 vaccine acceptance is crucial. Reference [16] distinguished the determinants of Covid-19 vaccine acceptance, based on textual data collected from Weibo, a crucial public opinion platform in China. The main determinants of Covid-19 vaccine acceptance in China included the price and side effects. In turn, the study of [17] aimed to assess the prevalence of the acceptance of the Covid-19 vaccine, and the determinants of this among people in Saudi Arabia. By usage of a questionnaire, the researchers found perceived risk and trust in the health system to be significant predictors of the uptake of the Covid-19 vaccine. The work of [18] focused on examining Covid-19 vaccine acceptance rates in Russia. The study identified a wide range of factors associated with Covid-19 vaccine uptake, which were grouped into the following main areas: sociodemographic and health-related characteristics, cues to action, perceived benefits and barriers. When the vaccine was proven to be safe and effective, the rate of vaccine acceptance increased. Moreover, gender and income significantly influenced the acceptance rates. Whereas [19] examined the individual, communication, and social determinants associated with vaccine uptake. Their study identified ethnicity, risk perceptions, exposure to different media for Covid-19 news, party identification, and confidence in scientists as factors that would affect Covid-19 vaccine uptake. A review of previous research on vaccine uptake (see: Table 1) indicates that this phenomenon is increasingly gaining academic attention. Facing the fast-paced dynamic of the coronavirus pandemic, researchers use the different environments to collect data and use a variety of methods for data analysis. The rapid and easily accessible environment of social media, here namely Twitter, is popular and very often used to gain international insights into public opinion on the Covid-19 vaccination. However, a lack of research dedicated to the usage of the 5As framework is clearly visible. The studies previously referred to above provide evidence that vaccine uptake may be determined by a complex mix of demographic, social and behavioral factors. To order these factors, the present study was based on the 5As taxonomy according to [1]. They identified the determinants of vaccine uptake as 5As dimensions: Access, Affordability, Awareness, Acceptance and Activation. Determinants extracted from a systematic literature review had been assigned to each dimension, and this approach facilitated their understanding. Their study proved that the 5As taxonomy captured all the identified determinants of vaccine uptake. Therefore, in this study we decided to use this framework in our methodology to develop a structured classification. A sixth dimension, labeled Assurance, was uncovered during the empirical stage of this study. Table 2 includes a definition for each of these six dimensions. 
By knowing the major determinants of vaccine uptake, actions can be better tailored to effectively improve the success rate of the vaccination program. III. METHODOLOGY This section provides the research methodology adopted in the current study. Section A aims to presents the method of data collection. Section B describes data analysis. Section C explains the bottom-up approach taken in the present study. A. DATA COLLECTION AND PREPARATION The starting point in the empirical part of the study were textual data obtained from Twitter. Discussions between users on Twitter, which constitute opinions, insights and comments on vaccines, are valuable material that, after appropriate processing, will provide new knowledge. A scraping of Twitter data was conducted via QDA Miner software, using the keywords: , with the period between 1 st to 30 th May 2021. This query was designed to obtain a broad spectrum of data from discussions among Twitter users about vaccinations and vaccines. We collected in total 125 495 tweets only in Polish. The Polish language is so unique that it is not used outside of Poland. The assumption of focusing only on Polish tweets was aimed at: (i) selecting only one country for evaluation as a case study; (ii) providing access to discussions regarding homogeneous government regulations on vaccination; and (iii) guaranteeing the relative universality of the results for other European Union countries, given that Poland is also a member. After collecting the data, we performed the pre-processing steps. Tweets in a language other than Polish were deleted, duplicate or empty tweets were removed, and finally, we obtained a set of 105 849 tweets ready for further data analysis. B. DATA ANALYSIS First, topic modeling was performed to extract the latent topics in the tweet data using the QDA Miner software. A 33-topic model was found to be optimal in terms of the average semantic coherence of the model. As a result of this phase, we obtained topics, described by top-weighted keywords. Next, an iterative process of topic labeling was performed. Second, we employed coding to identify relevant interactions between the topics and then aggregate them into higher-order concepts (categories of determinants). The topics were coded and classified under each dimension of the As framework. For example, the tweet extract ''I came for a vaccination, but it is a pity that the vaccines did not come '' 3 was classified as evidence of the topic concerning problems with delays in Covid-19 vaccine deliveries. When there are problems with the supply of vaccines, people who want to be vaccinated generally have a problem with vaccine uptake. Therefore, this topic was included in the Access dimension. Finally, as a result of this phase, we obtained 17 determinants. Then, each determinant was categorized as a representative of Access, Affordability, Awareness, Acceptance, Activation or other using the definitions of each dimension according to Table 1. C. THE BOTTOM-UP APPROACH The methodology developed for this study is presented in Figure 1. The activities performed, and the methods and software used at each stage of the bottom-up approach are discussed in more detail below. 1) DATA COLLECTION Involves sourcing relevant data according to a chosen set of keywords and a defined time period. For this study they have already been determined in Section A. Data collection was conducted using the commercial software QDA Miner, which is part of the ProSuite software [45]. 
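Before continuing with the description of the toolchain, here is a purely illustrative sketch of the cleaning step described in Section III.A above (removing empty, duplicate and non-Polish tweets). The study performed this step inside QDA Miner/WordStat; the version below is a stand-alone equivalent in Python, and the file name and column names are hypothetical placeholders.

```python
# Sketch of the pre-processing filter described above: drop empty, duplicate
# and non-Polish tweets.  File and column names are hypothetical placeholders,
# not artefacts of the study.
import pandas as pd
from langdetect import detect  # pip install langdetect

def is_polish(text: str) -> bool:
    """Best-effort language check; unparseable or degenerate text counts as non-Polish."""
    try:
        return detect(text) == "pl"
    except Exception:
        return False

tweets = pd.read_csv("tweets_may_2021.csv")             # hypothetical export
tweets["full_text"] = tweets["full_text"].fillna("").str.strip()

tweets = tweets[tweets["full_text"] != ""]               # remove empty tweets
tweets = tweets.drop_duplicates(subset="full_text")      # remove duplicates
tweets = tweets[tweets["full_text"].apply(is_polish)]    # keep Polish only

print(f"{len(tweets)} tweets retained for analysis")
```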
The ProSuite program provides advanced tools for a thorough analysis of data and consists of the following modules: (i) QDA Miner for qualitative data analysis, (ii) WordStat intended for content analysis and text mining, and (iii) SimStat designed for statistical analysis. It also offers the option of scraping tweets. In other words, data extraction from Twitter was automated with QDA Miner. In total, 125 495 tweets were collected in this phase. The following information for each tweet was downloaded: (i) tweet full text, (ii) the numbers of favorites and retweets, (iii) user geolocation, (iv) user description/self-created profile, and (v) tweet date and hour. In order to check whether the data are balanced, we divided all tweets into subsets (each covering a period of 7 days) to identify tweets in the material in terms of place and date of publication. The content in each subset was then compared to see if the data were evenly distributed. This experiment showed that the data were well balanced. It should be stressed that the research material collected at this stage is represented by unorganized data, with colloquial language, slang, abbreviations or extensions, etc. Thus, the subsequent stage of preparing the data is needed. (Footnote 3, Polish original of the tweet quoted earlier: ''Przyjechałem na szczepienie, ale szkoda że szczepionki nie przyjechały''.)
2) TEXT PREPARATION
This stage consists of the following tasks: (a) parsing, which means analyzing data and breaking them down into smaller blocks, which separately can be easily interpreted and managed; and (b) preprocessing, also called text cleaning, which includes the following jobs: (i) tokenization, where the words are transformed from the text into structured sets of elements called tokens; (ii) compiling a stop word list, where the words which have low informative value or are semantically insignificant (e.g. and, a, or, the) are eliminated; and (iii) stemming, where the words are reduced to their basic form, i.e. word stems are identified. At this phase, we used the WordStat software. We also detected the language of the tweets and retained only tweets in Polish, resulting in a dataset with 105 849 tweet documents for further analysis.
3) TOPIC MODELING
Topic modeling is a method for finding a group of words (i.e. a topic) from a collection of documents. This is a way to uncover recurring patterns of words in textual data. There are many techniques for obtaining topic models (e.g. Latent Dirichlet Allocation, LDA). However, ours was based on an algorithm implemented in the WordStat software. Unsupervised learning was chosen because it is commonly used and allows us to conduct exploratory analyses of large text data in social science research [47]. As a result of topic modeling using the WordStat software, 33 topics, described by top-weighted keywords, were obtained. Next, an iterative process of topic labeling was performed: (i) topics were labeled to create the first version of labels based on the keywords with the greatest weight, (ii) names of labels were polished through in-depth reading of the most representative topic tweets, and (iii) the final set of topic labels was created. Similar to [47], [49] and [52], our thematic approach relied on human interpretation. Thus, this approach could be significantly influenced by personal understanding of the topics and a variety of biases. The results of this stage are part of the supplementary material: Table B. Next, the proportions of occurring topics were calculated as a percentage (TP, %).
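To make the topic-extraction and topic-proportion (TP, %) steps concrete, here is a small sketch using scikit-learn's LDA as a stand-in for WordStat's built-in algorithm (the study did not use scikit-learn). The documents, the stop-word handling and the number of topics are toy placeholders; 33 topics is the figure reported in the text.

```python
# Sketch of topic extraction and of the topic-proportion (TP, %) calculation.
# scikit-learn's LDA is used as a stand-in for WordStat's own algorithm; the
# four "documents" below are toy placeholders for the 105 849 cleaned tweets.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

documents = [
    "szczepienie rejestracja kolejka problem dlugi czas oczekiwania",
    "szczepionka skutki uboczne obawa bezpieczenstwo badania",
    "punkt szczepien daleko dojazd problem organizacja",
    "brak dostaw szczepionka opoznienie termin rejestracja",
]

vectorizer = CountVectorizer()            # the study additionally removed Polish stop words
dtm = vectorizer.fit_transform(documents)

# The study settled on 33 topics; a small number is used here for the toy data.
lda = LatentDirichletAllocation(n_components=3, random_state=0)
doc_topic = lda.fit_transform(dtm)

# Top-weighted keywords per topic: the raw material for manual topic labelling.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {k}: {', '.join(top)}")

# Topic proportions TP(%): share of documents whose dominant topic is each topic.
dominant = doc_topic.argmax(axis=1)
tp = np.bincount(dominant, minlength=lda.n_components) / len(documents) * 100
print("TP(%):", np.round(tp, 1))
```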
4) AGGREGATION OF TOPICS INTO DETERMINANT CONCEPTS As a result of an in-depth analysis of textual material, by aggregating topics we created 17 determinants from 33 topics representing some kind of problem. It was assumed that each problem/topic could be linked by several determinants. So-called card sorting [53], which means that each topic written on an individual card was assigned to a logically coherent group, was used for creating a determinant. Then the obtained data were entered into the table. The results were presented in the supplementary material: Table C and Table D. Two determinants outside the 5As framework were revealed at this stage. These were referred to as Protection and Insurance. A similar type of topic classification, not into determinants but overriding themes, was done in the works [47], [52] and [49]. There are many approaches for extracting knowledge from a short text (tweets). A comparison of selected research approaches can be traced in Table 3. 5) LINKING DETERMINANTS WITH 6As Having applied the method used in the previous stage, 17 determinants were assigned to suitable dimensions of the 5As model. This analysis resulted in discovering an additional dimension, which was labeled Assurance. Thus, the research extended the model to 6As. The main topics (including the determinants of vaccine uptake emerging from the tweet topics) along with examples of comments were included in Table 5 in Appendix. By following the steps presented in Fig. 1, it is possible to access relevant knowledge and discover hot threads raised in social media discussions regarding the Covid-19 vaccination. This, in turn, provides a good basis for designing governmental guidelines for improving vaccination policies and increasing their effectiveness. IV. RESULTS AND DISCUSSION A set of 33 topics was extracted from the large text dataset representing tweets on the topic of the Covid-19 vaccination. In the next phase of the study, a total of 17 determinants influencing vaccine uptake were identified. They are included in Table 4. Moreover, the list of topics, extended by sample comments providing evidence for each identified factor, is presented in Table 5 (in Appendix). The calculation of topic proportions (TP%) made it possible to calculate the share of each As dimension (Fig. 2). The results of this study show that Covid-19 vaccine uptake mostly depends on the dimensions defined as Awareness (39.4 %) and Access (27.3 %) to the vaccine. Awareness covers the availability of a wide range of actual and detailed information regarding vaccines in the population, such as immunization schedules, vaccine side effects, safety and efficacy. Whereas Access is linked to the organization of the national vaccination strategy in terms of the following factors: problems with scheduling vaccinations and long queues, delays in vaccine deliveries, poor organization of vaccinations, too few vaccination points, and localization problems, e.g. too far from home. These findings are consistent with the study of [20], who tested Covid-19 vaccine hesitancy in a representative working-age population in France. Their survey experiment showed that detailed knowledge regarding new vaccine characteristics and the national vaccination strategy determine Covid-19 vaccine uptake. The percentage share of all factors identified under each of the 6As dimensions is presented in Fig. 3. The following subsections summarize the evidence identified for each dimension of the 6As framework. A. 
ACCESS FACTORS ASSOCIATED WITH VACCINE UPTAKE According to the WHO's guidelines, a COVID-19 vaccine allocation strategy should ensure that vaccines are free at the medical point of service, are allocated transparently, and with a participatory prioritization process. Due to the this, vaccines in EU countries are free of charge, so a determinant related to the price was not included in the Access group. However, the role of access on vaccine uptake was reflected in obstacles concerning scheduling vaccinations, long queues, and delays in vaccine deliveries. These problems, highlighted in the Twitter discussions, related to the improper organization of many steps in the immunization process, are major barriers to convenient access. Thus, they need urgent improvement and reinforcement. Another group constitutes unclear procedures and regulations. Many problems were reported in the discussions, such as inconsideration of people from risk groups, exclusion of immobile and non-digital groups, and poor organization of vaccinations for the partially disabled, all of which significantly hinder access to vaccination. The location of vaccination points also had an impact on uptake. Prior studies showed that the organization of vaccinations with convenient access, e.g. in a workplace [22] or at a school [23], results in increased vaccine uptake. The inclusion of help and facilities from the government is also an important determinant of convenient access to immunization. The analysis of the tweets revealed that, especially in the context of elderly people, there is a lack of assistance with registration and reaching the vaccination points. Mobile home vaccination teams would be a good solution. B. AFFORDABILITY FACTORS ASSOCIATED WITH VACCINE UPTAKE The affordability factors identified in the present study consist of two main groups of determinants. First, is the price of additional services, which concerns a payment, e.g. assistance with registration and reaching the vaccination points. This is especially true for elderly or disabled people who need the support of third parties to undergo the vaccination procedure. Not everyone can count on free support from their family members. This follows indirectly from the study [40], which found that seniors who lived alone had a lower likelihood of having received the vaccine than those who lived with others. Some have to pay for the help of an assistant in this process. The second determinant is time cost, which is influenced by the lack of clear rules for the vaccination procedure. Twitter comments identify time losses resulting from unclear laws and regulations. An example of such a tweet is: ''@Szczepimysie Hello. Where should my friend who is allergic and had an anaphylactic shock, register in Pabianice? She was already registered for today and went to be immunized but was refused vaccination due to risk''. 4 Prior studies 4 In Polish: ''@szczepimysie Witam. W jakim miejscu w Pabianicach ma się zarejestrować moja znajoma, która jest alergiczką i miała kiedyś wstrząs anafilaktyczny. Była już zarejestrowana na dzisiaj i poszła na szczepienie ale odmówiono jej szczepienia ze względu na ryzyko''. revealed that time cost was a significant predictor of MMR (measles, mumps and rubella) non-vaccination in university students [24], and was a declared disincentive to receive vaccinations in 22% of health professionals surveyed [25]. C. 
AWARENESS FACTORS ASSOCIATED WITH VACCINE UPTAKE As mentioned earlier, the determinant group belonging to the Awareness dimension covers the largest range (39.4%) in the entire As framework. It groups several threads covered in tweets, constituting 'hot' topics. Four determinants are included in Awareness. First, for increased vaccine uptake, people value the availability of actual information. A study of tweets revealed that the continuous volatility and inconsistency of information, the low quality of shared statistical data posted on the administration portal, as well as the lack of transparency of information from the government are factors that need improvement to increase vaccination coverage. The research of [42] stated that respondents reporting higher levels of trust in information from government sources were more likely to be vaccinated. Second, detailed knowledge about vaccines plays a crucial role. This is in line with the work of [26], who found that more knowledge regarding vaccines improved uptake among health professionals. Moreover, according to the study of [25], people who were given more information concerning personal benefits and risks were more likely to be vaccinated. Third, another diagnosed determinant is consideration of the vaccination and its side effects. This determinant was also identified in the research of [1] and [27]. The main topics on Twitter concerned fear caused by the increased number of deaths after vaccination, and captured the health risks vs. the usefulness of vaccination. Our findings are similar to the study of [22], who proved that the main reasons given for not receiving the vaccine were the belief that it had significant side effects, and concerns about its effectiveness. Finally, the last factor was the awareness of the vaccination schedule. Lack of knowledge in this area is an obvious factor contributing to non-vaccination. Thus, thorough information campaigns are necessary so that people do not have to undertake a long search for where to go and at what times to get vaccinated. This is in line with [32], who pointed to an important factor being campaigns, which support people in gaining proper information and help build effective community engagement, and local vaccine acceptability and confidence. In addition, we found that inconsistent risk messages in terms of the Covid-19 vaccine safety and efficacy from officials, public health experts and individuals, which were expressed in mass media, may contribute to a decrease in the acceptance of vaccination, due to a decline of confidence. This is consistent with the study of [21], who found distrust in vaccine safety to be a crucial determinant of Covid-19 vaccine hesitancy. Twitter users often expressed opinions about vaccine safety and questioned its effectiveness due not only to vaccine novelty, but also other factors (Fig. 4). There is agreement with many prior studies [2], [25], [26] and [28] that efficacy and safety concerns, including side effects associated with vaccination, can have hugely detrimental effects on the uptake. E. ACTIVATION FACTORS ASSOCIATED WITH VACCINE UPTAKE Activation refers to the actions taken that will make individuals more likely to receive vaccines. Three types of incentive techniques have been identified to stimulate activation: (i) prompts and reminders, (ii) workplace policies, (iii) incentives. The first group included two topics with negative sentiment. 
The need for direct (or telephone) contact especially with seniors regarding vaccination was pointed out, as this group is constantly overlooked in government programs due to digital exclusion [34], [44]. This result is also consistent with [40], who revealed that receiving a reminder from a doctor (67.7%) was an important influence on accepting a vaccination. According to [22], providing reminders to staff in aged care facilities significantly increased influenza vaccine uptake. Thus, sending reminders about vaccination terms to people is a good idea, and according to [33], for the elderly generation, also in the form of a personal letter. In addition, another theme of negative sentiment was the lack of contextualization advertising, best represented by the tweet: ''The vaccine isn't yogurt, but that's a bit how it's advertised??''. 5 In this area, an important element for improvement is the creation of thoughtful advertising. To support an effective launch of new Covid-19 vaccines, a government needs to understand the community's concerns, and design such advertising strategies that will neutralize them, and eventually encourage vaccine uptake. Since ''one size does not fit all'', the work of [41] recommended avoiding generic messages and instead, considering the different emotional states of the community in tailored vaccine communication efforts. Another determinant, labeled as Workplace policies, included the idea of compulsory vaccination, especially in certain professions (e.g. compulsory vaccination for all medical personnel and teachers). The tone of the tweets reflected the split of opinions on mandatory vaccination from acceptance to outright rejection of such a proposal. Examples were shared of forced vaccination by some employers, and the legal implications of this approach were discussed. The study of [38] suggests that obligatory mandates of the Covid-19 vaccination may be ineffective or, worse still, induce a backlash. In turn, the research of [42] reported that 48.1% of respondents would accept their employer's recommendation to vaccinate. They also claimed an attentive balance is required between educating the public about the necessity for universal vaccine coverage and avoiding any suggestion of coercion. Finally, the last group of determinants, called Incentives, covered such encouragements as lotteries, Covid certificates, and the development of incentive measures for vaccination (e.g. a discount code to get to the vaccination point). When planning vaccination policies, it is worth considering in-depth the strategy for introducing incentives, as the study of [35] found that financial incentives failed to increase vaccination willingness across income levels. Moreover, [36] claimed that payment for vaccination is morally suspect, likely unnecessary, and may be counterproductive. Similarly, [39] argued that financial incentives are likely to discourage vaccination (particularly among those most concerned about adverse effects), and instead, contingent nonfinancial incentives are the desired approach. F. ASSURANCE FACTORS ASSOCIATED WITH VACCINE UPTAKE A few topics mentioned factors associated with vaccine uptake which were not anticipated by the 5As taxonomy, triggering a sixth dimension, which we labeled Assurance. Three main themes emerged in this dimension: (i) discrimination against people who are not vaccinated, (ii) lack of insurance for severe vaccine adverse reactions, (iii) the need for preliminary medical tests before vaccination. 
The first of these created the Protection determinant, which includes comments presenting a wide range of discrimination against unvaccinated people (e.g. a curfew and travel ban for the unvaccinated, etc.). According to the public health principle of least harm to achieve a public health goal, policymakers should implement the option that least impairs individual liberties [43]. The next two topics were labeled Insurance. In this group of tweets, there were threads related to the lack of compensation in the case of death related to the Covid-19 vaccination, and insurance in the event of vaccine complications. The necessity of testing people before the vaccination itself was also indicated to diagnose possible contraindications and eliminate post-vaccination complications. Taking action in the scope described above would certainly increase confidence and contribute to increased vaccine uptake in the population. [37] examined whether compensation can significantly increase Covid-19 vaccine demand. The results showed that, for vaccines, compensation needs to be high enough because low compensation can backfire. V. CONCLUSION The goal of this study was to determine whether the five dimensions (5As) of Access, Affordability, Awareness, Acceptance and Activation could correctly cover and organize all the determinants identified from tweets regarding Covid-19 vaccine uptake. This study proved: (i) the existence of a further sixth dimension, labeled Assurance; (ii) a preliminary proof-of-concept of the 6As; (iii) the usability and importance of textual data from public discussions in identifying and classifying the different determinants of vaccine uptake. Besides the above-mentioned contributions of this research, another added value to the theory and literature is also the development of the bottom-up methodology used during data analysis. The empirical part of the present study showed that opinions expressed on social media, i.e. Twitter, constitute a valuable source of data. Knowledge hidden in this information and the discovered relationships should help design immunization campaigns in such a way as to fulfil the suggested needs of citizens and allay their fears as well. Policymakers need to design a well-researched immunization strategy to remove vaccination obstacles, false rumors, and misconceptions regarding the Covid-19 vaccines. Thus, the knowledge of determinants influencing Covid-19 vaccine acceptance can help to create communication strategies that are much needed to strengthen trust in government and health authorities. The study recognized that those interested in vaccination pay the greatest attention to the determinants in the area of Awareness and Acceptance. For this reason, the promotion of broad and detailed information regarding the vaccines and their side effects, safety and efficacy becomes a key direction in favor of Covid-19 vaccine uptake. In summary, knowledge about why people avoid the Covid-19 vaccination and which problems could act as obstacles during the immunization process may help government agencies, officials, and other authorities to (i) develop guidance for policies of immunization programs, (ii) create preventative measures against vaccine avoidance, (iii) increase public information campaigns designed to raise confidence in the effectiveness and safety of the vaccine, and finally (iv) design more tailored activities to increase the overall level of vaccine uptake in the population. However, the present study bears several limitations. 
First, this research focuses on the discussion via the Twitter platform and includes a short data retrieval period. Data that were collected and reported here are only a snapshot taken at an arbitrarily chosen point in time. These data were scraped in a highly changing environment of social media, with dynamic daily volatility in the perceived threat of the Covid-19 disease and issues of vaccines. Second, the study was narrowed down to only one country. Therefore, a generalization of results is difficult and it can be assumed that other threads may appear on social media discussions depending on the temporal and geographical scope of the study. Third, the study deliberately omitted the performance of a sentiment analysis of tweet data as this was not included in the purpose of the paper. In future, it is worth focusing on a task categorizing tweets for each topic into negative, neutral and positive. Nevertheless, the 6As taxonomy successfully captured all the determinants of Covid-19 vaccine uptake. Thus, future research may use this taxonomy to structure, classify and compare the significance of each of the 6As in explaining the immunization gap for different vaccines. In future research, a literature review could also be conducted to reveal current implementation strategies for Covid-19 vaccine promotion and to map them to the 6As framework identified in this study in order to determine gaps in recent research. APPENDIX See Table 5.
The Informational Architecture Of The Cell We compare the informational architecture of biological and random networks to identify informational features that may distinguish biological networks from random. The study presented here focuses on the Boolean network model for regulation of the cell cycle of the fission yeast Schizosaccharomyces Pombe. We compare calculated values of local and global information measures for the fission yeast cell cycle to the same measures as applied to two different classes of random networks: random and scale-free. We report patterns in local information processing and storage that do indeed distinguish biological from random, associated with control nodes that regulate the function of the fission yeast cell cycle network. Conversely, we find that integrated information, which serves as a global measure of"emergent"information processing, does not differ from random for the case presented. We discuss implications for our understanding of the informational architecture of the fission yeast cell cycle network in particular, and more generally for illuminating any distinctive physics that may be operative in life. Introduction Although living systems may be decomposed into individual components that each obey the laws of physics, at present we have no explanatory framework for going the other way around ---we cannot derive life from known physics. This, however, does not preclude constraining what the properties of life must be in order to be compatible with the known laws of physics. Schrödinger was one of the first to take such an approach in his famous book "What is Life?" [1]. His account was written prior to the elucidation of the structure of DNA. However, by considering the general physical constraints on the mechanisms underlying heredity, he correctly reasoned that the genetic material must necessarily be an "aperiodic crystal". His logic was two---fold. Heredity requires stability of physical structure -hence the genetic material must be rigid, and more specifically crystalline, since those are the most stable solid structures known. However, normal crystals display only simple repeating patterns and thus contain little information. Schrödinger therefore reasoned that the genetic material must be aperiodic in order to encode sufficient information to specify something as complex as a living cell in a (relatively) small number of atoms. In this manner, by simple and general physical arguments, he was able to accurately predict that the genetic material should be a stable molecule with a non---repeating pattern -in more modern parlance, that it should be a molecule whose information content is algorithmically incompressible. Today we know Schrödinger's "aperiodic crystal" as DNA. Watson and Crick discovered the double helix just under a decade after the publication of "What is Life?" [2]. Despite the fact that the identity of the genetic material was discovered over sixty years ago, we are in many ways no closer today than we were before its discovery to having an explanatory framework for biology. While we have made significant each individual node. The SF networks therefore share important topological properties with the biological fission yeast cell cycle network, including degree distribution rank ordering the number of edges per node. It is unclear at present precisely which concept(s) of "information" are most relevant to biological organization (see e.g. 
[6] for a review of the philosophical debate on the ontological status of the concept of information in biology). We therefore consider several local and global measures in what follows, adopted from information dynamics [7---9] and integrated information theory [10,11], respectively. Features shared by the biological network and SF networks, but not the ER networks can be concluded to arise as a result of topological features, whereas those observed in the biological network that are not shared with the SF networks should be regarded as arising specifically due to network features distinctive to biological function. By comparing results for the biological network to both ER and SF random networks, we are therefore able to distinguish which features of the informational architecture of biological networks arise as a result of network topology (e.g., degree distribution, which is shared with the SF networks but not the ER networks) and which are peculiar to biology (and presumably generated by the mechanism of natural selection). By implementing the local measures of information processing and information storage provided by information dynamics, we find patterns in local information processing and storage that do indeed distinguish the biological from either the SF or ER random networks, associated with regulation of the function of the fission yeast cell cycle network by a subset of control nodes. Conversely however, we find that integrated information, which serves as a global measure of "emergent" information processing, does not differ from random for the case presented. We discuss implications for our understanding of the physical structure of the fission yeast cell cycle network based on the uncovered informational architecture, and a look forward toward illuminating any distinctive physics operative in life. A Phenomenological Procedure for Mapping the Informational Landscape of Biological Networks In this work, we address the foundational question of how non---random patterns of informational architecture uncovered in biological networks might offer general insights into the nature of biological organization. Henceforth we use the term "informational architecture" rather than "information" or "informational pattern", because architecture implies the constraint of a physical structure whereas patterns in the abstract are not necessarily tied to their physical instantiation (see also [12]). We focus our analysis on a Boolean network model for a real biological system -specifically the cell cycle regulatory network of the fission yeast Schizosaccharomyces Pombe (S. Pombe) -which has been demonstrated to accurately model cell cycle function [5]. A motivation for choosing this network for our study is that it is small, and accurately models what is arguably one of the most essential biological functions: regulation of cellular division. With just nine nodes the network is computationally tractable for all of the information theoretic measures we implement in our study -including the computationally---intensive integrated information [11]. The fission yeast cell cycle network also shares important features with other Boolean network models for biological systems, including the shape of its global attractor landscape and resultant robustness properties [5], and the presence of a control kernel (described below) [13]. It therefore serves our purposes as a sort---of "hydrogen atom" for complex information---rich systems. 
Although we focus here on this simple gene regulatory network, we note that our analysis is not level specific. The formalism introduced is intended to be universal and may apply to networks any level of biological organization from the first self---organized living chemistries all the way up to cities and technological societies. We note that although this special theme issue is focused on the concept of "DNA as information", we do not explicitly focus on DNA per se herein. We instead consider distributed information processing as it operates within the cell, as we believe such analyses have great potential to explain, for example, why physical structures capable of storage and heredity, such as DNA, should exist in the first place. Thus, we regard an explanation of "DNA as information" to arise only through a proper physical theory that encompasses living processes, which should be illuminated by the kinds of analyses presented here. 2---1) Fission Yeast Cell---Cycle Regulatory Network: A Case Study in Informational Architecture Regulation of cellular division is a central aspect of cellular function. In the fission yeast S. Pombe the cell passes through a number of phases, G1 - S - G2 - M, which collectively constitute its cell---cycle dictating the steps of cellular division to produce two daughter cells from a single parent cell. During the G1 stage, the cell grows, and if conditions are favourable, it can commit to division. During the S stage, DNA is replicated. In the G2 stage, there is a "gap" between DNA replication (in the S phase) and mitosis (in the M phase) where the cell continues to grow. In the M stage, the cell undergoes mitosis and two daughter cells are produced. After the M stage, the daughter cells enter G1 again, thereby completing the cycle. The biochemical reactions that form the network controlling the cell cycle for S. Pombe have been studied in detail, and a Boolean network model has been constructed that has been shown to accurately track the phases of cellular division for S. Pombe (see [5] and references therein). The interaction graph for the Boolean network model is shown in Fig. 1. Each node corresponds to a protein needed to regulate cell cycle function. Nodes may take on a Boolean value of '1' or '0', indicative of whether the given protein is present or not at a particular step in the cycle (labels indicate the relevant proteins). Edges represent causal biomolecular interactions between proteins, which can either activate or inhibit the activity of other proteins in the network (or themselves). In the model, the successive states Si of node i are determined in discrete time steps by the updating rule: where a ij denotes weight for an edge (i , j) and θ i is threshold for a node i. The threshold for all nodes in the network is 0, with the exception of the proteins Cdc2/13 and Cdc2/13*, which have thresholds of −0.5 and 0.5, respectively in the Boolean network model for S. Pombe. For each edge, a weight is assigned according to the type of the interaction: a ij (t) = −1 for inhibition and a ij (t) = +1 for activation, and a ij (t) = 0 for no direct causal interaction. This simple rule set captures the causal interactions necessary for the regulatory proteins in the fission yeast S. Pombe to execute the cell cycle process. 
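To make the update rule concrete, the following is a minimal sketch of the threshold dynamics described above (weighted inputs compared against a node threshold, with the node left unchanged at equality, a common convention in this class of models), together with the exhaustive sweep over initial states that underlies the flow-diagram and attractor analysis discussed in the next subsection. The three-node weight matrix is a made-up toy example; the actual S. Pombe weights and thresholds are those of Fig. 1 and the text (all thresholds 0 except Cdc2/13 and Cdc2/13*), which are not reproduced here.

```python
# Minimal sketch of the threshold Boolean update described in the text, plus an
# exhaustive sweep over initial states to locate attractors.  The weight matrix
# W and thresholds theta below are a hypothetical 3-node toy network, NOT the
# fission yeast wiring of Fig. 1.
from itertools import product

def update(state, W, theta):
    """Synchronous threshold update: node i switches on if its weighted input
    exceeds theta[i], off if it falls below, and keeps its value at equality."""
    new = list(state)
    for i in range(len(state)):
        drive = sum(W[i][j] * state[j] for j in range(len(state)))
        if drive > theta[i]:
            new[i] = 1
        elif drive < theta[i]:
            new[i] = 0
        # drive == theta[i]: keep the previous value
    return tuple(new)

# Toy 3-node example (+1 activation, -1 inhibition, 0 no direct interaction).
W = [[0, 1, -1],
     [1, 0, 0],
     [0, -1, 0]]
theta = [0, 0, 0]
n = len(theta)

# Sweep every initial state, follow the trajectory until it repeats, and record
# the attractor (fixed point or limit cycle) it falls into.
attractors = set()
for init in product([0, 1], repeat=n):
    seen, s = [], init
    while s not in seen:
        seen.append(s)
        s = update(s, W, theta)
    cycle = tuple(seen[seen.index(s):])
    attractors.add(min(cycle[i:] + cycle[:i] for i in range(len(cycle))))  # canonical rotation
print(f"{len(attractors)} attractor(s) found over {2**n} initial states")
```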
Although many of the fine details of biological complexity (such as kinetic rates, signalling pathways, noise, asynchronous updating) are jettisoned by resorting to such a coarse-grained model, it does retain the key feature of causal architecture necessary for our analysis presented here.
2-2) The Dynamics of the Fission Yeast Cell-Cycle Regulatory Network
In the current study, we will make reference to informational attributes of both individual nodes within the network, and the state of the network as a whole, which is defined as the collection of all Boolean node values (e.g., at one instant of time). We study both since we remain open-minded about whether informational patterns potentially characteristic of biological organization are attributes of nodes or of states (or both). When referring to the state space of the cell cycle we mean the space of the 2^9 = 512 possible states for the nine-node network as a whole. We refer to the global causal architecture of the network as a mapping between network states (Fig. 2), and the local causal architecture as the edges (activation or inhibition) within the network (Fig. 1).
Time Evolution of the Fission Yeast Cell-Cycle Network. Iterating the set of rules in Eq. 1 accurately reproduces the time sequence of network states corresponding to the phases of the fission yeast cell cycle, as measured in vivo by the activity level of proteins (see [5] for details). Initializing the network in each of the 512 possible states and evolving each according to Eq. (1) yields a flow diagram of network states that details all possible dynamical trajectories for the network, as shown in Fig. 2. The flow diagram highlights the global dynamics of the network, including its attractors. In the diagram, each point represents a unique pattern of Boolean values for individual nodes corresponding to a network state. Two network states are connected if one is a cause or effect of the other. More explicitly, two network states G and G' are causally connected when either G → G' (G maps to, or is a cause for, G') or G' → G (G' instead is the cause, and G the effect) when the update rule in Eq. 1 is applied locally to individual nodes. The notion of network states being a cause or an effect may be accommodated within integrated information theory [10; 11], which we implement below for the fission yeast cell cycle network. We note that because the flow diagram contains all mappings between network states, it captures the entirety of the global causal structure of the network, encompassing any possible state transformation consistent with the local rules in Eq. 1. For the fission yeast cell cycle network, each initial state flows into one of sixteen possible attractors (fifteen stationary states and one limit cycle). The network state space can be sub-divided according to which attractor each network state converges to, represented by the colored regions in the left-hand panel of Fig. 2. About 70% of states terminate in the primary attractor shown in red [5]. This attractor contains the biological sequence of network states corresponding to the four stages of cellular division, G1-S-G2-M, which then terminates in the inactive G1 state. The model therefore directly maps the function of the cell cycle to the dynamics of its Boolean network representation.
The Control Kernel Nodes: Regulators of Network Function.
An interesting feature to emerge from previous studies of the fission yeast cell cycle network, and other biologically motivated Boolean network models, is the presence of a small subset of nodes -called the control kernel -that governs global dynamics within the attractor landscape [13]. When the values of the control kernel are fixed to the values corresponding to the primary attractor associated with biological function, the entire network converges to that attractor regardless of the initial state of the network (see Fig. 2, right panel) -that is, the control kernel nodes regulate the function of the network. Here, function is defined in terms of dynamics on the attractor landscape as noted above. The control kernel of the fission yeast network is highlighted in red in Fig. 1. As we will show, it plays a key role in the informational architecture of the fission yeast cell cycle. 2---3) Constructing Random Networks for Comparison to Biological Networks We compare the informational architecture of the fission yeast cell cycle Boolean network to two classes of random networks: Erdos---Renyi (ER) and Scale---Free (SF) 2 . Both classes retain certain features of the biological network's causal structure as summarized in Table 1, and described in detail by us in [12] (see also [14; 15; 16]). Both ensembles of random networks share the same number of nodes and the same total number of activation and inhibition edges and as the biological network. The ER networks are fully randomized with respect to the number of activation and inhibition links per node - that is, they have no structural bias (e.g., no hubs). By contrast, the SF networks share the same number of activation and inhibition links as the cell cycle network for each node, and therefore share the same degree distribution, defined as the rank ordering of the number of (in---directed and out---directed) edges per node. In what follows, we compare the informational properties of the fission yeast cell cycle biological network with samples drawn at random from both ER and SF network ensembles consisting of 1,000 sampled networks each, unless otherwise specified. We note that we purposefully do not use a fully randomized network ensemble having no structural features in common with the biological network in our comparison. This is because we are attempting to distinguish contributions to informational architecture peculiar to biological function from those that arise from more commonly studied topological features of the network, such as degree distribution. We note that many earlier studies focused solely on topological properties, such as scale---free degree distributions (where the rank ordering of the number of edges for each node follows a power---law), as distinctive aspects of evolved biological networks [17---19]. Our analysis of informational structure, however, uncovers a further (and potentially more significant) layer of distinctive features that go beyond topological considerations alone, which is best uncovered by considering random networks constrained to maintain topological features in common with the biological network. Quantifying Informational Architecture Information---theoretic approaches have provided numerous insights into the properties of distributed computation and information processing in complex systems [20]. 
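As a concrete illustration of the ensemble construction just described in Section 2-3, the sketch below generates, from a signed edge list, an "ER-like" rewiring that preserves only the total numbers of activation and inhibition links, and a degree-preserving rewiring based on double-edge swaps within each sign class. The paper's exact randomization procedure is not given in this excerpt, so the scheme below is an assumed stand-in, and the toy edge list is not the S. Pombe network.

```python
# Two toy randomizations of a signed edge list [(source, target, sign), ...]:
#  - er_like: rewire every edge completely at random (preserves only the number
#    of +1 and -1 links, in the spirit of the ER ensemble);
#  - degree_preserving: double-edge swaps within each sign class, which keep
#    every node's in- and out-degree fixed (in the spirit of the SF ensemble).
# Self-loops and multi-edges are not treated specially in this simplified version.
import random

def er_like(edges, n_nodes, rng=random):
    return [(rng.randrange(n_nodes), rng.randrange(n_nodes), sign)
            for (_, _, sign) in edges]

def degree_preserving(edges, n_swaps=1000, rng=random):
    edges = list(edges)
    for _ in range(n_swaps):
        i, j = rng.randrange(len(edges)), rng.randrange(len(edges))
        (a, b, s1), (c, d, s2) = edges[i], edges[j]
        if i == j or s1 != s2:        # only swap distinct edges of the same sign
            continue
        edges[i], edges[j] = (a, d, s1), (c, b, s2)
    return edges

# Hypothetical toy edge list (NOT the fission yeast wiring).
toy_edges = [(0, 1, +1), (1, 2, -1), (2, 0, +1), (2, 2, -1), (0, 2, +1)]
print(degree_preserving(toy_edges, n_swaps=20))
```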
Since we are interested in level-nonspecific patterns that might be intrinsic to biological organization, we investigate both local (node-to-node) and global (state-to-state) informational architecture. We note that, in general, biological systems may often have more than two "levels", but focusing on two for the relatively simple case of the fission yeast cell cycle is a tractable starting point. To quantify local informational architecture we appeal to the information dynamics developed by Lizier et al. [7-9], which utilizes Schreiber's transfer entropy [21] as a measure of information processing. For global architecture, we implement integrated information theory (IIT), developed by Tononi and collaborators, which quantifies the information generated by a network as a whole when it enters a particular state, as generated by its causal mechanisms [11]. In this section we describe each of these measures (more detailed descriptions may be found in [8] for information dynamics and [11] for integrated information theory). We note that while both formalisms have been widely applied to complex systems, they have thus far seen little application to direct comparison between biological and random networks, as we present here. We also note that this study is the first to our knowledge to combine these different formalisms to uncover informational patterns within the same biological network.
3-1) Information Dynamics
Information dynamics is a formalism for quantifying the local component operations of computation within dynamical systems by analysing time series data [7-9, 21]. We focus here on transfer entropy (TE) [21] and active information (AI), which are measures of information processing and storage, respectively [8]. For the fission yeast cell cycle network, time series data are extracted by applying Eq. 1 for 20 time steps, for each of the 512 possible initial states, thus generating all possible trajectories for the network. Time series data were similarly generated for each instance of the random networks in our ensembles of 1,000 ER and SF networks. The trajectory length of t = 20 time steps is chosen to be sufficiently long to capture transient dynamics for trajectories before converging on an attractor for the cell cycle and for the vast majority of random networks. Using this time series data, we then extracted the relative frequencies of temporal patterns and used these to define the probabilities p necessary to calculate TE and AI, as discussed below (see e.g. [22] for an explicit example calculation of TE). In an isolated mechanical system with no noise, past states are good predictors of future states. However, in a complex network, the past states of a given node will in general be inadequate to guide prediction of future states because of the strong nonlinear coupling of a node's dynamics to that of other nodes. Active information (AI) quantifies the degree to which uncertainty about the future of a node X is reduced by knowledge of only the past states of that same node, found from examining time series data. Formally:

A_X(k) = I\big(x_n^{(k)}; x_{n+1}\big) = \sum p\big(x_n^{(k)}, x_{n+1}\big) \log_2 \frac{p\big(x_n^{(k)}, x_{n+1}\big)}{p\big(x_n^{(k)}\big)\, p\big(x_{n+1}\big)},

where the sum runs over the set of all possible patterns of (x_n^{(k)}, x_{n+1}), with x_n^{(k)} denoting the k previous states of X and x_{n+1} its next state. Thus, AI is a measure of the mutual information between a given node's past, "stored" in its k previous states, and its immediate future (its next state). An information-theoretic measure that takes into account the flow or transfer of information to a given node from other nodes in a network is transfer entropy (TE).
Transfer entropy is the (directional) information transferred from a source node Y to a target node X, defined as the reduction in uncertainty provided by Y about the next state of X, over and above the reduction in uncertainty due to knowledge of the past states of X. Formally, TE from Y to X is the mutual information between the previous state of the source, y_n, and the next state of the target, x_{n+1}, conditioned on the k previous states of the target, x_n^{(k)}:

T_{Y \to X}(k) = \sum_{(x_{n+1}, x_n^{(k)}, y_n) \in \Omega} p(x_{n+1}, x_n^{(k)}, y_n) \log_2 \frac{p(x_{n+1} \mid x_n^{(k)}, y_n)}{p(x_{n+1} \mid x_n^{(k)})},

where \Omega indicates the set of all possible patterns of the sets of states (x_{n+1}, x_n^{(k)}, y_n). The directionality of TE arises due to the asymmetry in the computed time step for the state of the source and the destination. Due to this asymmetry, TE can be utilized to measure "information flows", absent from non---directed measures such as mutual information. Both TE and AI depend on the history length, k, which specifies the number of past state(s) of a given node one is interested in using to make predictions of the immediate future. Typically, one considers k → ∞ (see e.g., [9] for discussion). However, short history lengths can provide insights into how a biological network is processing information more locally in space and time. In particular, we note that it is unlikely that any physical system would contain infinite knowledge of past states and thus that the limit k → ∞ may be unphysical. In truncating the history length k, we treat k as a physical property of the network related to memory about past states as stored in its causal structure. TE enables us to quantify how much information is transferred between two distinct nodes at adjacent time steps, and thus provides insights into the spatial distribution of the information being processed. By contrast, we view AI as a measure of local information processed through time. We compared the distribution of AI and TE for the fission yeast cell cycle network with two types of randomized networks (ER and SF), averaging over ensembles of 500 and 1,000 networks, respectively. The results are shown in Fig. 3, and demonstrate that the biological network displays a significantly higher level of information transfer than either class of random networks on average. Consider first the differences between the two comparison networks, ER and SF. Recall that ER networks do not share topological features with either the biological or SF networks except for the size of the networks, the number of self---loops and the total number of activation and inhibition links (Table 1). For the ER networks, connections between two nodes are made at random and as a result, the degree distribution is more homogeneous on average than that of the biological or SF networks (e.g., most networks will not have hubs). Our results indicate relatively low average information transfer (as measured by TE) between nodes in the ER networks (green, left panel Fig. 3) as compared to SF (blue, left panel Fig. 3). The much higher TE between nodes in the biological and SF networks than in the ER networks suggests that heterogeneity in the distribution of edges among nodes - which arises due to the presence of hubs in the SF and biological networks, but not the ER networks - plays a significant role in information transfer. However, heterogeneity alone clearly does not account for the high level of information transfer observed between nodes in the fission yeast cell cycle network, which is distinguished from the ensemble of SF networks in Fig. 3 despite sharing exactly the same degree distribution.
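A matching plug-in sketch for TE between two aligned Boolean time series, under the same frequency-count assumptions as the AI sketch above; the toy series are made up.

```python
from collections import Counter
from math import log2

def transfer_entropy(src, dst, k=2):
    """Plug-in estimate of TE(Y->X) between aligned Boolean time series:
    I(y_n ; x_{n+1} | x_n^(k)), with probabilities as relative frequencies."""
    c_all, c_hy, c_hx, c_h = Counter(), Counter(), Counter(), Counter()
    n_samples = 0
    for n in range(k, len(dst)):
        hist = tuple(dst[n - k:n])     # x_n^(k): the k most recent target states
        y, x_next = src[n - 1], dst[n]
        c_all[(hist, y, x_next)] += 1
        c_hy[(hist, y)] += 1
        c_hx[(hist, x_next)] += 1
        c_h[hist] += 1
        n_samples += 1
    te = 0.0
    for (hist, y, x_next), c in c_all.items():
        p = c / n_samples
        p_cond_joint = c / c_hy[(hist, y)]          # p(x_{n+1} | x^(k), y_n)
        p_cond = c_hx[(hist, x_next)] / c_h[hist]   # p(x_{n+1} | x^(k))
        te += p * log2(p_cond_joint / p_cond)
    return te

# Toy example: dst copies src with one step of delay, so TE(src -> dst) is large.
src = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0]
dst = [0] + src[:-1]
print(round(transfer_entropy(src, dst, k=1), 3))
```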
We note that although scale free networks with power law degree distributions have been much studied in the context of metabolic networks [18], signaling and protein---protein networks within cells [24], functional networks within the brain [25] and even technological systems [19], there has been very little attention given to how biological systems might stand out as different from generic scale free networks. Here both the biological and SF networks share similarities in topological structure (inclusive of degree distribution). However, the cell cycle network exhibits statistically significant differences in the distribution of information processing among nodes. The excess TE observed in the biological network (red, left panel Fig. 3) deviates by between 1σ and 5σ from that of the SF random networks (blue, left panel Fig. 3), with a trend of increasing divergence from the SF ensemble for lower ranked node pairs that still exhibit correlations (e.g., where TE > 0). Interestingly, the biologically distinctive regime is dominated by information transfer from other nodes in the network to the control kernel, and from the control kernel to other nodes, as reported by us in [12]. This seems to suggest that the biologically significant regime is attributable to information transfer through the control kernel, which, as noted in Section 2---2, has also been shown to regulate function. The biological network is also an outlier in the total quantity of information processed by the network, processing more information on average than either the ER or SF random null models: the network is in the 100th percentile for ER networks and 95th percentile for SF networks for total information processed [12]. For the present study we also computed the distribution of AI (information storage) for the biological network and random network ensembles (right panel in Fig. 3). Information storage can arise locally through a node's self---interaction (self---loop), or be distributed via causal interactions with neighboring nodes. For the biological network (red, right panel Fig. 3), the control kernel nodes (labeled in red on the x---axis) have the highest information storage. Control kernel nodes have no self---interaction, so their information storage must be distributed among direct causal neighbors. Nodes that are self---interacting (labeled in blue on the x---axis) tend to have relatively low AI by comparison to the control kernel nodes (in this network self---interaction models self---degradation of the represented protein). This suggests that the distribution of information storage in the fission yeast cell cycle arises primarily due to distributed storage embedded within the network's causal structure. This distributed information storage acts primarily to reduce uncertainty in the future state of the control kernel nodes, which have the highest information storage. We note that the patterns in information storage reported here are consistent with those reported in [26], since this network has inhibition links, which detract from information storage. For the biological network, it is the control kernel nodes that store the most information about their own future state, as compared to other nodes in the network. The analogous nodes in the random networks also on average store the most information. Taken in isolation, our results for the distribution of AI therefore do not distinguish the biological network from random.
However, we note that the random networks in our study are constructed to maintain features of the fission yeast cell cycle network's causal structure (see Table 1). It is therefore not so surprising that the ER and SF networks should share common properties in their information storage with the biological network. However, the interpretation of the distribution of AI among nodes is very different for the biological network than for the random networks. Why is this? The ensembles of random networks drawn from SF and ER networks will in general not share the same attractor landscape as the biological case (shown in Fig. 2). For the biological network, the control kernel nodes are associated with a specific attractor landscape associated with the function of the network. For the biological network, control kernel nodes contribute the most to information storage and are also associated with regulation of the dynamics of the biological network. For ER and SF ensembles, the analogous nodes likewise store a large amount of information (having inherited their local causal structure in the construction of our random ensembles), but these nodes do not necessarily play any role in regulating the global causal structure of the network. Thus although the AI patterns in Fig. 3 are not statistically distinct for the biological network as compared to the null models, only for the biological network is it the case that this pattern is associated with the function of the network, that is, that the nodes storing the most information via local causal structure also play a role in regulating the global causal structure.
3---2) Effective and Integrated Information
Whereas information dynamics quantifies patterns in local information flows, integrated information theory (IIT) quantifies information arising due to the network's global properties defined by its state---to---state transitions (global causal structure) [11]. IIT was developed originally as a theory of consciousness (for technical details see e.g., [27]), but is widely applicable to other complex systems. In this paper we use IIT to quantify effective information (EI) and integrated information (φ) (both defined below) for the fission yeast cell cycle and also for the ensembles of random ER and SF networks. Unlike information dynamics, both EI and φ characterize the information generated by entire network states, rather than individual nodes. They do not require time series data to compute: one can calculate EI or φ for all causally realizable states of the network (states of the network that are a possible output of its causal mechanisms) independently of calculating trajectories through state---space. These measures may in turn be mapped to the global causal structure of the network's dynamics through state space, e.g. for the fission yeast cell cycle, EI and φ for network states can be mapped to the network states in the flow diagram in Fig. 2. Effective information (EI) quantifies the information generated by the causal mechanisms of a network (as defined by its edges, rules and thresholds - see e.g., Eq. (1)), when it enters a particular network state G'. More formally, the effective information for each realized network state G', given by EI(G'), is calculated as the relative entropy (or Kullback---Leibler divergence) of the a posteriori repertoire with respect to the a priori repertoire:

EI(G') = D_{KL}\left[ p^{post}(G \to G') \,\|\, p^{prior}(G) \right] = H\left[ p^{prior}(G) \right] - H\left[ p^{post}(G \to G') \right],

where H indicates entropy (the second equality holds because the a priori repertoire is uniform). The a priori repertoire, p^{prior}(G), is defined as the maximum entropy distribution, where all network states are treated as equally likely.
The a posteriori repertoire, p^{post}(G → G'), is defined as the repertoire of possible states that could have led to the state G' through the causal mechanisms of the system. In other words, EI(G') measures how much the causal mechanisms of a network reduce the uncertainty about the possible states that might have preceded G', i.e., its possible causes. For the fission yeast cell cycle, the EI of a state is related to the number of in---directed edges (causes) in the attractor landscape flow diagram of Fig. 2. Integrated information (φ) captures how much "the whole is more than the sum of its parts" and is quantified as the information generated by the causal mechanisms of a network when it enters a particular state G', as compared to the sum of information generated independently by its parts. More specifically, φ can be calculated as follows: 1) divide the network entering a state G' into distinctive parts and calculate EI for each part, 2) compute the difference between the sum of EIs from every part and EI of the whole network, 3) repeat the first two steps with all possible partitions. φ is then the minimum difference between EI from the whole network and the sum of EIs for its parts (we refer readers to [11] for more details on calculating φ). If φ(G') > 0, then the causal structure of the network generates more information as a whole than as a set of independent parts when it enters the network state G'. For φ(G') = 0, there exist causal connections within the network that can be removed without loss of information. The distribution of EI for all accessible states (states with at least one cause) in the fission yeast cell---cycle network is shown in Fig. 4 where it is compared to the averaged EI for the ER and SF null network ensembles. For the biological network, most states have EI = 8, corresponding to two possible causes for the network to enter that particular state (the a priori repertoire contains 512 states, whereas the a posteriori repertoire contains just two, so EI = log2(512) - log2(2) = 8 bits). Comparing the biological distribution to that of the random network ensembles does not reveal distinct differences (the biological network is within the standard deviation of the SF and ER ensembles), as it did for information dynamics. Thus, the fission yeast cell cycle's causal mechanisms do not statistically differ from SF or ER networks in their ability to generate effective information. Stated somewhat differently, the statistics over individual state---to---state mappings within the attractor landscape flow diagram for the fission yeast cell cycle (Fig. 2) and the ensembles of randomly sampled networks are indistinguishable. The integrated information φ for each network state in the primary basin of attraction is shown in Fig. 5; larger points denote the biologically realized states, which correspond to those that a healthy functioning S. Pombe cell will cycle through during cellular division (e.g., the states corresponding to the G1-S-G2-M phases). Initially, we expected that φ might show different patterns for states within the cell cycle phases, as compared to other possible network states (such that biologically functional states would be more integrated). However, the result demonstrates that there are no clear differences between φ for biologically functional states and other possible network states. We also compared the average value of integrated information, ⟨φ⟩, taken over all realizable network states for the fission yeast cell cycle network, to that computed by the same analysis on the ensembles of ER and SF random networks.
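To make the arithmetic behind the EI = 8 bits value concrete, a minimal sketch for a deterministic Boolean network, assuming uniform a priori and a posteriori repertoires; the helper names are hypothetical, not the authors' code.

```python
from math import log2

def effective_information(target_state, update, all_states):
    """EI(G') for a deterministic Boolean network with uniform repertoires:
    EI = log2(N_total) - log2(N_causes).
    `update` maps a state tuple to its successor; `all_states` enumerates the 2^n states."""
    causes = [s for s in all_states if update(s) == target_state]
    if not causes:                 # unreachable state: no a posteriori repertoire
        return None
    return log2(len(all_states)) - log2(len(causes))

# The arithmetic quoted in the text: 512 possible states, 2 causes -> 9 - 1 = 8 bits.
print(log2(512) - log2(2))
```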
We found that there is no statistical difference between ⟨φ⟩ for the biological and random networks: as shown in Table 2, all networks in our study show statistically similar averaged integrated information. At first, we were surprised that neither EI nor φ (or ⟨φ⟩) successfully distinguished the biological fission yeast cell cycle network from the ensembles of ER or SF networks. It is widely regarded that a hallmark of biological organization is that "more is different" [28] and that it is the emergent features of biosystems that set them apart from other classes of physical systems [29]. Thus, we expected that global properties would be more distinctive to the biological networks than local ones. However, for the analyses presented here, this is not the case: the local informational architecture, as quantified by TE, of biological networks is statistically distinct from ensembles of random networks; yet their global structure, as quantified by EI and φ, is not. There are several possible explanations for this result. The first is that we are not looking at the "right" biological network to observe the emergent features of biological systems. While this may be the case, this type of argument is not relevant for the objectives of the current study: if biological systems represent physics best captured by informational structure, then one should not be able to cherry---pick which biological systems have this property - it should be a universal feature of biological organization. Hence our loose analogy to the "hydrogen atom" - given the universality of the underlying atomic physics we would not expect helium to have radically different physical structure than hydrogen does. Thus, we expect that if this type of approach is to have merit, the cell cycle network is as good a candidate case study as any other biological system. We therefore consider this network as representative, given our interest in constraining what universal physics could underlie biological organization. One might further worry that we have excised this particular network from its environment (e.g., a functioning cell, often suggested as the indivisible unit, or "hydrogen atom of life"). This kind of excision might be expected to diminish emergent informational properties - it is then perhaps more surprising that the local signature in TE remains so prominent even though EI and φ are not statistically distinct. Another possible explanation for our results is that we have defined our random networks in a way that makes them too similar to the biological case, thus masking some of the differences between functional biological networks and our random network controls. It is indeed likely that biologically---inspired "random" networks will mimic some features of biology that would be absent in a broader class of random networks (such as the specific pattern in the distribution of information storage discussed in the previous section). However, if our random graph construction is too similar to the biological networks to pick up important distinctive features of biological organization, then it does not explain the observed unique patterns in TE, nor that AI is largest for control kernel nodes, which play a prominent role in the regulation of function for the biological network. We therefore accept the lack of distinct emergent, global patterns in information generated by the causal architecture as a real feature of biological organization and not an artifact of our construction.
This observation may offer clues to what may in fact be the most distinct feature of biological systems.
Characterizing Informational Architecture
The foregoing analyses indicate that what distinguishes biology as a physical system cannot be causal structure (topology) alone, but instead biology can be distinguished by its informational architecture, which arises as an emergent property of the combination of topology and dynamics. This is supported by the distinct scaling in information processing (TE) observed for the fission yeast cell cycle network as compared to ER and SF ensembles reported above. In our example, what distinguishes biology from other complex physical systems cannot be global topological features alone, since the SF networks differ in their patterns of information processing from the fission yeast cell cycle, despite sharing common topological features, such as degree distribution. Similarly, it cannot be dynamics alone due to the lack of a distinct signature in EI or φ for the biological network, as both are manifestations of the global dynamics on the attractor landscape. An important question opened by our analysis is why the biological network exhibits distinct features for TE and shows patterns in AI associated with functional regulation, when global measures yield no distinction between the biological network and random networks. We suggest that the separate analysis of two distinct levels of informational patterns (e.g. node---node or state---state) as presented above misses what arguably may be one of the most important features of biological organization - that is, that distinct levels interact. This view is supported by the fact that the control kernel plays a prominent role in the distinctive local informational patterns of the fission yeast cell cycle network, but was in fact first identified for its role in regulating dynamics on the global attractor landscape. The control kernel may therefore be interpreted as mediating the connection between the local and global causal structure of the network. The results of the previous section indicate that these same nodes act as a hub for the transfer of information within the network and for information storage. The interaction between distinct levels of organization is typically described as 'top---down' causation and has previously been proposed as a hallmark feature of life [30---32]. We hypothesize that the lower level patterns observed with TE and AI arise because of the particular manner in which the biological fission yeast cell cycle network is integrated, regulating dynamics on its global attractor landscape through the small subset of control kernel nodes via 'top---down' control. Instead of studying individual levels of architecture as separate entities, to fully understand the informational architecture of biological networks we must therefore additionally study informational patterns mediating the interactions between different levels of informational structure, as they are distributed in space and time. To test this hypothesis we analyse the spatiotemporal and inter---level architecture of the fission yeast cell cycle.
4---2) Spatiotemporal Architecture
We may draw a loose analogy between information and energy. In a dynamical system, energy is important to defining two characteristic time scales: a dynamical process time (e.g., the period of a pendulum) and the dissipation time (e.g., time to decay to an equilibrium end state or attractor).
TE and AI are calculated utilizing time series data of a network's dynamics and, as with energy, there are also two distinct time scales involved: the history length k and the convergence time to the attractor state(s), which are characteristic of the dynamical process time and dissipation time, respectively. For example, the dissipation of TE may correlate with biologically relevant time scales for the processing of information, which can be critical for interaction with the environment or other biological systems. In the case study presented here, the dynamics of TE and AI can provide insights into the timescales associated with information processing and memory within the fission yeast cell cycle network. The TE scaling relation for the fission yeast cell cycle network is shown in Fig. 6 for history lengths k = 1, 2 … 10 (note: history length k = 2 is the case compared to random networks in Fig. 3). The overall magnitude and nonlinearity of the scaling pattern decreases as knowledge about the past increases with increasing k. The patterns in Fig. 6 show general trends that indicate that the network processes less information in the spatial dimension when knowledge about past states increases. In contrast, the temporal processing of information, as captured by information storage (AI), increases for increased k (not shown). To make explicit this trade---off between information processing and information storage we define a new information theoretic measure. Consider the in---coming TE and out---going TE for each node as the total sum of TE from the rest of the network to the node and the total sum of TE from the node to the rest of the network, respectively. We then define the Preservation Entropy (PE) as follows:

PE(X) = A_X - (T_{\to X} + T_{X \to}),

where A_X, T_{\to X}, and T_{X \to} denote AI, in---coming TE, and out---going TE, respectively. PE quantifies the difference between the information stored in a node and the information it processes (see the short illustrative sketch below). For PE(X) > 0, a node X's temporal history (information storage) dominates its information dynamics, whereas for PE(X) < 0, the information dynamics of node X are dominated by spatial interactions with the rest of the network (information processing). Preservation entropy is so named because nodes with PE > 0 act to preserve the dynamics of their own history. Fig. 7 shows PE for every node in the fission yeast network, for history length k = 1, 2 … 10. As the history length increases the overall PE also increases, with all nodes acquiring positive values of PE for large k. For all k, the control kernel nodes have the highest PE, with the exception of the start node SK (which only receives external input). When the dynamics of the cell cycle network is initiated, knowledge about future states can only be stored in the spatial dimension so PE < 0 for all nodes. The transition to temporal storage first occurs for the four control kernel nodes, which for k = 4 have PE > 0, while other nodes have negative PE (self---loop nodes) or PE close to 0. All nodes make the transition to PE > 0 at history length k = 5. This suggests that the informational architecture of the cell cycle is space (processing) dominated in its early evolution and later transitions to being time (storage) dominated with a characteristic time scale.
4---3) Inter---level Architecture
We pointed out in Section 3 the fact that the local level informational architecture picks out a clear difference between biological and random networks, whereas the global measures do not.
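As flagged in Section 4---2, a minimal sketch of the PE bookkeeping, assuming AI and TE estimators like those sketched earlier; the helper names and the constant stand-ins in the toy usage are hypothetical.

```python
def preservation_entropy(node, nodes, ai, te):
    """PE(X) = AI(X) - [TE_in(X) + TE_out(X)], as defined in Section 4-2.
    `ai(x)` and `te(src, dst)` are assumed estimator callables (e.g., the plug-in
    sketches given earlier), evaluated on the same trajectories."""
    te_in = sum(te(other, node) for other in nodes if other != node)
    te_out = sum(te(node, other) for other in nodes if other != node)
    return ai(node) - (te_in + te_out)

# Toy usage with constant stand-ins for the estimators:
nodes = ["A", "B", "C"]
print(preservation_entropy("A", nodes, ai=lambda x: 0.9, te=lambda s, d: 0.1))  # 0.9 - 0.4 = 0.5
```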
We conjecture that this is because in biological systems important information flows occur between levels, i.e. from local to global and vice---versa, rather than being solely characteristic of local or global organization alone. In other words, network integration may not distinguish biological from random, but the particular manner in which a network is integrated and how this filters to regulate lower level interactions may be distinctive of life. In the case study presented here, this means there should be distinctive patterns in information flows arising from node---state, or state---node interactions. To investigate node---state and state---node interactions, we treat φ itself as a dynamic time series and ask whether the dynamical behavior of individual nodes is a good predictor of φ (i.e., of network integration), and conversely, if network integration enables better prediction about the states of individual nodes. To accomplish this we define a new Boolean "node" in the network, named the Phi---node, which encodes whether the state is integrated or not, by setting its value to 1 or 0, respectively (in a similar fashion to the mean---field variable in [30]). We then measure TE between the state of the Phi---node and individual nodes for all possible trajectories of the cell cycle, in the same manner as was done for calculating TE between local nodes. Although transfer entropy was not designed for analyses of information flows between 'levels', there is actually nothing about the structure of any of the information measures utilized herein that suggests they must be level specific (see [33] for an example of the application of EI at different scales of organization). Indeed, higher transfer entropy from global to local scales than from local to global scales has previously been put forward as a possible signature of collective behavior [31, 34---36]. The results of our analysis are shown in Fig. 8. The total information processed (total TE) from the global to local scale is 1.6 times larger than total sum of TE from the local to global scale. That is, the cell cycle network tends to transfer more information from the global to local scales (top---down) than from the local to global (bottom---up), indicative of collective behavior arising due to network integration. Perhaps more interesting is the irregularity in the distribution of TE among nodes for information transfer from global to local scales (shown in purple in Fig. 8), as compared to a more uniform pattern in information transfer from local to global scales (shown in orange in Fig. 8). This suggests that only a small fraction of nodes act as optimized channels for filtering globally integrated information to the local level. This observation is consistent with what one might expect if global organization is to drive local dynamics (e.g., as is the case of top---down causation), as this must ultimately operate through the causal mechanisms at the lower level of organization (such that it is consistent with known physics). Our analysis suggests a promising line of future inquiry characterizing how the integration of biological networks may be structured to channel global state information through a few nodes to regulate function (e.g., such as the control kernel). Future work will include comparison of the biological distribution for TE between levels to that of random network models to gain further insights into if, and if so how, this feature may be distinctive to biology. 
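A minimal sketch of the Phi---node construction described above. The binarization threshold (φ > 0 counts as "integrated"), the precomputed lookup `phi_of_state`, and the reuse of a generic TE estimator are assumptions made for illustration; this is not the authors' implementation.

```python
def phi_node_series(state_trajectory, phi_of_state):
    """Binarize network integration along a trajectory: 1 if phi(state) > 0 else 0.
    `phi_of_state` is assumed to be a precomputed mapping from network state to phi."""
    return [1 if phi_of_state[s] > 0 else 0 for s in state_trajectory]

def interlevel_te(state_trajectory, node_series, phi_of_state, te, k=1):
    """Top-down vs bottom-up transfer for one node, using any TE estimator `te`
    (e.g., the plug-in sketch given earlier)."""
    phi_series = phi_node_series(state_trajectory, phi_of_state)
    top_down = te(phi_series, node_series, k)    # Phi-node -> node
    bottom_up = te(node_series, phi_series, k)   # node -> Phi-node
    return top_down, bottom_up
```

Summing `top_down` and `bottom_up` over all nodes gives the two totals whose ratio (about 1.6 in the text) is used to compare global-to-local with local-to-global transfer.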
We note, however, that this kind of analysis requires one to regard the level of integration of a network as a "physical" node. But perhaps this is not too radical a step: effective dynamical theories often treat mean---field quantities as physical variables.
Discussion
Several measures of both information storage and information flow have been studied in recent years, and we have applied these measures to a handful of tractable biological systems. Here we have reported our results on the regulatory network that controls the cell cycle of the fission yeast S. Pombe, treated as a simple Boolean dynamical system. We confirmed that there are indeed informational signatures that pick out the biological network when contrasted with suitably defined random comparison networks. Intriguingly, the relevant biosignature measures are those that quantify local information transfer and storage (TE and AI, respectively), whereas global information measures such as integrated information φ do not. The distinguishing feature of these measures is that information dynamics is local and correlative (using conditional probabilities), whereas integrated information is global and causative (using interventional conditional probabilities [37]). The signature of biological structure uncovered by our analysis therefore lies not with the underlying causal structure (the network topology) but with the informational architecture via specific patterns in the distribution of correlations unique to the biological network. Although the biological fission yeast cell cycle network and the ensembles of random networks in our study share commonalities in causal structure, the pattern of information flows for the biological network is quite distinct from either the SF or ER random networks. We attribute this difference to the presence of the control kernel nodes. Both the biological network and the ensemble of SF random networks differ statistically from the more generic ER random networks in the distribution of the transfer of information between pairs of nodes. Surprisingly, the newly uncovered scaling relation in information transfer is statistically distinct for the biological network, even among the class of SF networks sharing a common degree distribution. The biologically most distinct regime of the scaling relation is associated with information transfer to and from the control kernel nodes and the rest of the network, as reported by us in [12]. Our results presented here indicate that the cell cycle informational architecture is structured to localize information storage within those same control kernel nodes. These results indicate that the control kernel - which plays a prominent role in the regulation of cell cycle function (by regulating the attractor landscape) - is a key component in the distinctive informational architecture of the fission yeast cell cycle network, playing a prominent role both in information storage and in the flow of information within the network. While it is conceivable that these patterns are a passive, secondary attribute of biological organization arising via selection on other features (such as robustness, replicative fidelity, etc.), we think the patterns are most likely to arise because they are intrinsic to biological function - that is, they direct the causal mechanisms of the system in some way, and thereby constitute a directly selectable trait [38; 39].
If information does in fact play a causal role in the dynamics of biological systems, then a fundamental understanding of life as a physical process has the potential to open up completely unexplored sectors of physics, as we know of no other class of physical systems where information is necessary to specify its state. Taking a more forward---thinking and necessarily speculative look at what our results suggest of the physics underlying life, we regard the most distinctive feature to be in how informational and causal structure intersect (consistent with other suggestions that life is distinguished by the "active" use of information, see e.g., [22, 31, 41---43]). Evidence for this view comes from the fact that the integration of the network is a better predictor of the states of individual nodes than vice versa (see Fig. 8), an asymmetry perhaps related to the functionality of the network. If the patterns of information processing observed in the biological network are indeed a joint product of information and causal structure, as our results suggest, they may be regarded as an emergent property of topology and dynamics. The informational signatures of biological networks uncovered by our analysis appear strongly dependent on the controllability of the network - that is, that a few nodes regulate the function of the cell cycle network. Thus, in addition to scale free network topology, a necessary feature required to distinguish biological networks, based on their informational architecture, is the presence of a subset of nodes that can "control" the dynamics of the network on its attractor landscape. In this respect, biological information organization differs from other classes of collective behaviour commonly described in physics. In particular, the distribution of correlations indicates that we are not dealing with a critical phenomenon in the usual sense, where correlations at all length scales exist in a given physical system at the critical point, without substructure [40]. Instead, we find "sub---critical" collective behavior, where a few network nodes centralize correlations and regulate collective behavior through the global organization of information flows. The control kernel nodes were first discovered by pinning their values to those of the primary attractor - that is, by causal intervention (see e.g., [37]). However, causal intervention by an external agent does not occur "in the wild". We therefore posit that the network is organized such that information flowing through the control kernel performs an analogous function to an external causal intervention. Indeed, this is corroborated by our results demonstrating that the network transitions from being information "processing" to "storage" dominated: the control kernel nodes play the dominant role in information storage for all history lengths k and are the first nodes to transition to storage---dominated dynamics. We additionally note that the control kernel state takes on a distinct value in each of the network's attractor states, and thus is related to the distinguishability of these states, as recognized in Kim et al. [13]. Based on our results presented here, a consistent interpretation of this feature is that the control kernel states provide a "coarse---graining" of the network state---space relevant to the network's function that is intrinsic to the network (not externally imposed by an observer).
Storing information in control kernel nodes therefore provides a physical mechanism for the network to internally manage information about its own global state space. This interpretation is also consistent with Kim et al.'s observations that the size of the control kernel scales with both the number and size of attractors. In short, one interpretation of our results is that the network is organized such that information processing that occurs at early times "intervenes" on the state of the control kernel nodes, which in turn transition to storage---dominated dynamics that regulate the network's dynamics along the biologically functional trajectory. If true, biology may represent a new frontier in physics where information (via distributed correlation in space and time) is organized to direct causal flows. The observed scaling of transfer entropy may be a hallmark of this very organization and therefore a universal signature of regulation of function, and thus life.
Data Accessibility
There is no supplementary material for this manuscript.
Table 1. Constraints for constructing random network graphs that retain features of the causal structure of a reference biological network, which define the two null model network classes used in this study: Erdos---Renyi (ER) networks and Scale---Free (SF) networks.
Constraint | ER networks | SF networks
Size of network (total number of nodes, inhibition and activation links) | Same as the cell---cycle network | Same as the cell---cycle network
Nodes with a self---loop | Same as the cell---cycle network | Same as the cell---cycle network
Number of activation and inhibition links for each node | NOT the same as the cell---cycle network (→ no structural bias) | Same as the cell---cycle network (→ same degree distribution)
Figure 3. Scaling distribution of information processing among node pairs (measured with TE) and the information storage for individual nodes (measured with AI). Shown are results for the fission yeast cell cycle regulatory network (red) and ensembles of Erdos---Renyi (ER) random networks (green) and Scale Free (SF) random networks (blue). Left panel: Scaling of information processing for history length k = 2 shows that the biological network processes more information than either ER or SF networks on average. The y---axis and x---axis are the TE between a pair of nodes and relative rank, respectively. Ensemble statistics are taken over a sample of 1,000 networks. Figure adopted from Kim et al. [12]. Right panel: AI for all nodes for history length k = 5. Ensemble statistics are taken over a sample of 500 networks. Node names correspond to regulatory proteins for the biological cell cycle network, with control kernel nodes highlighted in red and nodes with a self---loop in blue. For the random networks, the labels are retained to indicate nodes in the network relative to the cell cycle network, but do not correspond to real proteins as is the case for the cell cycle network. Although not statistically distinct, since control kernel nodes and their analogs in the random networks store the most information, only in the biological network is this information storage associated with regulation of the dynamics of the attractor landscape.
Figure 4. Distribution of effective information (EI). Values are calculated for every state each network can enter via its causal mechanisms. The data show that the distribution of EI for the biological network is not statistically distinct from random.
Figure 5.
Diagram illustrating the integrated information of each network state in the primary basin of attraction for the fission yeast cell---cycle regulatory network. Colours indicate the value of integrated information for each state. Large points represent states in the functioning (healthy) cell cycle sequence, which do not show significant differences in terms of their integration from other network states.
Figure 6. Scaling of information transfer (as measured by TE) for every pair of nodes in the fission yeast cell---cycle network, shown for history lengths k = 1, 2, … 10 (results for k = 2 are shown in Fig. 3, contrasted with ensembles of Erdos---Renyi and Scale Free null model networks). Information processing is high for short history lengths, but rapidly "dissipates".
Figure 7. Preservation entropy (PE, in bits) for every node in the fission yeast cell---cycle network, shown for history lengths k = 1, 2, … 10. Control kernel nodes have the highest PE for any history length, and transition from being processing dominated (PE < 0) to storage dominated (PE > 0) first.
Figure 8. Information transfer between network integration, as quantified by φ for individual states (the "Phi---node"), and individual nodes in the fission yeast cell cycle regulatory network. The orange line shows the TE from node → network state (individual nodes to the Phi---node) and the purple line shows TE from network state → node (Phi---node to individual nodes). The network transfers 1.6 times more information from global to local scales than vice versa, indicative of collective behaviour, and this information transfer is asymmetrically distributed among nodes.
2015-11-09T04:41:24.000Z
2015-07-14T00:00:00.000
{ "year": 2016, "sha1": "93ff3d9730a5fd945c01f369a0f888f86d949c17", "oa_license": null, "oa_url": "https://royalsocietypublishing.org/doi/pdf/10.1098/rsta.2015.0057", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "7a7fc5977a93ac02341fb4aa6b92c5e95823850d", "s2fieldsofstudy": [ "Biology", "Computer Science" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
214135970
pes2o/s2orc
v3-fos-license
Contrast-induced nephropathy: searching for new solutions to prevent its development
Contrast-induced nephropathy (CIN) is the main cause of acute kidney injury and worsens the prognosis of chronic kidney disease. To evaluate the clinical risk of CIN development, various medical calculators have been proposed. The main criterion for assessing the possible development of CIN is the baseline glomerular filtration rate, represented by the estimated glomerular filtration rate. The toxic effect of contrast substances is realized through the properties of the contrast molecule itself (tubular cell damage) and through induced ischemia with oxidative stress and vasoconstriction. Existing methods for preventing the development of CIN are based on reducing the toxic effect of a contrast agent and preventing hypoxic kidney shock. Currently proposed measures include acetylcysteine, statins, and some other approaches, as well as hemodialysis. However, the evidence base is strongest for hydration, which should be started before the introduction of a contrast agent, along with minimization of the contrast dose. Nevertheless, no final solution has been found to prevent the development of CIN. We have proposed the use of edaravone, which has an evidence base for ischemic stroke, to prevent the development of CIN. Three patients with chronic kidney disease stage 3b were given 30 mg edaravone twice a day before contrast media infusion and for two days after contrast administration. In two patients, CIN was avoided. The proposed approach requires future research to evaluate its effectiveness. Contrast-induced nephropathy (CIN) is the major cause of acute kidney injury in chronic kidney disease (CKD) [1] and the third most common cause of acute kidney injury (AKI) in hospitalized patients, after volume depletion and medication [2]. Contrast-induced nephropathy is defined as the impairment of renal function measured as either a 25% increase in serum creatinine from baseline or a 0.5 mg/dL (44 µmol/L) increase in absolute serum creatinine value within 48-72 hours after intravenous contrast administration [3]. Several guidelines address CIN patient surveillance; one of the latest (2018) was issued by the European Society of Urogenital Radiology (ESUR) [5], which defines post-contrast acute kidney injury as an increase in serum creatinine ≥ 0.3 mg/dl (or ≥ 26.5 µmol/l), or ≥ 1.5 times from baseline, within 48-72 h of intravascular administration of contrast medium (CM). The key recommendations of this guideline are the following:
Nephrotoxic medication
In CKD patients receiving CM, optimal nephrology care involves minimizing the use of nephrotoxic drugs (level of evidence A). Angiotensin-converting enzyme inhibitors and angiotensin receptor blockers do not have to be stopped before CM administration (level of evidence D). There is insufficient evidence to recommend withholding nephrotoxic drugs such as nonsteroidal anti-inflammatory drugs, antimicrobial agents or chemotherapeutic agents before CM administration (level of evidence B).
Hydration
Preventive hydration should be used to reduce the incidence of post-contrast acute kidney injury in patients at risk (level of evidence B). Intravenous saline and bicarbonate protocols have similar efficacy for hydration (level of evidence A).
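Before listing the specific hydration regimens, a minimal, purely illustrative sketch of how the two creatinine-based definitions quoted above might be encoded for chart review; the thresholds are taken from the text, the 48-72 h timing window is not checked, and this is not a clinical decision tool.

```python
def meets_cin_definition(baseline_scr_mgdl, followup_scr_mgdl):
    """Classical CIN definition [3]: >= 25% rise from baseline or an absolute rise
    of >= 0.5 mg/dL within 48-72 h of contrast administration (timing not checked here)."""
    rise = followup_scr_mgdl - baseline_scr_mgdl
    return rise >= 0.5 or followup_scr_mgdl >= 1.25 * baseline_scr_mgdl

def meets_esur_pc_aki(baseline_scr_mgdl, followup_scr_mgdl):
    """ESUR 2018 post-contrast AKI [5]: rise >= 0.3 mg/dL or >= 1.5x baseline
    within 48-72 h (timing not checked here)."""
    rise = followup_scr_mgdl - baseline_scr_mgdl
    return rise >= 0.3 or followup_scr_mgdl >= 1.5 * baseline_scr_mgdl

# Example: baseline 1.2 mg/dL, follow-up 1.6 mg/dL
print(meets_cin_definition(1.2, 1.6), meets_esur_pc_aki(1.2, 1.6))  # True True
```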
For intravenous and intra-arterial CM administration with the second pass renal exposure, hydrate the patient with either:
a) 3 mL/kg/h 1.4% bicarbonate (or 154 mmol/L solution) for 1 h before CM; or
b) 1 mL/kg/h 0.9% saline for 3-4 h before and 4-6 h after CM (level of evidence D).
For intra-arterial CM administration with the first pass renal exposure, hydrate the patient with either:
a) 3 mL/kg/h 1.4% bicarbonate (or 154 mmol/L solution) for 1 h before CM, followed by 1 mL/kg/h 1.4% bicarbonate (or 154 mmol/L) for 4-6 h after CM; or
b) 1 mL/kg/h 0.9% saline for 3-4 h before and 4-6 h after CM (level of evidence D).
Oral hydration as the sole means of prevention is not recommended (level of evidence D).
No other drug recommendations (acetylcysteine, statins) or dialysis for CIN prevention were presented in the guidelines; this is associated with their poor evidence base. However, a positive effect of nebivolol, which is prescribed before contrast studies to reduce the risk of CIN, was not evaluated [6]. Another important aspect is the absence of a need to prescribe diuretics, both in CIN and in acute kidney injury (see below) [7].
Sudden stop of diuresis:
- acute kidney injury: diuretics are contraindicated, euvolemia maintenance is recommended.
Chronic fluid retention:
- chronic kidney disease: long-term diuretic therapy is indicated (loop diuretics and aldosterone antagonists), vascular hypovolemia should be avoided.
Before deciding to perform percutaneous coronary intervention (PCI), our clinic has adopted the following algorithm (Table 1). Therefore, nowadays the only method of CIN prevention is hydration [5,8], and risks are determined by glomerular filtration rate and by the state of the cardiovascular system. The toxic effect of contrast substances is realized through: 1) the properties of the contrast molecule itself (tubular cell damage); 2) induced ischemia with oxidative stress and vasoconstriction [9]. Serum creatinine levels peaking 2-3 days after administration of contrast medium and returning to baseline within 7-10 days after administration are accompanied by ischemic changes [10]. While evaluating CIN pathogenesis, our attention was drawn to another aspect of ischemia - the successful use of edaravone, an agent blocking the ischemic cascade; nowadays, it is used for the treatment of acute ischemic stroke [11]. Some facts about edaravone are worth recalling:
- this agent has been involved in clinical studies with a high level of evidence, conducted in Japan since 2001;
- every third patient who receives the drug during the first 24 hours after the onset of ischemia will have no consequences of stroke at all;
- in 2017, it became the first drug in 23 years to be approved by the Food and Drug Administration for the treatment of amyotrophic lateral sclerosis [12].
In our clinic, three patients with stage 3b CKD have received 30 mg of edaravone (Xavron, Ukraine) intravenously twice a day before PCI and for two days after the procedure. Development of CIN was observed in one patient; two patients showed less than a 1.5-fold increase in serum creatinine level during 5 days of monitoring. We believe that the use of edaravone for CIN prevention offers a promising clinical perspective. Nevertheless, future trials need to be conducted to prove the possibility of preventing CIN with edaravone. Conflicts of interest. The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
2019-12-05T09:10:06.452Z
2019-08-01T00:00:00.000
{ "year": 2019, "sha1": "cec2db12d50f0dd2163d40ccf7a80a2da98747c9", "oa_license": "CCBY", "oa_url": "http://kidneys.zaslavsky.com.ua/article/download/176450/177598", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "f816860ceee7acdde3bc2f13c114a24e2164a420", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
52829748
pes2o/s2orc
v3-fos-license
Ultrasensitive H2S gas sensors based on p-type WS2 hybrid materials Owing to their higher intrinsic electrical conductivity and chemical stability with respect to their oxide counterparts, nanostructured metal sulfides are expected to revive materials for resistive chemical sensor applications. Herein, we explore the gas sensing behavior of WS2 nanowire-nanoflake hybrid materials and demonstrate their excellent sensitivity (0.043 ppm-1) as well as high selectivity towards H2S relative to CO, NH3, H2, and NO (with corresponding sensitivities of 0.002, 0.0074, 0.0002, and 0.0046 ppm-1, respectively). Gas response measurements, complemented with the results of X-ray photoelectron spectroscopy analysis and first-principles calculations based on density functional theory, suggest that the intrinsic electronic properties of pristine WS2 alone are not sufficient to explain the observed high sensitivity towards H2S. A major role in this behavior is also played by O doping in the S sites of the WS2 lattice. The results of the present study open up new avenues for the use of transition metal disulfide nanomaterials as effective alternatives to metal oxides in future applications for industrial process control, security, and health and environmental safety. Introduction Gases with different properties, origins, and concentrations are pervasive in our environment. Some of these gases are highly toxic and hazardous, while others are essential for life or indicators of health status. Accordingly, sensors for gas detection and monitoring are needed in various sectors such as environmental protection, industrial process monitoring and safety, amenity, energy saving, health, and food industries [1]. Metal oxide semiconductors stand out as the most common active sensing materials used in practical devices. Grain size reduction to the nanometer range and relatively easy sensitization to various analytes (via either lattice doping with anions and cations or surface decoration with metals and metal oxides) have emerged as key strategies to improve their detection properties [2,3]. The main drawbacks of current metal oxide-based devices are associated with their typically high operating temperatures (several hundreds of degrees) and limited use in environments containing even traces of sulfur and sulfides, because of the easy poisoning of the catalytically active surfaces involved in the adsorption and sensing of analytes [4]. In principle, other types of gas sensors whose operation is based on, e.g., gas ionization [5] or changes in the optical properties of waveguides [6], may represent alternative solutions; however, their typical detection limits are at most a few parts per million of analyte. For this reason, and to further improve the existing sensing performance of resistive-type gas sensors, significant efforts are being directed towards developing new classes of materials that are less susceptible to poisoning, such as carbon nanotubes [7], graphene [8], conducting polymers [9], and transition metal dichalcogenide (TMD) nanostructures [10], to replace metal oxides. Among the new types of nanoscopic sensors being studied, layered transition metal dichalcogenide (MX 2 , M = Mo, W; X = S, Se) nanostructures have recently attracted significant interest. Often compared to graphene and other two-dimensional (2D) nanomaterials, their properties present distinct advantages for electronic, optical, and electrochemical sensors [11]. 
Recent studies, mostly focused on MoS 2 [12][13][14][15] and WS 2 [16,17], highlighted the gas sensing applications associated with the large specific surface area of the TMD materials, particularly of those with a mono- or few-layered microstructure. Previous studies suggest that the working mechanism of TMD gas sensors involves charge transfer-based conductance modulation [11,13,18]. Since the electron density is dependent on the dimensions of the crystal lattice, it is reasonable to expect that the gas sensing properties are also influenced by the dimensions and microstructure of the sensing material. This motivated our very recent investigation of the gas sensing properties of nanowire-nanoflake hybrid WS 2 nanostructures [19]. Therefore, in this work, the gas sensing characteristics of WS 2 nanowire-nanoflake hybrid materials were investigated using a simple two-terminal Taguchi-type sensor arrangement, operating at moderate temperatures (30 and 200 °C). The sensors were exposed to five different analytes (H 2 S, CO, NH 3 , H 2 , and NO buffered in air) and their response, sensitivity, and recovery properties were assessed. The goal was to obtain experimental evidence supporting previous gas sensitivity data predicted from modeling studies, and to compare the experimental measurements to more recent molecular dynamics simulation results. We show that the WS 2 nanowire-nanoflake hybrids have particularly high sensitivity towards H 2 S, allowing detection and quantification of analyte concentrations on the order of parts per billion, competing with other known materials such as Fe 2 O 3 nanochains and nanoparticles [20,21], CuO-SnO 2 [22], CuO nanoparticles [23] and nanosheets [24], mesoporous WO 3 [25], CeO 2 nanowires [26], and PbS quantum dots [27] (for a comprehensive list, see Table S1 in the Electronic Supplementary Material (ESM)).
Experimental and modeling details
The WS 2 nanohybrids were synthesized according to the procedures detailed in our previous report [19]. In brief, WO 3 nanowires were first synthesized by hydrothermal recrystallization of Na 2 WO 4 ·2H 2 O in an acidic environment at 180 °C under autogenic pressure, and then sulfurized at 800 °C for 10 min in sulfur vapor to convert them to WS 2 nanowire-nanoflake hybrids. The microstructure of the as-obtained materials was determined by field emission scanning electron microscopy (FE-SEM, Zeiss Ultra Plus) and transmission electron microscopy (TEM, JEOL JEM-2200FS). The chemical surface composition of the nanomaterials was assessed by a Kratos Axis Ultra X-ray photoelectron spectrometer (with a monochromatic Al Kα source operated at 150 W) equipped with a delay line detector and a charge neutralizer. The spectra were processed with the Kratos software. Two-terminal Taguchi-type resistive gas sensors were fabricated on test chips with Pt/Ti (300 nm/45 nm) microelectrodes on a Si/SiO 2 substrate (a 300 nm-thick thermal oxide deposited on B-doped p+-Si). The electrode patterns were defined by optical lithography using the lift-off method. To prepare the sensor devices, a small amount (~ 20 mg) of the WS 2 nanowire/nanoflake hybrid powder was dispersed in 10 mL acetone by ultrasonication, then deposited between the platinum electrodes by drop casting, and finally dried under ambient conditions for 1 day before the gas sensing measurements.
The amount of drop-cast material was chosen in such a way as to produce a reasonably conductive active layer, with typical resistance values between 3 and 100 MΩ at room temperature. The gas sensing measurements were performed by testing the chips in a Linkam TMS 94 gas flow chamber. The sample resistance was measured at a 1 V bias, using a computer-controlled Hewlett-Packard 3458A multimeter. The sensor devices were tested at 30 and 200 °C on air-buffered analytes (H 2 S, CO, NH 3 , H 2 , and NO) with nominal concentrations between 1 and 600 ppm, using an MKS Type 247 mass-flow controller. Some of the sensors were also tested on sub-ppm concentrations (between 10 and 1,000 ppb) of H 2 S at 200 °C. In addition, we performed first-principles calculations based on density functional theory (DFT), using the generalized gradient approximation (GGA) exchange-correlation functional of Perdew, Burke, and Ernzerhof (PBE) [28], as implemented in the Quantum Espresso code [29]. The important role of van der Waals (vdW) interactions in the layered structure of WS 2 was accounted for by including Grimme's semiempirical dispersion correction (D2), which describes the vdW interactions by a pairwise force field [30]. We used norm-conserving Goedecker-Hartwigsen-Hutter-Teter pseudopotentials [31], whereas the electronic wavefunctions were expanded in a plane-wave basis set, with energy cutoffs of 90 and 360 Ry for the wavefunctions and the charge density, respectively. A 10×10×1 Monkhorst-Pack [32] grid was used for k-point sampling within a single unit cell of bulk WS 2 . The total energy and the forces on each atom were converged to within 1 mRy/atom and 0.2 meV, respectively, thus allowing us to obtain a quantitative description of the sensing behavior of the 2D-based materials. Furthermore, 4×4 slabs were built to simulate the adsorption of an individual molecule of H 2 S on both single-layer (32 S and 16 W atoms) and bilayer (64 S and 32 W atoms) WS 2 , to take into account the possible intercalation of gas molecules between the adjacent layers. Due to the periodic boundary conditions, this geometry corresponds to 6.28 × 10^13 molecules/cm^2 in the monolayer. The distance between two neighboring gas molecules was thus larger than 12 Å. In order to avoid unphysical interactions between images along the non-periodic direction, the distance between monolayers was set to ~ 20 Å, with a dipole correction layer. Integrations over the Brillouin zone were carried out using a regular mesh of 3×3×1 k-points for the structural relaxation of the slab, and at the zone center for a single molecule. A denser mesh of 6×6×1 k-points was used for the density of states (DOS) and Bader charge transfer calculations [33]. The calculations were also extended to study the adsorption of H 2 , CO, NO, and NH 3 gases on the WS 2 monolayer; spin-polarized calculations were performed in the case of NO adsorption.
Results and discussion
The hydrothermally synthesized WO 3 samples were composed of precisely oriented nanowires, as revealed by the scanning and transmission electron microscopy measurements (Fig. 1(b)). Based on the high-resolution transmission electron microscopy (HRTEM) image in Fig. 1(c) and the selected area electron diffraction (SAED) pattern, the spacing of the lattice fringes was found to be 0.39 nm, which can be indexed as the (001) plane of hexagonal WO 3 . This indicates that the growth of the nanowires occurred along the [001] direction.
Results and discussion

The hydrothermally synthesized WO3 samples were composed of precisely oriented nanowires, as revealed by the scanning and transmission electron microscopy measurements (Fig. 1(b)). Based on the high-resolution transmission electron microscopy (HRTEM) image in Fig. 1(c) and the selected area electron diffraction (SAED) pattern, the spacing of the lattice fringes was found to be 0.39 nm, which can be indexed as the (001) plane of hexagonal WO3. This indicates that the growth of the nanowires occurred along the [001] direction.

After sulfurization, the morphology and other characteristics of the as-formed WS2 nanowire/nanoflake structure (Fig. 1(d)) were identical to those reported in our previous work. The hybrids consisted of elongated rod-shaped structures with partially peeled surfaces, thus forming nanoflakes (Fig. 1(e)) that are an integral part of the original nanorod [19]. The length of the hybrid structures is around 1-3 μm. Their core has a typical diameter of 200-300 nm, whereas the partially peeled flakes are ~ 200 nm in diameter and 10 nm in thickness (Fig. 1). The high-resolution image of a single-crystal 2D WS2 nanoflake (Fig. 1(f)) reveals crystal planes with a spacing of 0.27 nm, which can be indexed as the (100) plane of hexagonal WS2.

Resistive gas sensor measurements of the WS2 hybrid materials dispersed on the test chips indicated p-type semiconducting behavior, as the resistance increased for reducing gases (H2S, CO, NH3, and H2) and decreased for nitric oxide, which is an oxidant (Fig. 2). In simple terms, the sensing mechanism is based on the localization of positive charge in the lattice by surface-adsorbed electron-donor molecules (i.e., the decrease in conductance caused by reducing chemicals) and vice versa (i.e., the increased conductance caused by electron acceptors on the surface, which induce p-type doping in WS2). The results are consistent with sensing mechanisms recently proposed for TMDs [11, 13, 18]. The sensing performance of the present materials, however, is superior to that measured for WS2 thin films [17] and comparable to the performance of MoS2 and WS2 flakes [10-14]. It is worth noting here that the parent WO3 nanowires are n-type semiconductors that display an opposite sensing response to the same gas molecules [34].

The sensors show a very rapid response to all gases examined. The typical time constants are between 1 and 2 min at 200 °C, and slightly higher (~ 3-6 min) at lower temperatures. Since the gas response reflects the gas-solid interfacial equilibrium at the surface of WS2, the faster response at higher temperatures is expected, as a result of the increased reaction rates. A similar behavior was also found for the sensor recovery: faster gas desorption and re-establishment of the air-WS2 equilibrium at the interface are the main drivers of the rapid sensor recovery at higher temperatures. The sensor response to the analytes shows a nearly linear dependence on the gas concentration, with a slight decay in sensitivity at higher concentrations. The sensitivity to H2S is particularly high (0.023 ppm^-1) compared to the other analytes (Fig. 3), which prompted us to carry out further tests on the WS2-H2S system. Exposing the sensor to sub-ppm H2S levels revealed that the detection limit is as low as ~ 20 ppb at 200 °C, with a corresponding sensitivity of 0.043 ppm^-1.

To rationalize the present experimental findings and understand how H2S interacts with WS2, we carried out ab initio calculations using a simplified model involving single- and bilayer pristine/undoped WS2.

[Figure 3 caption fragment: the data point labeled with an asterisk denotes the sensitivity (0.043 ppm^-1) measured at 20 ppb H2S.]

The calculations of the adsorption of H2, CO, NO, and NH3 were based on the work of Zhou et al. [18], who used first-principles simulations to determine the most favorable geometries of these molecules on monolayer WS2.
The same approach was followed in this work to estimate the adsorption energy of H2S on WS2 (see details in Table S2 in the ESM). The relaxed structure of H2S adsorbed on the monolayer is shown in Fig. 4. The adsorption energy was evaluated as E_ads = E_T[WS2 + X] - E_T[WS2] - E_T[X], where E_T[WS2 + X], E_T[WS2], and E_T[X] are the total energies of the supercell with the adsorbed molecule, the clean WS2 slab, and the isolated molecule, respectively, in their optimized configurations. The calculated energies show that the adsorption of NO (-509.3 meV) is the most favorable among the systems investigated, followed by H2S (-181 meV) and NH3 (-171.7 meV). The adsorption energies of H2 (-57.4 meV) and CO (-84.7 meV) are considerably smaller. The adsorption energies thus follow a similar qualitative trend as the experimental sensitivities, with the exception of NO (see Fig. 3(b)). This trend is also in agreement with previous calculations based on the local density approximation (LDA) [18]. The negative sign of the calculated adsorption energies indicates that all adsorption processes are thermodynamically favorable; moreover, larger absolute values denote stronger binding to the slab. The present adsorption energies show some correlation with our experimental results on multilayered WS2, which revealed good sensitivity towards H2S (2.3 × 10^-2 ppm^-1), ammonia (7.4 × 10^-3 ppm^-1), and NO (4.6 × 10^-3 ppm^-1), along with moderate sensitivity towards CO (2 × 10^-3 ppm^-1) and H2 (2.3 × 10^-4 ppm^-1). Furthermore, the calculated E_ads of H2S on the WS2 bilayer is slightly larger in magnitude (-246.2 meV) than that calculated for the monolayer. This may suggest a qualitatively higher sensitivity towards H2S in the case of the nanoflakes, compared with the monolayer. We also explored the possible intercalation of gas molecules between adjacent WS2 monolayers; however, our simple computational model predicts that the adsorption process is not energetically favorable in this case, owing to the small interplanar distance between atoms in the adjacent layers (~ 3.1 Å) and the resulting repulsion between the S atoms belonging to the slab and the molecule. Nevertheless, we cannot rule out this possibility in the experiments, because structural defects in the material can provide pathways for the intercalation of gas molecules.

Electronic structure calculations were then carried out to understand the effects of the gas molecules on the electronic properties of WS2. The band structure and density of states calculated for the H2S molecule adsorbed on a WS2 monolayer, shown in Fig. 5 (the results for the other gas molecules are available in Figs. S6-S9 in the ESM), are consistent with those reported in Ref. [18]. As shown in Fig. 5(a), the H2S molecule has little influence on the band structure of WS2 in the combined system, which remains almost the same as that of the pristine layer. The highest occupied molecular orbital (HOMO) of H2S (green line in Fig. 5(a)) is located on the 2p orbital of the S atom and lies ~ 0.4 eV below the top of the valence band, as evidenced by a peak in the DOS curve (green line in Fig. 5(b)). The lowest unoccupied molecular orbital (LUMO) of H2S lies approximately 2.9 eV above the top of the conduction band (not shown). Therefore, the H2S molecule does not provide donor or acceptor states within the band gap of the WS2 layer. The same behavior is observed in the case of H2S adsorbed on bilayer WS2 (Figs. 5(c) and 5(d)) and also for the other gases, with the exception of NO.
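As a quick numerical cross-check of the energy bookkeeping defined above, the following is a minimal Python sketch (not code from the paper) that computes E_ads from total energies and ranks the adsorption energies reported in the text. The total energies in the function call would come from the DFT runs; only the reported E_ads values below are taken from the paper.

    # Minimal sketch: adsorption-energy bookkeeping for molecule X on a WS2 slab.
    # E_ads = E_T[WS2 + X] - E_T[WS2] - E_T[X]; negative values mean favorable binding.

    def adsorption_energy(e_slab_plus_mol: float, e_slab: float, e_mol: float) -> float:
        """Return E_ads in the same units as the inputs."""
        return e_slab_plus_mol - e_slab - e_mol

    # Adsorption energies on monolayer WS2 (meV), as reported in the text:
    e_ads_mev = {"NO": -509.3, "H2S": -181.0, "NH3": -171.7, "CO": -84.7, "H2": -57.4}

    # Rank by binding strength (most negative first), mirroring the discussion:
    for gas, e in sorted(e_ads_mev.items(), key=lambda kv: kv[1]):
        print(f"{gas:>4s}: {e:8.1f} meV")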
In particular, our results show that only the NO molecule introduces three impurity levels within the band gap of WS2, behaving as an acceptor (Fig. S8 in the ESM), in good agreement with the experiments. Moreover, a Bader analysis was performed to study the local charge transfer associated with the adsorption of the gas molecules on the pristine WS2 monolayer (Table S2 in the ESM). The results for H2, CO, NH3, and NO are in good agreement with previous data [18]. Accordingly, no sizable charge transfer is observed for H2 and CO, and only a very small amount of charge is transferred in the case of NH3 and NO. The NH3 molecule acts as a donor by transferring 0.017 electrons to the monolayer, whereas the NO molecule acts as an acceptor, capturing 0.008 electrons. Similar to H2 and CO, H2S also exhibits very limited charge transfer to the pristine WS2 slab, which indicates that the excellent sensitivity of the hybrid material towards H2S molecules observed in the experiments may have a different origin than direct charge transfer within the clean monolayer.

The minor charge transfer observed for the most stable configuration is also compatible with the orbital-mixing charge transfer theory. As discussed earlier, the mechanism of charge transfer between an adsorbate and a surface is partially governed by the mixing of the molecular HOMO and LUMO with the orbitals of the monolayer [35]. In the most favorable configuration of H2S on pristine WS2, the H atoms (LUMO) are oriented towards the monolayer, accepting electronic density from it. On the other hand, the HOMO of the molecule, located on the S atom, provides a slightly lower charge density to the slab because it is placed further away, resulting in a very small net charge transfer in the full system. We also investigated the effect on the charge transfer of an H2S molecule with the opposite orientation, i.e., with the H atoms pointing away from the slab [35]. As expected, the direction of the charge transfer was inverted, and the gas molecule could be considered a donor. This possibility cannot be ruled out for some of the adsorbates under the actual experimental conditions. However, in the case of the pristine WS2 monolayer, this configuration is clearly energetically unfavorable.

In order to identify any change in the surface chemistry of the materials during the gas sensing experiments that may affect their sensing response, we carried out X-ray photoelectron spectroscopy (XPS) measurements of the WS2 hybrid nanostructures. Three samples were studied (Table 1): the original WS2 hybrids (sample #1), the hybrid materials exposed to air at 200 °C for 1 h (sample #2), and those subsequently also exposed to 1 ppm H2S in air at 200 °C for 1 h (sample #3). The S 2p3/2 and O 1s peaks at around 162.3 and 530.7 eV (associated with WS2 and WO3, respectively) indicate the partial substitution of surface sulfur with oxygen when the WS2 hybrid is heated in air. In addition, a new peak simultaneously appearing at 168.9 eV suggests the additional formation of sulfates, e.g., by the oxidation of the sulfide accompanied by the reduction of O2, alongside the formation of WO3. On the other hand, when the sample is exposed again to H2S (1 ppm in air), the sulfur concentration of the surface recovers its original value, despite the very low concentration of the analyte.
According to a recent report, for 2D WS2 nanoparticles the replacement of S with O occurs even at room temperature, resulting in the formation of an amorphous/crystalline WO3 phase in the nanoflakes below/above 250 °C [36]. However, the very efficient and fast recovery of the WS2 structure even after a short treatment in 1 ppm H2S is unexpected and has important consequences for the sensing of H2S in the presence of O2 (e.g., in ambient air). As mentioned above, WO3 is an n-type semiconductor, whereas WS2 displays p-type semiconducting behavior. Accordingly, S doping in the WO3 lattice or O doping in WS2 is expected to cause significant changes in the electronic band structure and consequently alter the carrier localization upon adsorption of analyte gas molecules on the surface. Therefore, the H2S sensing process may be interpreted as a sequence of reversible adsorption/desorption and redox reactions, in which oxygen and sulfur compete for the anionic sites in the WS2 lattice, thus modulating its band structure and electrical behavior, as schematically illustrated in Fig. 6.

In summary, our results indicate that the WS2 nanohybrid possesses all the key features (similar to other one-dimensional (1D) or 2D TMDs) needed for an effective gas sensor material, including a semiconducting nature and electrical transport properties adjustable via gas adsorption or (as observed in this study) lattice doping. Nevertheless, what probably renders the nanostructured hybrid superior to its 1D or 2D counterparts is the combination of the beneficial properties of nanowires and nanosheets. In particular, long 1D nanomaterials are preferred when electrically conductive networks need to be created between macroscopic electrodes. Long nanowires can establish percolation channels via a much lower number of interparticle connections than small zero-dimensional (0D) nanoparticles or 2D sheets [34, 37, 38]. This is particularly important, since during the electrical measurements the interfacial transport might dominate and suppress the effects of gas adsorption on the nanoparticles. On the other hand, small and thin 2D nanosheets have the ability to adsorb a significant fraction of surface-bound analytes not only on the 2D crystal facets but also on their edges [39], which may contribute to enhancing the sensitivity of the 2D crystals relative to other nanoparticles lacking edge adsorption sites.

Conclusions

The WS2 nanowire-nanoflake hybrid materials investigated in this study were found to possess electrical properties highly sensitive to H2S, along with moderate sensitivity to other analytes such as H2, CO, NH3, and NO in an air buffer. First-principles calculations based on the DFT-GGA approach with the PBE functional show that the intrinsic electronic properties of pristine WS2 cannot fully explain the observed high sensitivity towards H2S, whose origin might also be attributed to defects and/or doping in the WS2 lattice. To test this possibility, we carried out XPS measurements on pristine and modified WS2 nanowire-nanoflake hybrids and found that O2 present in the environment during the sensor measurements induces a partial (and reversible) substitution of S by O at the anionic sites of the lattice, which may explain the high sensitivity and selectivity of the WS2 nanowire-nanoflake heterostructures towards H2S.
INTERCOMPARISON OF GAMMA CELL 220 IRRADIATOR FACILITIES AND DR. MIRZAN T RAZZAK GAMMA IRRADIATORS USING HARWELL DOSIMETERS

The gamma irradiator is a multi-purpose facility that can be used to preserve food, sterilize medical equipment, and conduct genetic engineering and polymerization processes, during which the absorbed dose of the product is critical. Product quality assurance is standardized by IAEA Technical Document No. 409 on Dosimetry for Food Irradiation, and by ISO 14470 and ISO 11137-3, which cover food irradiation and guidance on the dosimetric aspects of development, validation, and routine control, respectively. The absorbed dose is influenced by the movement of the product relative to the source, its position, the source activity of the facility, and the dose rate in the irradiation room. The dosimeter performance test and quality assurance of the system were conducted using the facility intercomparison technique, in which the same dosimeter (measuring instrument) is tested at two different facilities to determine its performance. In this study, two irradiation facilities were tested using Harwell routine dosimeters in the dose range of 1 kGy to 30 kGy at 20 dose points. The results showed that the highest deviations reached 19% and 21% at the Gamma Cell 220 and the Dr. Mirzan T Razzak gamma irradiator facilities, respectively. These results characterize the performance of the dosimeters and help determine the precision and accuracy of the dose-measuring instruments.

INTRODUCTION

There are four categories of gamma irradiators: I, II, III, and IV [1]. These nuclear installations use gamma radiation from a radioactive source, Cobalt-60 (Co-60), which, upon targeting a material, can sterilize medical equipment or preserve food. In Indonesia, the use of gamma irradiation is regulated in the Head Regulation of the Drug and Food Supervisory Agency on Food Packaging Supervision Number 3 of 2018 and in IAEA Technical Document Number 409 on Dosimetry for Food Irradiation, which explains product standards, facilities, and quality assurance (the irradiated-dose dosimetry system). From the 1950s to the early 2000s, there was rapid growth in irradiator development, and this is expected to happen again from 2020, given the great potential benefits of gamma irradiators. Furthermore, the need for irradiation is supported by FAO data, which show that 25% of agricultural and food products are damaged by insects, bacteria, and other pests after harvest, and 40% of fruit products are lost before reaching the market. In the irradiation process, quality assurance of irradiated products is fundamental, and it is ensured by administering the correct radiation dose to each product/sample. Therefore, verification of the dosimetry system in an irradiation facility is critical. A dosimeter is a tool that measures the irradiation response [1], and dosimeters can be grouped into primary, reference, transfer, and routine categories. In previous studies, the intercomparison process was performed using primary and secondary standard dosimeters at the international reference laboratory scale. However, this study used a routine standard dosimeter and aimed to assess the performance of the irradiator facility. The goal was to study routine dosimetry on category I gamma irradiators using a routine dosimeter, to guarantee that every irradiation process complies with quality standards.

THEORY

A gamma irradiator is a nuclear installation used to deliver radiation intentionally and in a measurable way.
Irradiators are classified into four categories: I, II, III, and IV [1]. In category I, the source is stored in a dry container made of solid material and remains stationary (idle) while the sample to be irradiated moves. Figure 1 shows a category I gamma irradiator. Cobalt-60 is utilized because it has a relatively long half-life (5.27 years) and is not easily soluble in water; it is therefore safe to store the Cobalt-60 source in a pool of water [6].

The International Organization for Standardization (ISO) is a community/association of experts joined to set standards for processes applicable in various countries. ISO 14470 (validation and routine control of food irradiation) and ISO 11137 (validation and routine control of the irradiation of healthcare products) apply to a facility, with clauses covering the physical facility, source safety, security monitoring, traceability, dose monitoring, and protection of irradiated products [2]. ISO 11137 also describes what is required to develop, validate, and run routine control of the sterilization process for health products [3]. Its Part 3 outlines how a dose is measured, how the maximum acceptable dose is determined, how the sterile dose is established, how installation and performance are assessed for qualification, and how routine monitoring and control are performed [5]. In the control process, a dosimeter is needed to measure the obtained dose.

There are several levels of irradiation dosimetry systems according to their uncertainties [1]. Primary dosimeters are used by national standards laboratories, where calorimeters and ionization chambers serve as the standards and are calibrated only once during the instrument's lifetime, while reference dosimeters are calibrated against a primary standard and are used to calibrate lower-level ones. The transfer dosimeter serves as a bridge between an accredited calibration laboratory and an irradiation facility and ensures dosimeter traceability. In general, routine dosimeters are used to map doses and run routine control. The irradiation dosimetry technique measures the absorbed dose of the product, which is the amount of ionizing radiation energy deposited per unit mass of a particular material. In SI units, it is expressed in J/kg, where an absorption of 1 J/kg is equivalent to 1 gray (Gy). The density of the irradiated product/sample and the positions of the maximum and minimum doses can be determined using a dosimeter experiment on a test material of a certain density or through computation. Although the experimental method is expensive and time-consuming, its results are close to the actual values; for computation, the Monte Carlo method is the most widely used [7].

METHOD

Irradiation was conducted at two facilities, specifically the Dr. Mirzan T Razzak Gamma Irradiator in Yogyakarta and the Gamma Cell 220 in Jakarta, using Harwell Amber and Red Perspex dosimeters. The dose rate, source activity during irradiation, humidity, and room temperature readings were recorded. A total of two Harwell Amber and one Red Perspex dosimeters were placed in the same holder, called a gamma phantom, which created identical conditions for all of them. Dosimeter measurements were read on a UV-Vis spectrometer. The Harwell Amber dosimeter was measured at wavelengths of 603 nm and 651 nm, while the Red Perspex one was measured at 640 nm. After taking the readings at each wavelength, the thickness of each dosimeter was measured in cm. By dividing the absorbance by the thickness, the specific absorbance was determined and then converted to an absorbed dose value based on the calibration table; a minimal sketch of this read-out arithmetic is given below.
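The following Python sketch illustrates the read-out procedure just described: absorbance divided by thickness gives the specific absorbance, which is then converted through a batch-specific calibration table. All numbers are hypothetical placeholders, not values from the paper's calibration.

    import numpy as np

    # Hypothetical calibration table for one dosimeter batch:
    # specific absorbance (1/cm) vs. absorbed dose (kGy).
    cal_dose_kGy = np.array([1.0, 5.0, 10.0, 20.0, 30.0])
    cal_spec_abs = np.array([0.8, 3.5, 6.9, 13.2, 18.8])   # placeholder values

    def absorbed_dose(absorbance: float, thickness_cm: float) -> float:
        """Convert a raw absorbance reading to a dose (kGy) via the calibration table."""
        specific_abs = absorbance / thickness_cm           # specific absorbance, 1/cm
        return float(np.interp(specific_abs, cal_spec_abs, cal_dose_kGy))

    # Example: one Harwell Amber reading at 603 nm (hypothetical numbers).
    dose = absorbed_dose(absorbance=2.1, thickness_cm=0.30)
    print(f"absorbed dose ~ {dose:.1f} kGy")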
RESULTS AND DISCUSSION

Irradiation was carried out at the two irradiator facilities, specifically the Gamma Cell 220 irradiator in Jakarta and the Dr. Mirzan T Razzak irradiator in Yogyakarta, using Harwell Amber and Red Perspex dosimeters made in England. The Harwell Amber dosimeter has a measurement range of 1-30 kGy, while the Red Perspex covers 5-50 kGy. Harwell Amber measurements were read on a UV-Vis spectrometer at a wavelength of 603 nm in the dose range of 1-15 kGy and at 651 nm for 16-30 kGy; Red Perspex used a wavelength of 640 nm for 5-50 kGy. In the first step, the specific absorbance response was analyzed as a function of dose, as shown in Figure 3. The regression coefficient is R = 1 for the 603 nm curve, R = 0.9983 for the 651 nm curve, and R = 0.9597 for the 640 nm curve in the dose range of 1-30 kGy. This shows that the specific absorbance value is linear with the absorbed dose.

A preliminary study on the performance of the Harwell dosimeters was conducted on 27 August 2020 at the Gamma Cell 220 irradiator facility in Jakarta, in which a dose rate of 4134 Gy/hour and a Co-60 source activity of 5544 Ci were used for irradiation. Figure 4 shows the experimental absorbed-dose response of the dosimeters. As shown in Figure 4, the Harwell Amber readings at wavelengths of 603 nm and 651 nm had errors of 2% to 11%, and the response at 640 nm had errors of 1% to 5%. Based on these initial results, an intercomparison was made at the Dr. Mirzan T Razzak facility in Yogyakarta on 17 September 2020 at a dose rate of 3400 Gy/hour, a Co-60 activity of 8224 Ci, a temperature of 28 °C, and 59% humidity. The comparison used two Harwell Amber and one Red Perspex dosimeters. Furthermore, irradiation was conducted in a dose range of 1-30 kGy with 20 dose points, determined from the number of decades in the range to obtain logarithmically spaced dose points, as shown in Table 2. Irradiation was carried out using the gamma phantom to ensure that the temperature and humidity conditions remained homogeneous. The results of the irradiation at the Dr. Mirzan T Razzak facility are shown in Figure 5. Based on the graph, the 651 nm measurements erred in the range of 1% to 21%, and the large variations could have resulted from the distance between irradiation and measurement or from humidity factors. The error ranges at the wavelengths of 603 nm and 640 nm were 3% to 11% and 16% to 27%, respectively.

The second experiment with the same method was conducted at the Gamma Cell 220 irradiator facility in Jakarta, and Figure 6 shows the results. The facility dose rate was 4050 Gy/hour with a Co-60 activity of 5441 Ci on October 12, 2020, while the dosimeter measurement temperature and humidity were 28 °C and 59%, respectively. The results showed that the error ranges at wavelengths of 651 nm, 603 nm, and 640 nm were 11% to 31%, 7% to 21%, and 2% to 11%, respectively. The error variation could be influenced by the humidity, the irradiation temperature, and the distance between irradiation and measurement. A short sketch of how such facility-to-facility deviations can be computed is given below.
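As a minimal sketch of the intercomparison arithmetic implied above, the percentage deviation of each measured dose from the nominal dose can be computed per facility. All readings below are hypothetical placeholders, not the measured data behind Figures 4-6.

    import numpy as np

    # Hypothetical nominal doses (kGy) and the doses read back at two facilities.
    nominal   = np.array([1.0, 3.0, 10.0, 30.0])
    facility1 = np.array([1.1, 3.2,  9.4, 33.5])   # e.g., Gamma Cell 220
    facility2 = np.array([0.9, 3.3, 11.2, 27.8])   # e.g., Dr. Mirzan T Razzak

    def percent_deviation(measured, reference):
        """Absolute percentage deviation of the measured dose from the reference."""
        return 100.0 * np.abs(measured - reference) / reference

    for name, meas in (("facility 1", facility1), ("facility 2", facility2)):
        dev = percent_deviation(meas, nominal)
        print(f"{name}: deviations {np.round(dev, 1)} %, max {dev.max():.1f} %")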
Harwell Amber and Red Perspex are clear/colorless PMMA (polymethyl methacrylate) dosimeters that capture radiation, which changes the absorbance of the PMMA material. This absorbance change was used to obtain the specific absorbance value (absorbance per unit thickness, in cm), which may be affected by high humidity or by temperature changes that cause condensation on the material. Furthermore, if the distance and time between irradiation and measurement are long, the radiation-induced signal trapped in the PMMA material fades, affecting the specific absorbance. The specific absorbance was converted into the absorbed dose based on the calibration table, which is batch-specific because each PMMA dosimeter production batch differs.

CONCLUSION

The experiment was conducted at two irradiator facilities using two Harwell Amber and one Red Perspex dosimeters. The Red Perspex dosimeter gave a more stable response in the 5-30 kGy dose range, which may be attributed to the use of a single measurement wavelength (640 nm). Therefore, the consistency of its absorbed dose values was better than that of the Harwell Amber dosimeter. However, further studies on the performance of Harwell Amber and Red Perspex as routine dosimeters are needed.
Association of TP53 rs1042522 G > C, MDM2 rs2279744 T > G, and miR-34b/c rs4938723 T > C polymorphisms with aneuploidy pregnancy susceptibility

Background: Aneuploidy pregnancy is a severe major birth defect and causes about 50% of spontaneous miscarriages, with unknown etiology. To date, only a few epidemiological studies with small sample sizes have investigated the risk factors for aneuploidy pregnancy. The TP53, MDM2, and miR-34b/c genes are implicated in tumorigenesis with aneuploidy, yet the role of their polymorphisms in aneuploidy pregnancy susceptibility remains to be clarified.

Objective: To elucidate the association of the TP53 rs1042522 G > C, MDM2 rs2279744 309 T > G, and miR-34b/c rs4938723 T > C polymorphisms with aneuploidy pregnancy.

Methods: In this retrospective case-control study, 330 women with aneuploidy pregnancies and 813 normal pregnancy controls were recruited between January 2018 and April 2022 at the First People's Hospital of Yunnan Province, Kunming, China. Three functional polymorphisms, TP53 rs1042522 G > C (Arg72Pro), MDM2 rs2279744 309 T > G, and miR-34b/c rs4938723 T > C, were genotyped using the SNaPshot method.

Results: The frequency distributions of the three genotypic variants did not differ between case and control pregnant women and were consistent with Hardy-Weinberg equilibrium (HWE). However, in the younger subgroup (less than 35 years old), a significant difference was detected in the allele and recessive models (p = 0.01). In the advanced-age subgroup (35 years or older), the G allele of MDM2 rs2279744 T > G showed a significantly higher frequency in cases than in controls (p = 0.045), and miR-34b/c rs4938723 T > C showed a significant difference under the dominant model (p = 0.03), but no significant differences were observed in the other models in either subgroup (p > 0.05). These results suggest that the individual polymorphisms were not associated with aneuploidy pregnancy but that, combined with age, they may serve as risk factors for aneuploidy pregnancy.

Conclusion: The combination of the TP53 rs1042522 G > C, MDM2 rs2279744 T > G, and miR-34b/c rs4938723 T > C polymorphisms with maternal age may be related to aneuploidy pregnancy susceptibility. These findings may help elaborate the genetic etiology of aneuploidy pregnancy.

Supplementary Information: The online version contains supplementary material available at 10.1186/s12884-023-05945-3.

Introduction

Embryonic/fetal chromosomal number abnormalities are detected in about 50% of early spontaneous abortions, and aneuploidy is the most common abnormality among birth defects [1]. However, an accurate understanding of the genetic etiology of aneuploidy pregnancy is still lacking; such an understanding could provide valuable information for medical management, reproductive genetic counseling, and supportive patient care [2]. Beyond the established role of advanced maternal age, the maternal genetic background might partake in regulating the formation and continued development of a pregnancy.
The TP53 protein product, p53, is an essential tumor suppressor regulating several fundamental cellular activities, such as cell cycle regulation, senescence, cellular stress responses, apoptosis, and DNA damage repair [3]. TP53 gene downregulation triggers somatic chromosomal instability [4]. However, the mechanism underlying the cellular intolerance of nondiploid genomes is yet unknown. Mutations in TP53 pathway genes are a potential risk factor for human aneuploidy; TP53 protects diploid cells and limits the proliferation of aneuploid cells in humans [4, 5]. TP53 plays a role in female reproduction by maintaining germ-cell integrity and various steps of reproduction in mice and humans [6]. Furthermore, TP53 is involved in the maintenance of spindle stability during human meiosis and embryonic development [7]. A vital TP53 gene polymorphism, c.215G > C (rs1042522), is related to the accumulation of aneuploid cells. TP53 rs1042522 G > C is located at codon 72 and leads to a transversion that changes arginine (Arg) to proline (Pro), thereby altering the structure and function of TP53 [8], impairing apoptosis [9], and possibly promoting the continued development of aneuploid cells [5, 7]. Wild-type TP53 is regulated by mouse double minute 2 (MDM2) via a negative feedback regulatory loop, whose expression level within cells plays a critical role and yields differential outcomes [10, 11]. The MDM2 rs2279744 T > G polymorphism, a thymine-to-guanine change in the MDM2 promoter region, upregulates MDM2 mRNA and protein levels and thereby affects the TP53 level. Accumulating evidence has shown that TP53 regulates the expression of various microRNAs [12]. miR-34 family members exhibit critical functions that have been widely studied. In mammals, the miR-34 family consists of three members: miR-34a, miR-34b, and miR-34c. miR-34a is encoded by a unique DNA sequence, whereas miR-34b and miR-34c are derived from the same primary transcript (pri-miR-34b/c) [13]. The common functional polymorphism rs4938723 T > C is located in the promoter of miR-34b/c. Its transition from T to C may alter the binding of the GATA-X transcription factor to pri-miR-34b/c [14], and its contribution to cancer susceptibility has been illustrated in several studies [15-18].

Given their pivotal roles in cancer, recurrent pregnancy loss susceptibility, and trisomy 21 [16-21], three important functional polymorphisms were selected: TP53 rs1042522 G > C (Arg72Pro), MDM2 rs2279744 T > G, and miR-34b/c rs4938723 T > C. Given the uncertain cause of aneuploidy pregnancy, we conducted a case-control study to evaluate the associations of the TP53 c.215G > C (rs1042522), MDM2 rs2279744 T > G, and miR-34b/c rs4938723 T > C polymorphisms, and their interaction, with aneuploidy pregnancy, to explore the potential genetic etiology. This study may thus help establish a biomarker panel for prenatal prediction and a prognosis model for aneuploidy pregnancy.

Subjects

This case-control study enrolled 1143 women after a prenatal diagnosis procedure, including 330 pregnant women with a fetal chromosome aneuploidy and 813 age-matched normal female controls undergoing prenatal diagnosis in the Department of Medical Genetics at The First People's Hospital of Yunnan Province, Kunming, China, between January 2018 and April 2022. Written informed consent was obtained before participation in the study. The study protocol was approved by the ethics committee of the First People's Hospital of Yunnan Province. The study was performed in accordance with the Declaration of Helsinki.
Fetal chromosome abnormality cases were diagnosed as aneuploidy by karyotype analysis and as structural abnormalities by the copy number variation (CNV) method after standard prenatal diagnosis. When a case was diagnosed, 2-3 age-matched normal controls were selected concurrently. Women with an abnormal pregnancy history were excluded from the control group. For the cases, couples with numerical or structural chromosome abnormalities were excluded from the study.

Genotyping (including DNA extraction)

Residual peripheral blood was collected after the clinical test for genomic DNA extraction using a DNA extraction kit (Trelief Hi-Pure Animal Genomic DNA Kit, Qingke), and the DNA was preserved at -20 °C until testing. The polymorphisms TP53 rs1042522 G > C, MDM2 rs2279744 T > G, and miR-34b/c rs4938723 T > C were tested by SNaPshot SNP typing on a genetic analyzer (ABI 3730xl). The DNA was amplified by polymerase chain reaction (PCR) on a thermal cycler (LongGene A300). The SNaPshot PCR primers are listed in Table S1. The 5-µL reaction consisted of 2 µL mix, 1 µL SNaPshot PCR primer, 1 µL purified PCR products, and 1 µL ddH2O. The single-base extension amplification was performed under the following conditions: initial denaturation at 96 °C for 1 min; 25 cycles of denaturation at 96 °C for 10 s, annealing at 50 °C for 5 s, and extension at 60 °C for 30 s; then held at 4 °C. The electrophoresis sample volume was 11 µL (SNaPshot PCR products: 1 µL; Hi-Di formamide: 9.90 µL; GeneScan LIZ-120: 0.10 µL); the reaction conditions were denaturation at 95 °C for 5 min and rapid cooling at -20 °C for 5 min. After spectral calibration, the samples were separated via capillary electrophoresis. The raw data were analyzed using GeneMapper software v4.1. The final genotypes were confirmed by Sanger sequencing of a 20% random subsample, and the results were completely concordant with the SNaPshot genotypes.

Sample size calculation

PASS 11 was used to calculate the sample size. Based on the allele frequencies analyzed in this study for TP53 rs1042522 G > C, MDM2 rs2279744 T > G, and miR-34b/c rs4938723 T > C, and setting a 95% confidence interval (CI), an odds ratio (OR) of 1.6 based on published reports, and a 5% margin of error, a power of 0.92 was achieved when the cases reached 330 and the controls reached 813; sample collection was then stopped and genotyping was carried out.

Statistical analysis

A chi-squared test was used to test Hardy-Weinberg equilibrium (HWE) for each polymorphism individually in cases and controls. The differences in continuous and nominal variables between the case and control groups were compared using Fisher's t-test and a two-sided chi-squared test, respectively. Binary logistic regression analysis was used to calculate ORs and 95% CIs. The association of the three polymorphisms with aneuploidy pregnancy risk was assessed using ORs and 95% CIs, adjusted for the significantly different variables of prenatal diagnosis pregnancy week and previous delivery times. Stratification analysis by age was also conducted to examine whether the effects of the TP53 rs1042522 G > C, MDM2 rs2279744 T > G, and miR-34b/c rs4938723 T > C polymorphisms were heterogeneous. p < 0.05 indicated a statistically significant difference. All statistical analyses were carried out using SPSS version 19.0 (SPSS Inc., IL, USA).
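To make the statistical recipe concrete, here is a minimal Python sketch (not the authors' SPSS workflow) of the two core computations: the odds ratio with a Wald 95% CI from a 2×2 table, and a chi-squared test of Hardy-Weinberg equilibrium from genotype counts. All counts below are hypothetical, for illustration only.

    import math
    from scipy.stats import chi2

    def odds_ratio_ci(a, b, c, d, z=1.96):
        """OR and Wald 95% CI for a 2x2 table: a, b = exposed/unexposed cases;
        c, d = exposed/unexposed controls."""
        or_ = (a * d) / (b * c)
        se = math.sqrt(1/a + 1/b + 1/c + 1/d)          # SE of log(OR)
        lo = math.exp(math.log(or_) - z * se)
        hi = math.exp(math.log(or_) + z * se)
        return or_, lo, hi

    def hwe_chi2(n_AA, n_Aa, n_aa):
        """Chi-squared HWE test (1 df) from observed genotype counts."""
        n = n_AA + n_Aa + n_aa
        p = (2 * n_AA + n_Aa) / (2 * n)                # allele frequency of A
        expected = [p * p * n, 2 * p * (1 - p) * n, (1 - p) ** 2 * n]
        stat = sum((o - e) ** 2 / e for o, e in zip([n_AA, n_Aa, n_aa], expected))
        return stat, chi2.sf(stat, df=1)

    # Hypothetical counts for illustration only:
    print(odds_ratio_ci(120, 210, 240, 573))           # allele-level 2x2 table
    print(hwe_chi2(150, 130, 50))                      # genotype counts in controls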
Results

Demographic characteristics

The demographic characteristics of the fetal aneuploidy pregnant women and normal controls are listed in Table 1. The differences between cases and controls were assessed for mean age (p = 0.48), body mass index (BMI) (p = 0.13), prenatal diagnosis pregnancy week (p = 0.046), pregnancy times (p = 0.43), previous delivery times (p = 0.05), nationality (p = 0.37), and aneuploidy prenatal diagnosis results.

Association of TP53 rs1042522 G > C, MDM2 rs2279744 T > G, and miR-34b/c rs4938723 T > C polymorphisms with fetal aneuploidy

The allele and genotype frequencies of TP53 rs1042522 G > C, MDM2 rs2279744 T > G, and miR-34b/c rs4938723 T > C are shown in Table 2. None of the three polymorphisms showed a significant deviation from HWE in cases (p = 0.60 for TP53 rs1042522 G > C, p = 0.42 for MDM2 rs2279744 T > G, and p = 0.16 for miR-34b/c rs4938723 T > C) or controls (p = 0.86, p = 0.16, and p = 0.89, respectively). The genotype frequencies of the three polymorphisms did not differ significantly between the 330 cases and 813 normal controls. Moreover, no significant association was observed between TP53 rs1042522 G > C, MDM2 rs2279744 T > G, or miR-34b/c rs4938723 T > C and fetal aneuploidy pregnancy risk under the dominant model, the recessive model, or the allele analysis.

Age-based stratification analysis of TP53 rs1042522 G > C, MDM2 rs2279744 T > G, and miR-34b/c rs4938723 T > C polymorphisms with fetal aneuploidy

In view of the significant risk factor of advanced maternal age, the participants were divided into two subgroups stratified at age 35 years. The results of the stratification analysis are summarized in Table 3. In the younger group, the C allele frequency of TP53 rs1042522 G > C was significantly higher in cases than in controls (OR 1.25, 95% CI: 1.03-1.51, p = 0.03), and a significant difference was found in the recessive model (GG + GC vs. CC: OR 1.54, 95% CI: 1.10-2.16, p = 0.01). In the older group, a significant difference was found in the recessive model (GG + GC vs. CC: OR 0.58, 95% CI: 0.35-0.94, p = 0.03). In the advanced-age group, the G allele of MDM2 rs2279744 T > G showed a significantly higher frequency in cases than in controls (G vs. T: OR 1.30, 95% CI: 1.00-1.68, p = 0.045), and miR-34b/c rs4938723 T > C showed a significant difference in the dominant model (TT vs. CC + TC: OR 1.61, 95% CI: 1.05-2.46, p = 0.03); no significant differences were found in the other models or in the other subgroups (p > 0.05).
Discussion

This case-control study explored the association of the maternal TP53 rs1042522 G > C, MDM2 rs2279744 T > G, and miR-34b/c rs4938723 T > C polymorphisms with fetal chromosome aneuploidy. Our preliminary study demonstrated that these maternal polymorphisms were not significantly related to the risk of fetal aneuploidy in the Chinese population. However, in the age-stratified analysis, the frequencies of the C allele and CC genotype of TP53 rs1042522 G > C were increased in younger cases but decreased in advanced-age cases, suggesting a role in fetal chromosome numerical abnormality at different ages. The clinical significance of the TP53 rs1042522 G > C polymorphism is currently uncertain; regarding the function of this missense variant, R72 is more efficient than P72 in inducing apoptosis. The reduced apoptotic regulation associated with this polymorphism might weaken the elimination of embryos with chromosome numerical abnormalities or increase the tolerance to these chromosome instabilities in younger women carrying the TP53 C allele [4]. TP53 rs1042522 G > C has also been associated with fetal trisomy 21: the C allele and CC genotype are common in mothers with trisomy 21 offspring [5, 19]. This finding was in line with the results in the younger subgroup of our study but contrary to those in the advanced maternal age group, perhaps because of combined effects with other environmental factors.

Although its clinical significance is classified as benign, the MDM2 rs2279744 T > G polymorphism did not show a significant difference between cases and controls, but the G allele was more frequent in the advanced-age case subgroup. This result is similar to that of Salemi et al., in which the MDM2 rs2279744 T > G polymorphism showed no significant association with fetal aneuploidy between cases and controls [15]. MDM2 is intimately correlated with TP53, and the latter exhibits a regulatory role in human aneuploidy, human reproduction, and the maintenance of spindle stability during gamete meiosis [4, 7, 22]. To date, few studies have analyzed the MDM2 rs2279744 T > G polymorphism in relation to fetal aneuploidy. Previous studies have shown a close association of MDM2 rs2279744 T > G with human polycystic ovarian syndrome and human reproduction [5, 23]. Strikingly, the MDM2 rs2279744 T > G polymorphism is a key element in maintaining the genomic stability of somatic cells [4, 22], an effect that has not been demonstrated in female gametes and embryos.

The miR-34 family is directly regulated by TP53 acting as a transcription factor. Some studies have demonstrated that TP53 inhibits cell proliferation and growth by upregulating miR-34b/c [24]. TP53 rs1042522 G > C and miR-34b/c rs4938723 T > C have been associated with the risk of cancers such as papillary thyroid carcinoma, primary hepatocellular carcinoma, and neuroblastoma [25-28]. The clinical significance of miR-34b/c rs4938723 T > C is not reported in the ClinVar database. No significant association was observed between miR-34b/c rs4938723 T > C and fetal aneuploidy, except in advanced-age pregnant women under the dominant model in our study. The TT genotype showed an increased frequency among cases, which might result from the small sample size of the subgroup.
Typically, advanced maternal age is considered the sole risk factor for Down syndrome (DS). However, the molecular mechanism of chromosome non-disjunction is yet unknown; a multifactorial etiology of chromosome non-disjunction in meiosis has been proposed [29]. Thus, the present study aimed to identify putative risk factors and their association with advanced maternal age during oocyte formation and embryo development. Regarding the role of TP53 and its regulators MDM2 and miR-34b/c in the maintenance of spindle stability, the functional polymorphisms of these genes were investigated. The findings indicated that the maternal TP53 rs1042522 G > C, MDM2 rs2279744 T > G, and miR-34b/c rs4938723 T > C polymorphisms, combined with maternal age, modified the susceptibility to chromosome non-disjunction in oocytes and embryos.

Nevertheless, the present study has some limitations. (1) In the development of aneuploid oocytes and embryos, the interaction of many different gene polymorphisms is complex, and restricting the analysis to three selected polymorphisms may have produced false-negative outcomes. (2) The small sample size might not be sufficient to evaluate an accurate association. (3) All participants, including the normal controls, were recruited from pregnant women undergoing prenatal diagnosis in accordance with the indications for prenatal diagnosis in China; thus, some selection bias may affect the results. (4) The lack of functional studies of these polymorphisms limits the interpretation of the results. (5) The origin of the fetal aneuploidy, whether paternal or maternal, was not determined. (6) Gene-gene interactions and stratification by aneuploidy type were not analyzed, owing to the weak associations and small sample size.

In summary, the current results revealed that the association of the TP53 rs1042522 G > C, MDM2 rs2279744 T > G, and miR-34b/c rs4938723 T > C polymorphisms with fetal aneuploidy susceptibility was observed in an age-dependent manner. Future studies with larger sample sizes and populations of diverse ethnicities, geographic regions, and aneuploidy types are essential to confirm these results and to comprehensively assess the potential function of the TP53 rs1042522 G > C, MDM2 rs2279744 T > G, and miR-34b/c rs4938723 T > C polymorphisms in the risk of fetal aneuploidy pregnancy.

[Table 1: Baseline characteristics of study participants.]
[Table 2: Analysis of alleles, genotypes, and genetic models of TP53 rs1042522, MDM2 rs2279744, and miR-34b/c rs4938723 between cases and controls. p-values are adjusted for prenatal diagnosis pregnancy week and previous delivery times. *The cutoff p-value was 0.0167 (0.05/3) after Bonferroni correction.]
[Table 3: Stratification analysis of alleles, genotypes, and genetic models of TP53 rs1042522, MDM2 rs2279744, and miR-34b/c rs4938723 between cases and controls.]
An attempt to stabilize tanshinone IIA solid dispersion by the use of ternary systems with nano-CaCO3 and poloxamer 188

Background: Tanshinone IIA (TSIIA) in solid dispersions (SDs) suffers from the thermodynamic instability of the amorphous drug. Ternary solid dispersions (tSDs) can extend the stability of the amorphous form of the drug. Poloxamer 188 was used as the SD carrier. Nano-CaCO3 plays an important role in the adsorption of biomolecules and is being developed for a host of biotechnological applications.

Objective: The aim of the present study was to investigate the dissolution behavior and accelerated stability of TSIIA solid dispersions (SDs) prepared as ternary systems with nano-CaCO3 and poloxamer 188.

Materials and Methods: The TSIIA tSDs were prepared by a spray-drying method. First, the effect of the combination of poloxamer 188 and nano-CaCO3 on TSIIA dissolution was studied. Subsequently, a set of complementary techniques (DSC, XRPD, SEM, and FTIR) was used to monitor the physical changes of TSIIA in the SDs. Finally, a stability test was carried out under conditions of 40 °C/75% RH for 6 months.

Results: The characterization of the tSDs by differential scanning calorimetry (DSC) and X-ray powder diffraction (XRPD) showed that TSIIA was present in its amorphous form. Fourier transform infrared spectroscopy (FTIR) suggested the presence of interactions between TSIIA and the carriers in the tSDs. Improvement in the dissolution rate was observed for all SDs. The stability study conducted on the SDs with nano-CaCO3 showed stable drug content and dissolution behavior over the period of 6 months, as compared with freshly prepared SDs.

Conclusion: SD preparation with nano-CaCO3 and poloxamer 188 may be a promising approach to enhance the dissolution and stability of TSIIA.

INTRODUCTION

Tanshinone IIA (TSIIA), the major liposoluble bioactive ingredient extracted from the root of Salvia miltiorrhiza Bunge, exhibits a variety of cardiovascular activities, including vasorelaxation and a cardio-protective effect. [1-3] However, TSIIA shows poor solubility in water [4] and an insufficient dissolution rate, [5-7] which can give rise to incomplete and/or unpredictable bioavailability. Currently, sodium TSIIA sulfonate, a water-soluble derivative, is used in clinical practice to treat patients with cardiac metabolic disorders. [8] However, because ionic compounds cannot penetrate the blood-brain barrier, sodium TSIIA sulfonate is not effective for cerebrovascular disease. [9]

Among the numerous techniques proposed for improving the dissolution properties of poorly water-soluble drugs and hence, possibly, their bioavailability, solid dispersions (SDs) have attracted considerable interest and have been successfully applied. [10-14] Dissolution rates can be improved by particle size reduction, increased wettability through mixing with highly soluble carriers, and maintenance of the drug in the amorphous form. [15-17] Although the application of SDs has been reported regularly in the pharmaceutical literature, only a few commercial products rely on the SD strategy. One of the main reasons for this discrepancy is the possible thermodynamic instability of the amorphous drug, [18, 19] which has the tendency to change to a more stable state through recrystallization on storage, [20-22] inevitably resulting in decreased solubility and dissolution rate.
In particular, recent investigations have shown that the formulation of ternary solid dispersions (tSDs), using suitable carrier combinations or adding an appropriate third component, can extend the stability of the amorphous form of the drug with respect to the corresponding binary systems. Obviously, the drug stability of SDs largely depends on the properties of the carriers; however, the number of carriers that can maintain drug stability is still rather limited. Therefore, more complex carrier systems remain to be explored.

Poloxamer 188, about 80% of whose weight consists of polyoxyethylene (PEO) groups and which has a low melting point (52 °C), [23, 24] has been used as an SD carrier for poorly water-soluble drugs in two roles: as a polymeric carrier and as a surface-active agent. A polymeric carrier with surface-active properties significantly enhances the dissolution of poorly water-soluble drugs. [25-27] However, a high amount of hydrophilic polymer in SDs may also increase the availability of moisture, which may promote drug migration and crystallization. Similarly, the tackiness and stickiness imparted by poloxamer 188 cause processing problems. Nevertheless, a previous report showed that stable free-flowing SDs of glibenclamide could be obtained using polyglycolized glyceride carriers with the aid of silicon dioxide. [28]

Nano-CaCO3 has received much attention because of its good biocompatibility, non-toxicity, small size, and high specific surface area. [29, 30] Moreover, it represents one of the highest-output and probably lowest-cost commercial nanoparticles in the world because of its widespread applications. Nano-CaCO3 plays an important role in the adsorption of biomolecules due to its large specific surface area and high surface energy. Additionally, it is being developed for a host of biotechnological applications such as cancer therapy and drug delivery. [31, 32]

In this context, we report here, for the first time, the combination of nano-CaCO3 with the surfactant poloxamer 188 for the preparation of TSIIA SDs. First, the effect of the combination of poloxamer 188 and nano-CaCO3 on TSIIA dissolution was studied. Subsequently, a set of complementary techniques (DSC, XRPD, SEM and FTIR) was used to monitor the physical changes of TSIIA in the SDs. Finally, a stability test was carried out under conditions of 40 °C/75% RH for 6 months.

Materials

The TSIIA standard was obtained from the National Institute for the Control of Pharmaceutical and Biological Products (Beijing, China). TSIIA was supplied by Nanjing Zelang Medical Technology Co. Ltd. (Nanjing, China), with a purity greater than 98%. Nano-CaCO3 with an average particle size of 60 nm was supplied by Shanxi Ruicheng Huaxin Nano Material Co. Ltd. (Shanxi, China). All reagents were of analytical grade except methanol, which was of chromatographic grade.

Preparation of solid dispersions and physical mixtures

Binary solid dispersions (bSDs) of TSIIA with nano-CaCO3 or poloxamer 188, and TSIIA/nano-CaCO3/poloxamer 188 ternary solid dispersions (tSDs), were prepared using a spray dryer (SD-06 Labplant, Labplant UK Limited, North Yorkshire, Britain) [Table 1]. A fixed set of adjustable equipment parameters (inlet and outlet temperatures of the drying chamber maintained at 75 and 38 °C, respectively; feeding rate: 8 mL/min) was used throughout. The resultant powders were stored in a desiccator for further investigation. Physical mixtures (PMs) were prepared by blending the components in a mortar [Table 1].
In vitro dissolution study

HPLC analysis of TSIIA

The concentration of TSIIA in the dissolution medium was determined by high-pressure liquid chromatography (HPLC, Agilent 1200) equipped with a Diamonsil RP-C18 column (250 × 4.6 mm, 5 µm). A mobile phase of methanol and water (85:15, v/v) was used at a flow rate of 1.0 mL/min. The UV detector was set at 270 nm to analyze the column effluent, and the column temperature was 30 °C. The entire solution was filtered through a 0.45 µm membrane filter (Millipore Corp.) and degassed prior to use. The injection volume was 10 µL. The recovery rates for TSIIA were in the range of 99-102%, and the RSDs were less than 2%. Intra-day and inter-day precisions for TSIIA were below 2%.

In vitro dissolution studies

The pharmaceutical performance of pure TSIIA, its SDs, and the PMs was evaluated using in vitro dissolution studies. The tests were carried out according to USP 24 method 2 (paddle method) in a Beiyang SR8 dissolution apparatus (D-800LS Precise Dissolution Apparatus, Tianjin University Co., Ltd., Tianjin, China). The dissolution test was performed using 900 mL of distilled water containing 0.5% sodium dodecyl sulfate at 37 ± 0.5 °C and 50 rpm for 2 h. Samples equivalent to 5 mg of TSIIA were taken for the dissolution studies. Five-milliliter samples were taken at 5, 10, 15, 30, 45, 60, 120, and 180 min and immediately replaced with fresh dissolution medium at the same temperature (the cumulative-release correction this replacement requires is sketched at the end of this section). The samples were filtered through a membrane filter (pore size 0.45 µm); the first 2 mL of filtrate was discarded, and the remainder was analyzed by HPLC for TSIIA as described above.

Characterization of TSIIA solid dispersion

Differential scanning calorimetry

Thermal analysis was performed with a differential scanning calorimeter (204A/G Phoenix instrument, Netzsch, Germany). The samples were heated under a nitrogen atmosphere on an aluminum pan at a heating rate of 10 °C/min over the temperature range of 25 to 350 °C. All DSC measurements were conducted in a nitrogen atmosphere at a flow rate of 50 mL/min.

Scanning electron microscopy (SEM)

The surface morphology of TSIIA and the tSDs was examined using a scanning electron microscope (S-3000N, Hitachi, Japan).

X-ray powder diffraction

XRPD patterns of the samples were recorded on an X-ray diffractometer (X-pro Pan analytical, Phillips, Mumbai, India). Data were collected using primary monochromated radiation (Cu Kα1, λ = 1.5406 Å) over a 2θ range of 0-70° at a step size of 0.04° and a dwell time of 10 s per step.

Fourier transform infrared spectroscopy

FTIR measurements were carried out with an infrared spectrophotometer (Victor22, BRUKER) at room temperature. Samples of TSIIA, the combined carriers, the PMs, and the SDs were ground and mixed thoroughly with potassium bromide (0.5% (w/w) of the sample). The scanning range was 400 to 4000 cm^-1 and the resolution was 1 cm^-1.

Stability test

The accelerated stability study of the prepared solid dispersions was conducted at 40 °C/75% RH for 6 months, according to the related literature. The samples were evaluated for drug content and in vitro drug dissolution.
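When 5 mL aliquots are withdrawn from 900 mL of medium and replaced with fresh medium, the drug removed in earlier samples must be added back when computing cumulative release. The paper does not spell out this correction, so the following Python sketch shows the standard bookkeeping under that assumption, with hypothetical concentration readings.

    # Cumulative dissolution with sampling-and-replacement correction.
    # conc_mg_ml[i]: measured concentration (mg/mL) at time point i (hypothetical);
    # V = vessel volume (mL); v = sample volume withdrawn and replaced (mL).

    V, v = 900.0, 5.0
    times_min  = [5, 10, 15, 30, 45, 60, 120, 180]
    conc_mg_ml = [0.8e-3, 1.5e-3, 2.1e-3, 3.0e-3, 3.6e-3, 4.0e-3, 4.6e-3, 4.8e-3]
    dose_mg = 5.0                                     # TSIIA equivalent loaded

    released = []
    removed = 0.0                                     # cumulative drug sampled out (mg)
    for c in conc_mg_ml:
        amount = c * V + removed                      # drug in vessel + drug sampled
        released.append(100.0 * amount / dose_mg)     # cumulative % released
        removed += c * v                              # this sample removes c*v mg

    for t, r in zip(times_min, released):
        print(f"{t:4d} min: {r:6.1f} % dissolved")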
In vitro dissolution studies

The dissolution behaviors of TSIIA from the various tSDs, compared with pure TSIIA and the PMs, are shown in Figure 1. Both pure TSIIA [Figure 1a] and the PMs of TSIIA/nano-CaCO3/poloxamer 188 [Figure 1b] presented a poor dissolution rate of less than 30% after 120 min, which could be related to the crystalline form shown by DSC and XRPD. PMs including poloxamer 188 [Figure 1b] exhibited a slight improvement in the dissolution rate compared with pure TSIIA, which was most likely attributable to the hydrophilicity and weak solubilizing effect of poloxamer 188. As also shown in Figure 1, the bSDs of TSIIA/poloxamer 188 [Figures 1e and f] exhibited faster dissolution rates than TSIIA: at 60 min, approximately 53.6% and 95.2% of the TSIIA was dissolved from the 1/3 and 1/9 bSDs with poloxamer 188, respectively. This demonstrated that the higher the carrier content, the faster the dissolution rate of TSIIA from the SDs. Poloxamer 188 may generate a high surfactant concentration, which could enhance the solubility of the dissolved drug and prevent agglomeration of the drug into large globules or particles in the aqueous environment, thus effectively inhibiting crystallization during dissolution. [33, 34]

An improved dissolution rate of TSIIA was also observed in the bSDs with nano-CaCO3 [Figures 1c and d]. At a TSIIA/nano-CaCO3 ratio of 1/9, 65% of the TSIIA was dissolved within 60 min [Figure 1d]. This improvement in drug dissolution might be due to the significant reduction in particle size during the formation of the SDs, as well as the adsorption of TSIIA onto the large surface area of nano-CaCO3. As reported by Monkhouse and Lach, [35] adsorption onto insoluble, non-porous, high-surface-area carriers is a well-known technique to enhance drug dissolution and was already described for silica-based excipients in the early 1970s. Judging from the dissolution results, increased TSIIA dissolution from the tSDs [Figures 1g-i] was also observed. The polymeric carrier with surface-active properties and the adsorption of TSIIA onto the large surface area of nano-CaCO3 might be responsible for the observed dissolution behavior of TSIIA. This improved drug dissolution could also be attributed to the presence of amorphous TSIIA, as confirmed by the DSC and XRPD studies.

Scanning electron microscopy

The scanning electron microscopy images of pure TSIIA and the tSDs at a TSIIA/nano-CaCO3/poloxamer 188 ratio of 1/5/4 are shown in Figure 2. The crystal morphology of pure TSIIA [Figure 2a] exhibited flat broken needles of different sizes, with well-developed edges. The tSDs [Figure 2b], however, appeared as irregular particles. The original morphology of TSIIA was not visibly observed in the photomicrographs, suggesting that TSIIA dispersed uniformly into the carrier. Therefore, it is possible that the reduced particle size and increased surface area are responsible for the enhanced drug dissolution of the SDs.

Differential scanning calorimetry

The DSC thermograms of TSIIA, the PMs of nano-CaCO3/poloxamer 188, the PMs composed of TSIIA/nano-CaCO3/poloxamer 188, and the corresponding tSDs are presented in Figure 3. The DSC curve of TSIIA [Figure 3a] exhibited a sharp melting peak with an onset temperature of 208.3 °C, indicating its crystalline nature, followed by an exothermic peak at 223.6 °C, which may be attributed to the decomposition of TSIIA. For the PMs of the carriers [Figure 3b], the endothermic peak of poloxamer 188 at 55.8 °C was observed. Similar results have been reported by Li et al. [17] and Zhao et al. [15] During scanning of the TSIIA/nano-CaCO3/poloxamer PMs [Figure 3c], the drug endothermic peak was found at 208.3 °C, indicating the absence of interaction between TSIIA and the carriers in the PMs, with TSIIA present in its virgin form in the system.
In the DSC spectra of the tSDs [Figure 3d], the characteristic peak of TSIIA had completely disappeared, suggesting that the drug may be present in the tSDs in an amorphous state, which was responsible for the enhancement of drug dissolution. Additionally, the decomposition exothermic peak also vanished, which suggested a stabilizing effect on TSIIA by the combined-carrier (nano-CaCO 3 /poloxamer 188) solid dispersion.

X-ray powder diffraction
The X-ray powder diffraction patterns of TSIIA, the PMs of nano-CaCO 3 /poloxamer 188, the PMs of TSIIA/nano-CaCO 3 /poloxamer 188 and the corresponding tSDs are shown in Figure 4. TSIIA showed prominent diffraction peaks in the range of 5-30°, a typical crystalline pattern [Figure 4a]. Broad peaks at 18° and 66° were observed in the pattern of the combined carriers [Figure 4b]. All major characteristic crystalline peaks of TSIIA were clearly observed in the diffractogram of the TSIIA/nano-CaCO3/poloxamer 188 PMs [Figure 4c]. However, for the tSDs [Figure 4d], the discriminative peaks of TSIIA had clearly disappeared compared with the corresponding physical mixtures, indicating that TSIIA was no longer present in the crystalline state but had been converted to the amorphous form, which is consistent with the DSC results.

Fourier transform infrared
To further ascertain whether TSIIA underwent a polymorphic change during the preparation of the solid dispersion, and to test for possible intermolecular interactions between TSIIA and the carriers in the SDs, FTIR was carried out; the results are presented in Figure 5. The FTIR spectrum of pure TSIIA [Figure 5a] showed the characteristic carbonyl-stretching absorption peak at 1667 cm-1. Similar observations have been reported by Zhao et al. [15] The FTIR spectrum of the carriers showed a peak at 3487 cm-1, corresponding to the -OH stretching vibration mode, which could be attributed to the presence of hydroxyl groups and/or adsorbed water on the surface of the nano-CaCO 3 particles. [28] The FTIR spectra of the PMs [Figure 5c] were almost equivalent to the sum of the spectra of TSIIA and the carriers [Figures 5a and b]. This result suggested that there was only a physical effect, and no chemical interaction, between the combined carriers and TSIIA in the PMs. When the tSDs were scanned from 4000 to 400 cm-1 [Figure 5d], the carbonyl-stretching vibration of TSIIA (1667 cm-1) appeared at a lower wavenumber, 1649 cm-1, and the -OH stretching peak at 3478 cm-1 was weakened and simultaneously broadened, which suggested that TSIIA interacted with the carriers, presumably through hydrogen bonds.

Stability
As is well known, moisture and other factors such as molecular mobility may increase drug migration and promote drug crystallization in solid dispersions, [36] resulting in poor stability and decreased solubility and dissolution rate. Thus, in the current study, the stability of the SDs was estimated by tests of drug content and in vitro dissolution under storage conditions of 75% RH and 40°C for 6 months. The results are listed in Table 2. The drug content in both the bSDs and the tSDs stored for 6 months was not changed remarkably compared with the freshly prepared samples. After 6 months of storage, the dissolution (60 min) of the bSDs with poloxamer 188 decreased by about 17%. On the other hand, the SDs containing nano-CaCO 3 showed dissolution profiles similar to those of the freshly prepared samples within 60 min. These phenomena indicated that nano-CaCO 3 had a strong stabilizing effect on the meta-stable drug in the SDs and inhibited recrystallization of TSIIA. The improved stability of the SDs is probably due to the interactions between the drug and the carriers, as well as the dispersion and adsorption on the surface of nano-CaCO 3 , which could slow down the molecular mobility of the amorphous drug. [37] The fabrication of nano-CaCO 3 is simple, scalable, cost-effective, and controllable. Thus, we believe that nano-CaCO 3 will have potential applications in the field of SDs as a specific dispersion carrier.

CONCLUSIONS
In this study, tSDs of TSIIA (a poorly water-soluble drug) were successfully prepared by a spray-drying technique using poloxamer 188 with the aid of nano-CaCO 3 as a dispersing carrier. All SDs exhibited improved drug dissolution. Compared with the use of poloxamer 188 alone, only a very slight decrease in dissolution was observed for the SDs with nano-CaCO 3 during the stability study, which suggested that nano-CaCO 3 had a strong stabilizing effect on the amorphous TSIIA in the SDs. Thus, the experimental results demonstrated the high potential of the spray-drying technique for obtaining stable, free-flowing SDs of TSIIA using poloxamer 188 with the aid of nano-CaCO 3 as a dispersing carrier.
Folic acid deficiency exacerbates the inflammatory response of astrocytes after ischemia‐reperfusion by enhancing the interaction between IL‐6 and JAK‐1/pSTAT3 Abstract Aim To demonstrate the role of IL‐6 and pSTAT3 in the inflammatory response to cerebral ischemia/reperfusion following folic acid deficiency (FD). Methods The middle cerebral artery occlusion/reperfusion (MCAO/R) model was established in adult male Sprague‐Dawley rats in vivo, and cultured primary astrocytes were exposed to oxygen‐glucose deprivation/reoxygenation (OGD/R) to emulate ischemia/reperfusion injury in vitro. Results Glial fibrillary acidic protein (GFAP) expression significantly increased in astrocytes of the brain cortex in the MCAO group compared to the SHAM group. Nevertheless, FD did not further promote GFAP expression in astrocytes of rat brain tissue after MCAO. This result was further confirmed in the OGD/R cellular model. In addition, FD did not promote the expressions of TNF‐α and IL‐1β but raised IL‐6 (peak at 12 h after MCAO) and pSTAT3 (peak at 24 h after MCAO) levels in the affected cortices of MCAO rats. In the in vitro model, the levels of IL‐6 and pSTAT3 in astrocytes were significantly reduced by treatment with Filgotinib (JAK‐1 inhibitor) but not AG490 (JAK‐2 inhibitor). Moreover, the suppression of IL‐6 expression reduced FD‐induced increases in pSTAT3 and pJAK‐1. In turn, inhibition of pSTAT3 expression also depressed the FD‐mediated increase in IL‐6 expression. Conclusions FD led to the overproduction of IL‐6 and subsequently increased pSTAT3 levels via JAK‐1 but not JAK‐2, which further promoted increased IL‐6 expression, thereby exacerbating the inflammatory response of primary astrocytes.

| INTRODUCTION
Cerebral ischemia/reperfusion (I/R), caused by the restoration of blood supply to ischemic brain tissue, is a pathological injury that occurs during the treatment of ischemic stroke and is accompanied by high morbidity and mortality. 1 There are no specific drugs available to treat I/R injury. 2 Thus, in such a case, dietary supplements with low side effects may be considered to assist in promoting neurological recovery if supported by substantial scientific evidence. 3 Folic acid (FA), an essential nutrient in the regular human diet, is strongly associated with neuroinflammation. 4,5 Research has shown that folic acid deficiency (FD) triggers the activation of the neuroinflammatory cascade in Alzheimer's disease (AD). 6 In addition, Guest et al. observed a negative correlation between cerebrospinal fluid folate and levels of inflammation within the central nervous system (CNS) in the healthy population. 7 However, the exact mechanisms underlying the effects of FD on neuroinflammation following cerebral ischemia-reperfusion have not been fully elucidated. Our previous work suggests that FD may enhance the expression of inflammatory mediators following cerebral hypoxia-ischemia by activating microglia. 8 Although astrocytes and microglia are known to be critical regulators of the inflammatory response in the CNS, 9 the mechanisms by which astrocytes are involved in the effects of FD on stroke recovery require further investigation. Astrocytes, the most common glial cells in the brain, are key regulators of the inflammatory response in the CNS. 10 For instance, in the early stages of AD, astrocytes become activated and release interleukins and nitric oxide, exacerbating the neuroinflammatory response.
11 In experimental autoimmune encephalomyelitis mice, astrocytes produce lactosylceramide, which promotes the transcription of pro-inflammatory factors such as IL-1β and nitric oxide synthase in an autocrine manner. 12 Additionally, astrocyte proliferation is an important pathological feature of stroke. Reactive astrocytes can release pro-inflammatory cytokines in response to acute ischemia, especially IL-6, thereby triggering the production of secondary mediators, which may lead to persistent and neurotoxic effects. 13 Given that FD induces neuroinflammation in CNS disorders, FD may promote inflammatory responses in astrocytes following ischemia-reperfusion. Interleukin-6 (IL-6)/signal transducer and activator of transcription 3 (STAT3) signaling is an essential intracellular pathway that mediates inflammatory signaling and is a vital signaling component in reactive astrocytes. 14 As a core upstream regulator of the inflammatory response, IL-6 promotes inflammatory cascades and simultaneously activates STAT3 via Janus kinases (JAKs). Subsequently, aberrant activation of STAT3 promotes the transcription and expression of many genes encoding pro-inflammatory mediators. 15 Here, we hypothesize that FD may exacerbate astrocyte injury through IL-6/pSTAT3 interactions. In the present study, both the rat middle cerebral artery occlusion/reperfusion (MCAO/R) model and oxygen-glucose deprivation/reoxygenation (OGD/R)-treated primary astrocytes were used to observe FD's effects on astrocytes and to further explore the underlying molecular mechanisms. The study shows for the first time that FD triggers an inflammatory response in astrocytes after ischemia-reperfusion through the IL-6/JAK-1/pSTAT3 pathway and exacerbates inflammation through the interaction between IL-6 and pSTAT3. This work will provide new insights into how FD leads to astrocyte injury after ischemic stroke.

| Surgical procedures
The MCAO rats were induced by the intraluminal filament technique, as described previously. 16 After 1 h of MCAO-induced focal cerebral ischemia, the filament was carefully withdrawn to establish reperfusion. The rats were then allowed to recover from anesthesia at 37°C and were sacrificed at 12 h and 24 h after reperfusion for the following experiments. To mimic the cerebral I/R model in vitro, the cells were subjected to OGD/R. The normal medium (containing 10% FBS and 4.5 g/L glucose) was replaced by glucose-free DMEM (Gibco). Then, the cells were placed in a three-gas incubator at 37°C containing 1.0% O 2 to initiate hypoxia for 1 h, followed by 3 h of re-oxygenation in a normoxic incubator. Normal control cells were incubated in a regular cell culture incubator under normoxic conditions.

| Immunofluorescence
Immunofluorescence staining of the rat brain sections was performed as previously described. 22 In brief, the sections were dewaxed, rehydrated, and treated with 3% H 2 O 2 for 10 min at room temperature; antigen retrieval was performed with citric acid, and the sections were blocked with goat serum for 1 h at 37°C. Then, they were incubated overnight at 4°C with the primary antibodies (mouse anti-IL-6, rabbit anti-TNFα, rab-

| Western blot
Western blot was performed as previously described. 22 The proteins were detected by chemiluminescence reagents (Millipore) and imaged using a ChemiDoc™ XRS+ Imaging System (Bio-Rad, Hercules, USA). Protein levels were quantified by densitometry using ImageJ 1.4.3.67.

| Statistical analysis
SPSS V.20 and GraphPad Prism V.9.0 were used for the statistical analysis. All quantitative data are expressed as mean ± standard deviation (x̄ ± s). All data were tested for normality using the Shapiro-Wilk test. One-way ANOVA was used to assess the statistical significance of differences among experimental groups, followed by Student-Newman-Keuls multiple-range tests. p < 0.05 was considered statistically significant.
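A minimal sketch of this pipeline (in Python, with hypothetical densitometry values; note that SciPy ships no Student-Newman-Keuls routine, so Tukey's HSD stands in for the post-hoc step here):

```python
import numpy as np
from scipy import stats

# Hypothetical normalized densitometry values (e.g., pSTAT3/STAT3), n = 4 per group
groups = {
    "SHAM":        np.array([1.00, 0.95, 1.04, 1.01]),
    "MCAO/R":      np.array([1.62, 1.71, 1.55, 1.68]),
    "FD + MCAO/R": np.array([2.10, 2.25, 2.02, 2.18]),
}

# Shapiro-Wilk normality check for each group
for name, values in groups.items():
    w, p = stats.shapiro(values)
    print(f"{name}: Shapiro-Wilk W={w:.3f}, p={p:.3f}")

# One-way ANOVA across groups
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.4f}")

# Pairwise post-hoc comparisons (Tukey HSD as a stand-in for SNK)
res = stats.tukey_hsd(*groups.values())
print(res)
```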
| RESULTS

| Folic acid deficiency does not further promote GFAP expression raised by ischemic injury in vitro and in vivo
Several lines of evidence support that, in response to stroke, astrocytes convert to a reactive phenotype chiefly characterized by up-regulation of GFAP and cellular hypertrophy. 23 To determine the effect of FD on reactive astrocytes, GFAP protein expression was detected in the MCAO rat brain and in cultured primary astrocytes by immunohistochemical staining and western blot. The results showed an evident increase of GFAP expression at 12 h of reperfusion compared to the SHAM group, with a further increase by 24 h (p < 0.05; Figure 1A, B). This result was further confirmed in the in vitro OGD/R cellular model (p < 0.05; Figure 1C, D). However, FD did not significantly alter GFAP expression compared to the MCAO/R (or OGD/R) group.

| Folic acid deficiency promotes IL-6 but not TNFα and IL-1β expressions in astrocytes following ischemic injury
Astrocyte-derived neuroinflammation has been identified as a potential contributor to brain injury. 24 To determine whether FD could modulate astrocyte-mediated neuroinflammation, three pro-inflammatory cytokines, TNFα, IL-1β, and IL-6, were detected by immunofluorescence double-labeling and western blot analysis. As shown in Figure 2, FD promoted IL-6, but not TNFα or IL-1β, expression in the affected cortices of MCAO rats. In line with what was observed in vivo, FD promoted IL-6, but not TNFα or IL-1β, levels in primary astrocytes exposed to OGD/R compared to OGD/R alone (p < 0.05; Figure 2C, D).

| Folic acid deficiency results in an increase in pSTAT3 expression in astrocytes following ischemic injury
Accumulating evidence suggests that activation of STAT3 plays an important role in IL-6-mediated inflammation. 25 The effect of FD on pSTAT3 expression in astrocytes was therefore examined. The results indicated that pSTAT3 expression did not change significantly at 12 h after reperfusion but increased significantly after 24 h of reperfusion compared to the SHAM group. FD further increased the number of GFAP/pSTAT3 double-positive cells in the ischemic brain compared with the MCAO/R group. Similarly, FD promoted the pSTAT3 expression raised by OGD/R in primary astrocytes (p < 0.05; Figure 3D-E).

| Folic acid deficiency increases the level of pSTAT3 through JAK-1 but not JAK-2
In inflammatory diseases, STAT3 is usually activated by phosphorylation through the activation of the non-receptor protein tyrosine kinases, JAKs. 15 To elucidate whether FD upregulated pSTAT3 expression in a JAK-dependent manner, the expression of pSTAT3 was detected. As shown in Figure 4, Filgotinib administration significantly reduced the levels of IL-6 and pSTAT3, but AG490 treatment did not reveal any significant changes in the expression of IL-6 or pSTAT3. Our results proved that FD increased the level of pSTAT3 through JAK-1 instead of JAK-2.

| Interaction between IL-6 and pSTAT3 in hypoxic and glucose-deficient astrocytes after folic acid deficiency
STAT3, a key transcription factor, is involved in mediating acute inflammatory response activities downstream of IL-6. 25 To explore the potential correlation between IL-6 and pJAK-1/pSTAT3, the cells were first treated with LMT-28.
The Western blot results in Figure 5 showed that treatment with the IL-6 inhibitor significantly inhibited both pSTAT3 and pJAK-1 expression after OGD/R treatment in astrocytes (p < 0.05; Figure 5A-H). Then, whether pSTAT3 affected IL-6 expression was assessed by adding C188-9 to OGD/R-treated astrocytes. As shown in Figure 5I-M, the expression of IL-6 was also inhibited after adding the STAT3 inhibitor (p < 0.05). Briefly, the results showed that inhibiting IL-6 expression reduces pSTAT3 levels, while pSTAT3 inhibition also decreases IL-6 expression, suggesting a positive feedback loop between these factors.

FIGURE 4 Folic acid deficiency regulates the expression of pSTAT3 in primary astrocytes exposed to hypoxia and glucose deficiency via the JAK-1 pathway. (A-E) The cells were harvested after incubating with Filgotinib (JAK-1 inhibitor). The protein expressions of IL-6 (A), pSTAT3, and STAT3 (B) were detected by western blot. Bar graphs show the relative levels of IL-6 (normalized to β-Actin) (C), pSTAT3 (normalized to STAT3) (D), and STAT3 (normalized to β-Actin) (E). (F-J) The cells were harvested after incubating with AG490 (JAK-2 inhibitor). The protein expressions of IL-6 (F), pSTAT3 and STAT3 (G) were detected by western blot. Bar graphs show the relative levels of IL-6 (normalized to β-Actin) (H), pSTAT3 (normalized to STAT3) (I) and STAT3 (normalized to β-Actin) (J). Data are shown as mean ± SEM (n = 4). a p < 0.05: compared to the OGD/R group. b p < 0.05: compared to the FD + OGD/R group.

| DISCUSSION
Inadequate levels of folic acid are associated with an increased risk of neurodegenerative diseases and cerebrovascular disease. 26 However, the exact mechanisms remain to be determined. Previous studies have shown that reactive astrocytes produce and release pro-inflammatory mediators, which may lead to neuronal death and infarct progression. 27 In the present study, we focused on astrocytes in order to gain insight into novel mechanisms by which FD affects neurological function. This is the first evidence that the IL-6/JAK-1/pSTAT3 pathway triggers the inflammatory response of astrocytes in the presence of FD. Notably, FD leads to the overproduction of IL-6 in astrocytes, which in turn activates pSTAT3, leading to more IL-6 production and release. This interaction between IL-6 and pSTAT3 may amplify neuroinflammatory responses, leading to secondary brain damage. There is strong experimental evidence that folic acid affects inflammation in the central nervous system; it also suggests intricate mechanisms by which this occurs. For instance, folic acid reduces hippocampal myeloperoxidase activity to alleviate neuroinflammation and improve memory impairment in sepsis-induced rats. 28 Another in vitro study indicated that lipopolysaccharide-activated microglia mount a weaker inflammatory response in the presence of folic acid because it inhibits the activation of NF-kB and JNK and upregulates p38 MAPK phosphorylation. 4 Besides, our previous work has shown that FD enhanced microglial immune responses via the Notch1/nuclear factor kappa B p65 pathway to increase brain injury. 8 The current study investigated the effect of FD on astrocytes under ischemia-reperfusion. We revealed that FD promoted the inflammatory response of astrocytes by exacerbating the interaction between IL-6 and JAK-1/pSTAT3. Multiple signaling molecules may be involved in FD's activation of neuroinflammation, which may vary depending on different cell types or disease conditions. Both JAK1 and JAK2 have been proven to be associated with IL-6 activation of the STAT3 pathway.
29 However, these two Janus kinases are known to have different roles in different pathological and physiological processes. For instance, Yang et al. demonstrated that the release of IL-6 activated the JAK2/STAT3 pathway to aggravate neuronal degeneration in mice with Parkinson's disease. 30 In contrast, increased IL-6 expression exacerbates the inflammatory response of macrophages through the JAK1/STAT3 pathway in mouse models of ulcerative colitis. 31 To elucidate the exact pathway by which FD upregulates pSTAT3 expression, we blocked the activation of JAK-1 and JAK-2 using Filgotinib and AG490, respectively. The results demonstrate that FD-induced pSTAT3 expression was significantly inhibited in OGD/R-treated astrocytes after blocking the activation of JAK-1 but not JAK-2. Although different JAKs may have overlapping roles, each has a distinct part in mediating signaling. It has been shown that JAK1 is a central protein in the inflammatory cytokine network and can produce pro-inflammatory activity. 32 JAK-2, by contrast, is mainly involved in processes such as mitotic reorganization and histone modification and is essential for bone marrow and platelet production. 33 These observations support our finding that FD exacerbates the inflammatory response in astrocytes via the IL-6/JAK-1/pSTAT3 pathway after ischemia-reperfusion. There is a complex regulatory relationship between IL-6 and pSTAT3. As a transcription factor, STAT3 mediates the acute inflammatory response through the genes downstream of IL-6. 34 Binding of IL-6 to its receptor activates the phosphorylation of STAT3; pSTAT3 then binds to DNA and increases the expression of cytokine genes, resulting in the production of more interleukins. This vicious cycle leads to persistent nervous system inflammation unless effectively controlled. 35 This is consistent with our results, which indicate an interaction between IL-6 and pSTAT3 expression in folic acid-deficient OGD/R astrocytes; the malignant feedback between them may play an essential role in FD-mediated astrocyte injury. In general, STAT3 is a vital player in the proliferative response of reactive astrocytes. 23 Also, STAT3 is one of the transcription factors of GFAP, and an increase in GFAP expression tends to be accompanied by STAT3 activation. 36 A noteworthy point is that FD promoted pSTAT3 expression but not GFAP activation in our study. This is possibly because astrocyte activation is finely regulated by many intracellular and extracellular signaling molecules, such as TGFβ, NF-κB, and STAT3. [37][38][39] However, some regulatory factors, such as the FGF signaling pathway, inhibit the activation of astrocytes. 40 Therefore, we speculate that, in the case of FD, the activation of some inhibitory factors may be involved and thus FD did not further activate GFAP. Additionally, Takizawa et al. proved that abnormal methylation of the STAT3-binding element in the GFAP promoter in astrocytes prevents the binding of STAT3, thereby inhibiting GFAP transcription. 41 Besides, the AP-1 transcription factor is essential for promoting the upregulation of GFAP genes in response to injury. 42 Folic acid is involved in DNA synthesis and methylation and thus plays a crucial role in maintaining genomic stability. 43 This study has several limitations. Firstly, the present study focused on the early molecular changes caused by FD at the onset of cerebral infarction.
Considering that post-stroke neuroinflammation is a highly dynamic and complex adaptive process, 44 long-term FD intervention may be necessary for further behavioral observation and the exploration of molecular mechanisms at the later stages of the disease in future studies. Secondly, both astrocytes and microglia mediate inflammatory responses through related molecules in response to the stress of ischemic brain injury. 45 Further evidence supports reciprocal interactions between microglia and astrocytes during neuroinflammation. 46 Our previous and present studies respectively verified that FD exacerbates the inflammatory responses of microglia and astrocytes after ischemia-reperfusion. 8 However, in light of the existing experimental data, we are unable to determine whether microglia or astrocytes play the more critical role in FD's regulation of neuroinflammation, or whether FD affects the interaction between the two types of glial cells. In conclusion, this study found that, in the context of ischemia-reperfusion, folic acid deficiency may trigger the inflammatory response of astrocytes via the IL-6/JAK-1/pSTAT3 pathway. Furthermore, the interaction between IL-6 and pSTAT3 may amplify the neuroinflammatory response, leading to secondary brain injury. Therefore, specific inhibition of the IL-6/JAK-1/pSTAT3 pathway in astrocytes is a potential therapeutic approach to alleviate the progression of ischemic stroke caused by folic acid deficiency. This also suggests that folic acid supplementation is a potential preventive and therapeutic strategy to reduce brain damage in ischemic stroke.

ACKNOWLEDGEMENTS
This work was supported by the National Natural Science Foundation of China (Grant numbers 82173519 and 81874262).

CONFLICT OF INTEREST STATEMENT
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

DATA AVAILABILITY STATEMENT
The data sets generated and/or analyzed during the current study are available from the corresponding author on request.
Balanced Metrics and Chow Stability of Projective Bundles over Kähler Manifolds
In 1980, I. Morrison proved that slope stability of a vector bundle of rank 2 over a compact Riemann surface implies Chow stability of the projectivization of the bundle with respect to certain polarizations. Using the notion of balanced metrics and recent work of Donaldson, Wang, and Phong-Sturm, we show that the statement holds for higher-rank vector bundles over compact algebraic manifolds of arbitrary dimension that admit a constant scalar curvature metric and have discrete automorphism group.

Introduction
A central notion in geometric invariant theory (GIT) is the concept of stability. Stability plays a significant role in forming quotient spaces of projective varieties, for which geometric invariant theory was invented. One can define Mumford-Takemoto slope stability for holomorphic vector bundles, and there is also a notion of Gieseker stability, which is more in the realm of geometric invariant theory. It is well known that over algebraic curves all of these different notions coincide. It was known from the work of Narasimhan and Seshadri that a holomorphic vector bundle over a compact Riemann surface is poly-stable if and only if the bundle admits a projectively flat connection. The picture became complete with the later work of Donaldson, Uhlenbeck and Yau ([D1], [D2], [UY]). They proved that over a compact Kähler manifold, a holomorphic vector bundle is poly-stable if and only if it admits a Hermitian-Einstein metric. This is known as the Hitchin-Kobayashi correspondence. By a conjecture of Yau, one would also expect such a correspondence for polarized algebraic manifolds. In other words, the existence of extremal metrics on such a manifold should be equivalent to stability in some GIT sense. In [Zh], Zhang introduced the concept of balanced embeddings and proved that the existence of a balanced embedding of a polarized algebraic variety is equivalent to stability of the Chow point of the variety. Zhang's result has been reproven by Luo in [L] and by Phong and Sturm in [PS1]. The same correspondence was proven for vector bundles by Wang in [W1]. Later, in [D3], Donaldson proved that the existence of constant scalar curvature Kähler metrics implies the existence of balanced metrics and hence asymptotic Chow stability. The converse is not yet known. Earlier, in [M], Morrison proved that for the projectivization of a rank-two holomorphic vector bundle over a compact Riemann surface, Chow stability is equivalent to the stability of the bundle. Using ideas from the recent research discussed above, in this article we generalize one direction of Morrison's result to higher-rank vector bundles over compact algebraic manifolds of arbitrary dimension that admit a constant scalar curvature metric and have discrete automorphism group. To state the precise result, let X be a compact complex manifold of dimension m and π : E → X a holomorphic vector bundle of rank r with dual bundle E^*. This gives a holomorphic fibre bundle PE^* over X with fibre P^{r-1}. One can pull back the vector bundle E to PE^*. We denote the tautological line bundle on PE^* by O_{PE^*}(-1) and its dual by O_{PE^*}(1). Let L → X be an ample line bundle on X and ω ∈ 2πc_1(L) a Kähler form. Since L is ample, there is an integer k_0 so that O_{PE^*}(1) ⊗ π^*L^k is ample for k ≥ k_0. The theorem we shall prove is the following: Theorem 1.1. Suppose that Aut(X) is discrete and X admits a constant scalar curvature Kähler metric in the class of 2πc_1(L).
If E is Mumford stable, then there exists k_0 such that (PE^*, O_{PE^*}(1) ⊗ π^*L^k) is Chow stable for k ≥ k_0. One of the earliest results in this spirit is the work of Burns and De Bartolomeis in [BD]. They construct a ruled surface which does not admit any extremal metric in a certain cohomology class. In [H1], Hong proved that there are constant scalar curvature Kähler metrics on the projectivization of stable bundles over curves. In [H2] and [H3], he generalized this result to higher dimensions under some extra assumptions. Combining Hong's results with Donaldson's, (PE^*, O_{PE^*m}(n)) is Chow stable for m, n ≫ 0 when the bundle E is stable. Note that this differs from our result, since it implies the Chow stability of (PE^*, O_{PE^*m}(n)) only for n big enough. In [RT], Ross and Thomas developed the notion of slope stability for polarized algebraic manifolds. As one of the applications of their theory, they proved that if (PE^*, O_{PE^*}(1) ⊗ π^*L^k) is slope semi-stable for k ≫ 0, then E is a slope semi-stable bundle and (X, L) is a slope semi-stable manifold. Again, note that they look at stability of PE^* with respect to the polarizations O_{PE^*m}(n) for n big enough. For the case of a one-dimensional base, however, they showed stronger results. In this case they proved that if (PE^*, O_{PE^*}(1) ⊗ π^*L) is slope (semi-, poly-) stable for any ample line bundle L, then E is a slope (semi-, poly-) stable bundle. In order to prove Theorem 1.1 we use the concept of balanced metrics (see Definition 2.1). Combining the results of Luo, Phong, Sturm and Zhang on the relation between balanced metrics and stability, it suffices to prove the following Theorem 1.2. Let X be a compact complex manifold and L → X an ample line bundle. Suppose that X admits a constant scalar curvature Kähler metric in the class of 2πc_1(L) and that Aut(X) is discrete. Let E → X be a holomorphic vector bundle on X. If E is Mumford stable, then O_{PE^*}(1) ⊗ π^*L^k admits balanced metrics for k ≫ 0. The balanced condition may be formulated in terms of Bergman kernels. First, we show that there exists an asymptotic expansion for the Bergman kernel of (PE^*, O_{PE^*}(1) ⊗ π^*L^k). Fix a positive hermitian metric σ on L such that Ric(σ) = ω. For any hermitian metric g on O_{PE^*}(1), we define the sequence of volume forms dµ_{g,k} on PE^*, where ω_g = Ric(g). Let ρ_k(g, ω) be the Bergman kernel of H^0(PE^*, O_{PE^*}(1) ⊗ π^*L^k) with respect to the L^2-inner product L^2(g ⊗ σ^{⊗k}, dµ_{k,g}). We prove the following Theorem 1.3. For any hermitian metric h on E and any Kähler form ω ∈ 2πc_1(L), there exist smooth endomorphisms B_k(h, ω) such that: (1) There exist smooth endomorphisms A_i(h, ω) ∈ Γ(X, End(E)) such that the following asymptotic expansion holds as k → ∞. (2) In particular, where Λ is the trace operator acting on (1, 1)-forms with respect to the Kähler form ω, F_{(E,h)} is the curvature of (E, h), and S(ω) is the scalar curvature of ω. (3) The asymptotic expansion holds in C^∞. More precisely, for any positive integers a and p, there exists a positive constant K_{a,p,ω,h} such that the corresponding C^a-estimate holds. Moreover, the expansion is uniform in the sense that there exists a positive integer s such that if h and ω run in a bounded family in the C^s topology and ω is bounded from below, then the constants K_{a,p,ω,h} are bounded by a constant depending only on a and p. Finding balanced metrics on O_{PE^*}(1) ⊗ π^*L^k is basically the same as finding solutions to the equation ρ_k(g, ω) = constant.
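For orientation, the expansion in Theorem 1.3 has the following schematic shape (a sketch only; the universal constants c_1, c_2 below, which depend only on r and m, are assumptions of this sketch, with the precise coefficients fixed in Section 5):

\[
B_k(h,\omega)\;\sim\;k^{m}\,\mathrm{I}_E \;+\; A_1(h,\omega)\,k^{m-1}\;+\;A_2(h,\omega)\,k^{m-2}\;+\;\cdots,
\qquad
A_1(h,\omega)\;=\;c_1\,\Lambda F_{(E,h)}\;+\;c_2\,S(\omega)\,\mathrm{I}_E .
\]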
Therefore in order to prove Theorem 1.2, we need to solve the equations ρ k (g, ω) = Constant for k ≫ 0. Now if ω has constant scalar curvature and h satisfies the Hermitian-Einstein equation is constant. Notice that in order to make A 1 constant, existence of Hermitian-Einstein metric is not enough. We need the existence of constant scalar curvature Kähler metric as well. Next, the crucial fact is that the linearization of A 1 at (h, ω) is surjective. This enables us to construct formal solutions as power series in k −1 for the equation ρ k (g, ω) = Constant. Therefore, for any positive integer q, we can construct a sequence of metrics g k on O PE * (1)⊗π * L k and bases s The next step is to perturb these almost balanced metrics to get balanced metrics. As pointed out by Donaldson, the problem of finding balanced metric can be viewed also as a finite dimensional moment map problem solving the equation M k = 0. Indeed, Donaldson shows that M k is the value of a moment map µ D on the space of ordered bases with the obvious action of SU(N). Now, the problem is to show that if for some ordered basis s, the value of moment map is very small, then we can find a basis at which moment map is zero. The standard technique is flowing down s under the gradient flow of |µ D | 2 to reach a zero of µ D . We need a Lojasiewicz type inequality to guarantee that the flow converges to a zero of the moment map. We do this in Section 3 by adapting Phong-Sturm proof to our situation. Here is the outline of the paper: In Section 2, we review Donaldson's moment map setup. We follow Phong and Sturm treatment from ( [PS2]). In Section 3, we obtain a lower bound for the derivative of the moment map by adapting the argument in ( [PS2]) to our setting. In Section 4, we show how to perturb almost balanced metrics to obtain balanced metrics in the general setting of Section 3. In order to do that, we use the estimate obtained in Theorem 3.2 to apply the Donaldson's version of inverse function theorem(Proposition 2.2). In Section 5, we prove the existence of an asymptotic expansion for the Bergman kernel of O PE * (1) ⊗ π * L k using results of Catlin and Zelditch. Section 6 is devoted to constructing almost balanced metrics on O PE * (1) ⊗ π * L k using the asymptotic expansion obtained in Section 5. Acknowledgements: I am sincerely grateful to Richard Wentworth for introducing me the subject and many helpful discussions and suggestions on the subject and his continuous help, support and encouragement. I would also like to thank Bo Berndtsson, Hamid Hezari, Duong Hong Phong , Julius Ross and Steve Zelditch for many helpful discussions and suggestions. Moment Map Setup In this section, we review Donaldson's moment map setup. We follow the notation of [PS2]. Let (Y, ω 0 ) be a compact Kähler manifold of dimension n and O(1) → Y be a very ample line bundle on Y equipped with a Hermitian metric g 0 such that Ric(g 0 ) = ω 0 . Since O(1) is very ample, using global sections of O(1), we can embed Y into P(H 0 (Y, O(1)) * ). A choice of ordered basis s = (s 1 , ..., s N ) of H 0 (Y, O(1)) gives an isomorphism between P(H 0 (Y, O(1)) * ) and P N −1 . Hence for any such s, we have an embedding ι s : Y ֒→ P N −1 such that ι * s O P N (1) = O(1). Using ι s , we can pull back the Fubini-Study metric and Kähler form of the projective space to O(1) and Y respectively. where V = Y ω n 0 n! . 
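The balanced condition referred to in this definition can be written out explicitly; as a sketch in the standard Donaldson-Zhang normalization (the normalization here is an assumption, matching the trailing clause "where V = ∫_Y ω_0^n/n!" above):

\[
\int_Y \frac{s_i\,\overline{s_j}}{\sum_{l=1}^{N}|s_l|^{2}}\,\frac{\omega_{FS}^{\,n}}{n!}\;=\;\frac{V}{N}\,\delta_{ij},
\qquad 1\le i,j\le N,\qquad V=\int_Y\frac{\omega_0^{\,n}}{n!},
\]

and an embedding ι_s is called balanced when its basis s satisfies this identity.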
A hermitian metric(respectively a Kähler form) is called balanced if it is the pull back ι * s h FS (respectively ι * s ω FS ) where ι s is a balanced embedding. There is an action of SL(N) on the space of ordered bases of H 0 (Y, O(1)). Donaldson defines a symplectic form on the space of ordered bases of H 0 (Y, O(1)) which is invariant under the action of SU(N). So there exists an equivariant moment map on this space such that its zeros are exactly balanced bases. More precisely we define where h s is the L 2 -inner product with respect to the pull back of Fubini-Study metric and Fubini-Study Kähler form via the embedding ι s . Also we identify su(N) * with su(N) using the invariant inner product on su(N), where su(N) is the Lie algebra of the group SU(N) and su(N) * is its dual. (For construction of Ω D and more details see ( [D3]) and ( [PS2]) .) Using Deligne's pairing, Phong and Sturm construct another symplectic form on Z as follows: Let (1)). One obtains a holomorphic fibration Y → Z where every fibre is isomorphic to Y . Let p : Y → P N −1 be the projection on the first factor. Then define a hermitian line bundle M on Z by which is the Deligne's pairing of (n + 1) copies of p * O P N−1 (1). Denote the curvature of this hermitian line bundle by Ω M . It follows from properties of Deligne's pairing that Since SU(N) is semisimple, there is a unique equivariant moment map µ M : Z → su(N) for the action of SU(N) on (Z, Ω M ). Let ξ be an element of the Lie algebra su(N). Since SU(N) acts on Z, the infinitesimal action of ξ defines a vector field σ Z (ξ) on Z. Fixing a point z ∈ Z, we have a linear map σ z : su(N) → T z Z. Let σ * z be its adjoint with respect to the metric on T Z and the invariant metric on su(N). Then we get the operator z as the smallest eigenvalue of Q z . In [D3], Donaldson proves the following. Eigenvalue Estimates In this section, we obtain a lower bound for the derivative of the moment map µ D . This is equivalent to an upper bound for the quantity Λ z introduced in the previous section. In order to do this, we adapt the argument of Phong and Sturm to our setting. The main result is Theorem 3.2. Let (Y, ω 0 ) and O(1) → Y be as in the previous section. Let (L, h ∞ ) be a Hermitian line bundle over Y such that ω ∞ = Ric(h ∞ ) is a semi positive (1, 1)-form on Y . Define ω 0 = ω 0 + kω ∞ . For the rest of this section and next section let m be the smallest integer such that ω m+1 ∞ = 0. Also assume that ω n−m 0 ∧ ω m ∞ is a volume form and there exist positive constant n 1 and n 2 such that The case important for this paper is the following: Example 3.1. Let (X, ω ∞ ) be a compact Kähler manifold of dimension m and L be a very ample holomorphic line bundle on X such that ω ∞ ∈ 2πc 1 (L). Let E be a holomorphic vector bundle on X of rank r such that the line bundle O PE * (1) → Y = PE * is an ample line bundle. We denote the pull back of ω ∞ to PE * by ω ∞ . Then ω m+1 ∞ = 0 and by Riemann-Roch formula we have The following lemma is clear. where D (k) is a scalar and M (k) is a trace-free hermitian matrix. Then where the constants n 1 and n 2 are defined by (3.1) and (3.2). We start with the notion of R-boundedness introduced originally by Donaldson in [D3]. Definition 3.2. Let R be a real number with R > 1 and a ≥ 4 be a fixed integer and let s = (s 1 , ..., s N ) be an ordered basis for H 0 (Y, O(1)⊗L k ). We say s has R-bounded geometry if the Kähler form ω = ι * s ω FS satisfies the following conditions Recall the definition of Λ z from the previous section. 
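For concreteness, the R-boundedness conditions in Definition 3.2 can be sketched following [D3] (the exact form of the inequalities is an assumption of this sketch; here ω̂_0 = ω_0 + kω_∞ denotes the reference form fixed above):

\[
\omega \;>\; \frac{1}{R}\,\widehat{\omega}_0
\qquad\text{and}\qquad
\bigl\|\,\omega-\widehat{\omega}_0\,\bigr\|_{C^{a}(\widehat{\omega}_0)} \;<\; R .
\]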
The main result of this section is the following. Theorem 3.2. Assume Y does not have any nonzero holomorphic vector fields. For any R > 1, there are positive constants C and ǫ ≤ n 2 /10n 1 such that, for any k, if the basis s = (s 1 , ..., The rest of this section is devoted to the proof of Theorem 3.2. Notice that the estimate Λ z ≤ Ck 2m+2 is equivalent to the estimate On the other hand (2.1) and Theorem 2.1 imply that Hence, in order to establish Theorem 3.2, we need to estimate the quantity Y ι Y ξ ,Y ξ ω n+1 FS from below. For the rest of this section, fix an ordered basis For any ξ ∈ su(N), we have a vector field Y ξ on P N −1 generated by the infinitesimal action of ξ. Every tangent vector to P N −1 is given by pairs (z, v) modulo an equivalence relation ∼ . This relation is defined as follows: For a tangent vector [(z, v)], the Fubini-Study metric is given by Since the vector field Y ξ is given by [z, ξz], we have We have the following exact sequence of vector bundles over Y Let N ⊂ ι * T P N −1 be the orthogonal complement of T Y . Then as smooth vector bundles, we have We denote the projections onto the first and second component by π T and π N respectively. Define Direct computation shows that The following is straightforward. FS Therefore, the estimate in Theorem 3.2 will follow from: We will prove (3.8) in Proposition 3.6 and (3.9) in Proposition 3.9. Assuming these, we give the Proof of Theorem 3.2. Proof of Theorem 3.2. By (3.4), we have Applying Proposition 3.3, we get Thus, in order to prove Theorem 3.2, we need to show that By (3.8), we have Hence (3.9) implies that Lemma 3.4. There exists a positive constant c independent of k such that for any Proof. In the proof of this Lemma, we put ω k = ω 0 + kω ∞ and α = On the other hand Y g 2 j α n = 1 which implies that the sequence g j is bounded in L 2 1 (α n ). Hence, g j has a subsequence which converges in L 2 (α n ) and converges weakly in L 2 1 (α n ) to a function g ∈ L 2 1 (α n ). Without loss of generality, we can assume that the whole sequence converges. Since Y |∂g j | 2 α α n → 0 as k → ∞, it can be easily seen that g is a constant function. We have Since g is a constant function and Y ω n where n 2 is defined by (3.2). On the other hand It is a contradiction since ||g j || = 1 and g j → g in L 2 (α n ). The proof of the following lemma can be found in ( [PS2,p. 704]). For the sake of completeness, we give the details. Lemma 3.5. There exists a positive constant c R independent of k such that for any Kähler form ω ∈ c 1 (O(1)⊗L k ) having R-bounded geometry and any Proof. Since ω has R-bounded geometry, we have Therefore, On the other hand, there exists a unique function φ such that ω − ω 0 = ∂∂φ and Y φ ω n 0 = 0. Hence, We have, So, we get Hence, Proposition 3.6. There exists a positive constant c R such that for any ξ ∈ su(N), we have ||ξ|| 2 ≤ c R k m ||Y ξ || 2 , where ||.|| in the right hand side denotes the L 2 -norm with respect to the Kähler form ω on Y and Fubini-Study metric on the fibres. Since D (k) → n 2 /n 1 as k → ∞, there exists a positive constant c such that On the other hand Now applying Lemma 3.5, we get This implies Since ||M (k) || op ≤ ǫ and ǫ is small enough, there exists a positive constant c such that We know that ∂φ| Y = ι π T Y ξ ω which implies Lemma 3.7. For k ≫ 0, we have where S is the scalar curvature. Proof. We have for some smooth nonnegative functions f j on Y . The function f m is positive, since ω n−m 0 ∧ ω m ∞ is a volume form. 
Therefore there exists a positive constant l such that f m ≥ l > 0. We define We have Hence there exists a positive constant C such that Fix a point p ∈ Y and a holomorphic local coordinate z 1 , ..., z n around p such that where λ i 's are some nonnegative real numbers. Therefore, we have for k ≫ 0. Proposition 3.8. For any holomorphic vector field V on P N −1 , we have Proof. The following is from ( [PS2,). For the sake of completeness, we give the details of the proof. Fix x ∈ Y . Let e 1 , ..., e n , f 1 , ..., f m be a local holomorphic frame for ι * T P N −1 around x such that (1) e 1 (x), ..., e n (x), f 1 (x), ..., f m (x) form an orthonormal basis. (2) e 1 , ..., e n is a local holomorphic basis for T Y . Then there exist holomorphic functions a j and b j 's such that where φ ij 's are smooth functions. Since e 1 (x), ..., e n (x), f 1 (x), ..., f m (x) form an orthonormal basis, we have φ ij (x) = 0. Then It implies that So in order to establish 3.8, we need to prove that Using the Cauchy-Schwartz inequality, it suffices to prove where C 2 = C 2 (R) is independent of k (depends on R.) Now the matrix A * = (∂φ ij ) is the dual of the second fundamental form A of T Y in ι * T P N −1 . Let F ι * T P N−1 be the curvature tensor of the bundle ι * T P N −1 with respect to the Fubini-Study metric. . Also let F T Y be the curvature tensor of the bundle T Y with respect to the pulled back Fubini-Study metric ω = ι * ω FS . Now by computations in [PS2,5.28 where Λ e ω is the contraction with the Kähler form ω. The formula [PS2,5.33] gives Λ e ω T r π T • (F ι * T P N−1 T Y ) = n + 1. On the other hand Λ e ω T r(F T Y ) is the scalar curvature of the metric ω on Y . Since ω has R-bounded geometry, we have The only thing we need in addition is the following Proposition 3.9. Assume that there are no nonzero holomorphic vector fields on Y . Then there exists a constant c ′ R such that for any ξ ∈ su(N), we have Proof. We define α = ω 0 + ω ∞ . Since there are no holomorphic vector fields on Y , for any smooth smooth vector field W on Y , we have . Hence, there exists a positive constant c depends on R and independent of k, such that for any ω 0 having R-bounded geometry, we have . On the other hand which implies the desired inequality. Perturbing To A Balanced Metric We continue with the notation of the previous section. The goal of this section is to prove Theorem 4.6 which gives a condition for when an almost balanced metric can be perturbed to a balanced one. In order to do this, first we need to establish Theorem 4.5. We need the following estimate. Proposition 4.1. There exist positive real numbers K j depends only on h 0 , g ∞ and j such that for any s ∈ H 0 (Y, O(1) ⊗ L k ), we have In order to prove Proposition 4.1, we start with some complex analysis. Let ϕ be a strictly plurisubharmonic function and ψ be a plurisubharmonic function on B = B(2) ⊂ C n such that ϕ(0) = ψ(0) = 0. We can find a coordinate on B(2) such that Theorem 4.3. There exist positive real numbers c j depends only on j, ϕ, ψ and dµ such that for any holomorphic function u : where dµ is a fixed volume form on B. Proof. Applying Cauchy estimate to u (k) , we get since e − P (λ i +1)|z i | 2 is bounded from below by a positive constant on the unit ball. Using the change of variable w = z √ k we get On the other hand, we have for some constant c depending only on ψ and ϕ. Hence Hence, |u| 2 e −ϕ−kψ dµ. Proof of Proposition 4.1. Fix a point p in Y and a geodesic ball B ⊂ Y centered at p. 
Let e_L be a holomorphic frame for L on B and e a holomorphic frame for O(1) such that ||e_L||(p) = ||e||(p) = 1. Any s ∈ H^0(Y, O(1) ⊗ L^k) can be written as s = u e ⊗ e_L^{⊗k} for some holomorphic function u : B → C. We have Therefore, applying Theorem 4.3 concludes the proof. For the rest of this section, we fix a positive integer q. We continue with the notation (Y, ω_∞, ω_0, ω̂_0) of Section 3. In the rest of this section, we fix the reference metric ω̂_0 on Y and recall Definition 3.2. We state the following lemma without proof; the proof is a straightforward calculation. Assume that there exists a sequence of almost balanced metrics h_k of order q and bases s_k = (s_1^k, ..., s_N^k) for H^0(Y, O(1) ⊗ L^k) which satisfy (4.1). As before, ω_k = Ric(h_k). Then Lemma 4.4 implies that for k ≫ 0, ω_k has R-bounded geometry. Fix k and let B ∈ isu(N_k). Without loss of generality, we can assume that B is the diagonal matrix diag(λ_i), where λ_i ∈ R and Σ λ_i = 0. There exists a unique hermitian metric h_B on O_{PE^*}(1) ⊗ L^k such that Let ω_B = Ric(h_B). In the next theorem, we will prove that there exist a constant c and open balls U_k ⊂ isu(N_k) around the origin of radius ck^{-(n+a+2)} so that if B ∈ U_k, then h_B is R-bounded. More precisely, Theorem 4.5. Suppose that (4.1) holds. • There exist c > 0 and k_0 > 0 such that if k ≥ k_0 and B ∈ isu(N_k) satisfies If ||B||_op is small enough, there exists C > 0 so that and therefore Proposition 4.1 implies that In order to show that ω_B is R-bounded, we need to prove the following: To prove (4.4), (4.1) and (4.2) imply that for k ≫ 0 To prove (4.5), applying Lemma 4.4 with ε = (R-1)/2R gives and therefore (4.3) implies In order to prove the second part, by a unitary change of basis we may assume without loss of generality that the matrix M_B is diagonal. By definition We have Therefore, Theorem 4.6. Suppose that the sequence of metrics h_k on O(1) ⊗ L^k and bases s_k = (s_1^k, ..., s_N^k) for H^0(Y, O(1) ⊗ L^k) is almost balanced of order q, and suppose that (4.1) holds. If q > 5m^2 + n + a + 5, then (Y, O(1) ⊗ L^k) admits a balanced metric for k ≫ 0. Asymptotic Expansion The goal of this section is to prove Theorem 1.3, which gives an asymptotic expansion for the Bergman kernel of (PE^*, O_{PE^*}(1) ⊗ π^*L^k). We obtain such an expansion by using the Bergman kernel asymptotic expansion proved in ([C], [Z]). We also compute the first nontrivial coefficient of the expansion. In the next section, we use this to construct a sequence of almost balanced metrics. We start with some linear algebra. Let V be a hermitian vector space of dimension r. The projective space PV^* can be identified with the space of hyperplanes in V: if f ≠ 0, then V_f = ker f is a hyperplane. There is a natural isomorphism between V and H^0(PV^*, O_{PV^*}(1)) which sends v ∈ V to v̂ ∈ H^0(PV^*, O_{PV^*}(1)) such that for any f ∈ V^*, v̂(f) = f(v). Now the inner product on V induces an inner product on V^* and then a metric on O_{PV^*}(1). For v, w ∈ V and f ∈ V^* we define Definition 5.1. For any inner product h on V, we denote the induced metric on O_{PV^*}(1) by ĥ. The following is a straightforward computation. Definition 5.2. For any v ∈ V, we define an endomorphism of V by Let (X, ω) be a Kähler manifold of dimension m and E a holomorphic vector bundle on X of rank r. Let L be an ample line bundle on X endowed with a Hermitian metric σ such that Ric(σ) = ω.
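Definition 5.1 refers to the Fubini-Study-type metric that an inner product h on V induces on O_{PV^*}(1). Written out in the usual convention (a sketch matching the clause "For v, w ∈ V and f ∈ V^*" above; the normalization is assumed, not quoted):

\[
\widehat{h}\bigl(\widehat{v},\widehat{w}\bigr)(f)\;=\;\frac{f(v)\,\overline{f(w)}}{\|f\|_{h^{*}}^{2}},\qquad f\in V^{*}\setminus\{0\},
\]

where ‖·‖_{h^*} is the dual norm on V^*; the right-hand side is invariant under rescaling of f, so it is well defined on PV^*.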
For any hermitian metric h on E, we define the volume form where g = h ,ω g = Ric(g) = Ric( h) and π : PE * → X is the projection map. The goal is to find an asymptotic expansion for the Bergman kernel of O PE * (1) ⊗ L k → PE * with respect to the L 2 -metric defined on H 0 (PE * , O PE * (1) ⊗ π * L k ). We define the L 2 -metric using the fibre metric g ⊗ σ ⊗k and the volume form dµ g,k defined as follows In order to do that, we reduce the problem to the problem of Bergman kernel asymptotics on E ⊗ L k → X. The first step is to use the volume form dµ g which is a product volume form instead of the more complicated one dµ g,k . So, we replace the volume form dµ g,k with dµ g and the fibre metric g ⊗ σ k with g(k) ⊗ σ k , where the metrics g(k) are defined on O PE * (1) by Clearly the L 2 -inner products L 2 (g ⊗ σ k , dµ g,k ) and In order to do this we somehow push forward the metric g(k) to get a metric g(k) on E (See Definition 5.5). Then we can apply the result on the asymptotics of the Bergman kernel on E. The last step is to use this to get the result. Definition 5.3. Let s k 1 , ...., s k N be an orthonormal basis for H 0 (PE * , O PE * (1)⊗ π * L k ) w.r.t. L 2 (g ⊗ σ k , dµ k,g ). We define Definition 5.4. For any (j, j)-form α on X, we define the contraction Λ j ω α of α with respect to the Kähler form ω by In this section we fix the Kähler form ω on X and therefore simply denote Λ j ω α by Λ j α. Lemma 5.2. Let ν 0 be a fixed Kähler form on X. For any positive integer p there exists a constant C such that for any (j, j)-form γ, we have Proof. Let γ be a (j, j)-form. By definition, we have Therefore for any positive integer p, we have Applying Leibnitz rule, we get Thus there exists a positive constant C ′ so that On the other hand there exists constant c p,j such that for any any 0 ≤ j ≤ m − 1, Hence there exists a constant C such that Definition 5.5. For any hermitian form g on O PE * (1), we define a hermitian form g on E as follow Notice that if g = h for some hermitian metric h on E, Proposition 5.1 implies that g = h. Define hermitian metrics g j 's on E by for s, t ∈ E x . Also we define Ψ j ∈ End(E) by (5.8) g j = Ψ j h. Proposition 5.3. Let ν 0 be a fixed Kähler form on X as in Lemma 5.2. For any positive numbers l and l ′ and any positive integer p, there exists a positive number C l,l ′ ,p such that if Proof. Fix a point p ∈ X. Let e 1 , ..., e r be a local holomorphic frame for E around p such that Let λ 1 , ..., λ r be the homogeneous coordinates on the fibre. At the fixed point p, we have Therefore, Therefore, Simple calculation gives when j 1 + · · · + j r = j and 1 ≤ α ≤ r. Hence From theory of symmetric functions, one can see that there exist polynomials P i (x 1 , . . . , x j ) of degree i such that where c i (h) is the i th chern form of h. Since ||h|| C p+2 (ν 0 ) ≤ l, there exists a positive constant c ′ such that ||F j h + · · · + P j (c 1 (h), . . . , c j (h))|| C p (ν 0 ) ≤ c ′ (1 + l) j . Therefore Lemma 5.2 implies that On the other hand . Now we can conclude the proof by induction on p. Lemma 5.4. We have the following (1) Ψ m = I E . Proof. The first part is an immediate consequence of Proposition 5.1 and the definition of Ψ m . For the second part, we use the notation used in the proof of Proposition 5.3. It is easy to see that for α = β, we get g m−1 (e α , e β ) = 0. On the other hand by plugging j = 1 in (5.9), we get g m−1 (e α , e α ) = 1 (r + 1) (T r(ΛF ) + Λω α ). The following lemmas are straightforward. Lemma 5.6. 
Let s 1 , ..., s N be a basis for H 0 (X, E). Then Let B k (h(k), ω) be the Bergman kernel of E ⊗ L k with respect to the L 2 -metric defined by the hermitian metric h(k) ⊗ σ k on E ⊗ L k and the volume form ω m m! on X. Therefore, if s 1 , ..., s N is an orthonormal basis Let s 1 , ...., s N be the corresponding basis for H 0 (PE * , O PE * (1) ⊗ L k ). Hence, Therefore 1 √ Cr s 1 , ...., 1 √ Cr s N is an orthonormal basis for H 0 (PE * , O PE * (1)⊗ L k ) with respect to L 2 (g ⊗ σ k , dµ k,g ). Hence Lemma 5.6 implies Now, in order to conclude the proof, it suffices to show that there exist smooth endomorphisms A i ∈ Γ(X, E) such that Let B k (h, ω) be the Bergman kernel of E ⊗ L k with respect to the L 2 (h ⊗ σ k ). A fundamental result on the asymptotics of the Bergman kernel ( [C], [Z]) states that there exists an asymptotic expansion (See also [BBS], [W2].) Moreover this expansion holds uniformly for any h in a bounded family. Therefore, we can Taylor expand the coefficients B i (h)'s. We conclude that for endomorphisms Φ 1 , ..., Φ M , Note that B 1 (h) in the above expansion does not depend on Φ i 's and is given as before by On the other hand Therefore, Notice that Proposition 5.3 implies that if h and ω vary in a bounded family and ω is bounded from below, then Ψ 1 , .., Ψ m vary in a bounded family. Therefore the asymptotic expansion that we obtained for B k (h, ω) is uniform as long as h and ω vary in a bounded family and ω is bounded from below. Proposition 5.7. Suppose that ω ∞ ∈ 2πc 1 (L) be a Kähler form with constant scalar curvature and h HE be a Hermitian-Einstein metric on E, i.e. Constructing Almost Balanced Metrics Let h ∞ be a hermitian metric on L such that ω ∞ = Ric(h ∞ ) be a Kähler form with constant scalar curvature and h HE be the corresponding Hermitian-Einstein metric on E, i.e. where µ is the slope of the bundle E. Let ω 0 = Ric( h HE ). After tensoring by high power of L, we can assume without loss of generality that ω 0 is a Kähler form on PE * . We fix an integer a ≥ 4. In order to prove the following, we use ideas introduced by Donaldson in ([D3, Theorem 26]) Theorem 6.1. Suppose Aut(X, L) is discrete. There exist smooth functions η 1 , η 2 , ... on X and smooth endomorphisms Φ 1 , Φ 2 , ... of E such that for any positive integer q if Proof. The error term in the asymptotic expansion is uniformly bounded in C a+2 for all h and ω in a bounded family. Therefore there exists a positive integer s depends only on p and q such that , where A p,j are homogeneous polynomials of degree j , depending on h and ω, in Φ and η and its covariant derivatives. Let Φ 1 , ..., Φ q be smooth endomorphisms of E and η 1 , ..., η q be smooth functions on X. We have where b p,j 's are multi linear expression on Φ i 's and η i 's. We need to choose Φ j and η j such that coefficients of k m , ...k m−q in the right hand side of (6.4) are constant. Donaldson's key observation is that η p and φ p only appear in the coefficient of k m−p in the form of A 1,1 (φ p , η p ). Hence, we can do this inductively. Assume that we choose η 1 , η 2 , ...η p−1 and Φ 1 , Φ 2 , ..., Φ p−1 so that the coefficients of k m , ...k m−p+1 are constant. Now we need to choose η p and Φ p such that the coefficient of k m−p is constant. This means that we need to solve the equation (6.5) A 1,1 (Φ p , η p ) − c p I E = P p−1 , for Φ p , η p and the constant c p . In this equation P p−1 is determined by Φ 1 , ..., Φ p−1 and η 1 , ..., η p−1 . Corollary 5.8 implies that we can always solve the equation (6.5). 
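The inductive scheme of Theorem 6.1 and its proof can be summarized schematically as follows (a sketch; the precise bookkeeping of the coefficients is as in the proof above):

\[
h(k)\;=\;h_{HE}\Bigl(\mathrm{I}_E+\sum_{j=1}^{q}\Phi_j\,k^{-j}\Bigr),
\qquad
\omega(k)\;=\;\omega_\infty+\sqrt{-1}\,\partial\bar{\partial}\sum_{j=1}^{q}\eta_j\,k^{-j},
\]

with (Φ_p, η_p) chosen inductively so that the coefficient of k^{m-p} in the expansion of the Bergman endomorphism is a constant multiple of the identity. At the p-th step the new pair enters only through the linear term A_{1,1}(Φ_p, η_p), and by Corollary 5.8 the resulting equation (6.5) is always solvable.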
Proof of the main theorem
In this section, we prove Theorem 1.2. In order to do that, we want to apply Theorem 4.6. Hence, we need to construct a sequence of almost balanced metrics on (PE^*, O_{PE^*}(1) ⊗ L^{⊗k}). Also, we need to show that PE^* has no nontrivial holomorphic vector fields. Proposition 7.1. Let E be a holomorphic vector bundle over a compact Kähler manifold X. Suppose that X has no nonzero holomorphic vector fields. If E is stable, then PE^* has no nontrivial holomorphic vector fields. Proof of Theorem 1.2. Since Chow stability is equivalent to the existence of a balanced metric, it suffices to show that (PE^*, O_{PE^*}(1) ⊗ π^*L^k) admits a balanced metric for k ≫ 0. Fix a positive integer q. From now on we drop all indices q for simplicity. Let σ_k = σ_{k,q} be a metric on L such that Ric(σ_k) = ν_k, where ν_k = ν_{k,q} is the form in the statement of Theorem 2.1. Let t_1, ..., t_N be an orthonormal basis for H^0(PE^*, O_{PE^*}(1) ⊗ L^k) w.r.t. L^2(g_k ⊗ σ_k^{⊗k}, (ω_{g_k} + kν_k)^{m+r-1}/(m+r-1)!). Thus, Corollary 6.2 implies This implies that the metric g′_k is the Fubini-Study metric on O_{PE^*}(1) ⊗ L^k induced by the embedding ι_t : PE^* → P^{N-1}, where t = (t_1, ..., t_N). We prove that this sequence of embeddings is almost balanced of order q, i.e., where M^{(k)} = [M_{ij}] is a trace-free hermitian matrix, D^{(k)} → C_r as k → ∞ and ||M^{(k)}||_{op} = O(k^{-q-1}). By a unitary change of basis, we may assume without loss of generality that the matrix M^{(k)} is diagonal. Thus On the other hand, Therefore,
2009-05-06T17:52:24.000Z
2009-05-06T00:00:00.000
{ "year": 2009, "sha1": "09e2a59b2436e08a1f0d21d8c84a1df7f189d2c4", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0905.0879", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "c1988260d561f53ee16ec239b142491f02bd805b", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
29390989
pes2o/s2orc
v3-fos-license
An unusual foreign body in the infratemporal fossa

Infratemporal fossa injuries are uncommon and often go undetected, presenting later with complications. We present a case of a penetrating injury of the infratemporal fossa by a ballpoint pen spring following a vehicular accident. Post-traumatic trismus, even following supposedly trivial injury in the area, should raise suspicion of a possible injury in this location.

INTRODUCTION

Recognition of injuries to the infratemporal fossa is of extreme importance in emergency situations, as these can lead to life-threatening complications if undetected. We report an unusual cause of infratemporal fossa penetrating injury caused by a spring from a ballpoint pen, which was initially missed and detected by the presence of post-traumatic trismus.

CASE REPORT

A 54-year-old man presented to our emergency room after being involved in a motor vehicle accident in which the van he was travelling in overturned. He was conscious and oriented, and his only complaints were bleeding from the right ear and pain in the area when he tried to open his mouth. On examination, he was found to have swelling and tenderness in front of the right ear. There was a 1 cm long vertical laceration just medial to the right tragus [Figure 1]. There was no active bleeding. He had a full range of mouth opening, but it was accompanied by pain in the right pre-auricular area. His dental occlusion was normal. The laceration was sutured under local anaesthesia, and a screening X-ray of the skull was ordered. The X-ray showed a spring in the right infratemporal fossa [Figure 2]. This was initially thought to be an artifact, but on questioning the patient, he recalled pulling out a ballpoint pen, which had impaled him in his right ear, immediately after the accident. He had been carrying the pen in the right breast pocket of his shirt when the accident occurred. A computed tomography (CT) scan was carried out, which showed a 3 cm long spring in his right infratemporal fossa [Figure 3]. He was taken up for exploration of the wound under general anaesthesia, with the plan to approach the infratemporal fossa through the intraoral route, using the assistance of an image intensifier if required. The wound was gently probed using a medium artery clamp with a finger in the mouth acting as a guide. The tract was 8 cm long, extending anteromedially into the infratemporal fossa. Multiple fragments of black plastic and a 3 cm long spring were removed, obviating the need for an open procedure. After thoroughly irrigating the wound, it was loosely closed with 5-0 nylon sutures. His post-operative period was uneventful.

DISCUSSION

There have been a number of reports of foreign bodies extracted from the infratemporal fossa secondary to external trauma or following complications of oral surgery. A number of these have gone unrecognised at the time of initial injury and have had to be removed later following complications. [1][2][3][4][5][6] A majority of these patients have presented with pain and trismus; however, the diagnosis has been missed in a number of patients, and they have been diagnosed once complications have occurred. In all the reported cases, except for one where the foreign body was vegetative material, the diagnosis was confirmed on a CT scan. [5] The infratemporal fossa is a wedge-shaped space between the ramus of the mandible laterally and the wall of the pharynx medially. [7] It has a roof, medial, lateral and anterior walls, and opens posteroinferiorly into the neck.
There are varied descriptions of the medial wall. Gray's Anatomy describes the medial boundary as the lateral pterygoid plate, the pharynx and the tensor and levator veli palatini. However, it has also been described in other texts as the sphenoid pterygoid process, the lateral portion of the clivus, the first cervical vertebra and the inferior surface of the petrous portion of the temporal bone. [8] A number of neurovascular structures traverse to and from the brain and brain stem through the infratemporal fossa, and it is of great importance to skull base surgeons. The major contents of the infratemporal fossa include:
• The sphenomandibular ligament
• The medial and lateral pterygoid muscles
• The maxillary artery
• The mandibular nerve, branches of the facial nerve and the glossopharyngeal nerve (IX)
• The pterygoid plexus of veins.
A number of benign or malignant tumours may extend into the space from surrounding areas such as the paranasal sinuses, middle cranial fossa, nasopharynx, parotid gland or external ear canal, or occasionally primary tumours may arise from structures within the space. Metastatic lesions in this space are rare. [9] Retained foreign material has been reported to be responsible for a number of late complications such as delayed aneurysm, [10] foreign body granulomas and reactions, and migration with erosion of vessel walls. [11] Other reported late complications include cellulitis, abscess formation and discharging sinus. The infratemporal fossa lies in a central position between the tissue spaces of the face and the tissue spaces of the neck. Infections involving the infratemporal fossa could potentially spread through the head and neck, particularly around the pharynx, and compromise the airway, which could be life-threatening. In addition, given the theoretical possibility of migration and erosion into the maxillary artery, the early removal of these foreign bodies from this space would be advised. Traditionally, the major surgical approaches to the infratemporal fossa have been described as anterior (transfacial, transmaxillary, transoral and transpalatal), lateral (transzygomatic and lateral infratemporal) or inferior (transmandibular and transcervical), or a combination of the three. These give a wide surgical exposure but require considerable tissue dissection and violation of major anatomical structures, and may even require osteotomies. [12] On reviewing the literature, a major surgical procedure has not been required to remove a foreign body in this area, as only limited surgical exposure has been needed compared to that required for tumour surgery [Table 1]. With the advent of image-guided endoscopic procedures, [13] the number of open procedures to retrieve foreign bodies in the region is likely to go down.

CONCLUSION

Almost all patients with foreign bodies in the infratemporal fossa, in the literature reviewed, presented with pain and trismus. It would be prudent to say that one should be aware of a possible foreign body if there is post-traumatic trismus even following supposedly trivial injury in the area, and rule out the diagnosis only after appropriate radiological studies are carried out.

Financial support and sponsorship
Nil.
2018-04-03T01:22:50.170Z
2016-05-01T00:00:00.000
{ "year": 2016, "sha1": "7be1c3d01f3e9b4d80d9438addeff607a08e620a", "oa_license": "CCBYNCSA", "oa_url": "http://www.thieme-connect.de/products/ejournals/pdf/10.4103/0970-0358.191312.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "7be1c3d01f3e9b4d80d9438addeff607a08e620a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
139396608
pes2o/s2orc
v3-fos-license
2D Analysis of Concrete Composition in the Context of Determining Air Pore Structure Parameters

The concrete air pore structure quality evaluation is conducted according to the EN 480-11 standard. One of the basic parameters required for the concrete air pore structure quality evaluation is the P/A (paste/air) ratio. In practice, the paste's volume is determined based on the concrete mixture composition. In actual conditions, fluctuations in the air content can be observed, which directly affect the concrete composition and thus the P/A ratio, thereby ultimately affecting the results of the spacing factor L̄. The aim of these studies was to determine the impact of various air contents and concrete mixture component proportions on the air pore structure parameters. The testing concerned surface concrete with an air content of approx. 5% and a paste volume of P = 26%. Standard tests for three samples with the air content fluctuating from 3.58 to 7.98% were conducted. The second stage of testing included 2D measurements of samples with the use of an innovative method of sample lighting, which allowed determining the contents of particular phases (aggregate, air, paste). The results of the standard tests were compared with the results obtained from the 2D analysis. It was found that the values of the spacing factor determined by the 1D and 2D analyses differ by 0.003 to 0.016 mm.

Introduction

The air pore structure parameter determination is conducted based on the EN 480-11 standard [1]. The tests include the use of two samples with the dimensions of 100 × 150 × 40 mm. One sample is measured across a traverse of 1,200 mm. Measurement lines spaced 6 mm apart are distributed over the entire sample surface: 4 lines each on the sample's bottom and top part, and the remaining ones in the middle. The determination of the air pore structure parameters requires the following data: the total length of the measurement line (T_tot), the total length of the measurement line passing through the air pores (T_a), the paste volume fraction (P) and the total number of measured chords (N). The quality of the testing methods and their impact on the evaluation of the air pore structure characteristics was the subject of many studies [2][3][4][5][6][7][8][9][10][11][12][13]. In practice, calculations often utilise the paste (P) volume based on the mixture composition obtained from preliminary testing. This approach can lead to erroneous calculations if the composition is not corrected depending on the change in the air content. The dependency between the composition and density of the concrete mixture can be presented as follows:

Assuming that the W/C ratio (= const.) and the aggregate granulation (g_k = const.) are not changing, the concrete density g_b depends on the air content zp. Changing the air content zp changes the concrete density g_b and thus the cement content (paste volume). For example, let us assume that in a mixture with the bulk density g_b1 = 2490 kg/m³ the cement content amounts to C = 360 kg/m³ and the air content is zp_1 = 4%. If the W/C ratio = 0.45 and the aggregate granulation do not change, the change in the mixture's bulk density results directly from the change in the air content. If the air content increases to zp_2 = 8%, then the density will amount to g_b2 = 2375 kg/m³. In effect, the paste-to-air ratio will decrease from (P/A)_1 = 6.95 to (P/A)_2 = 3.325. The above values affect the results of the spacing factor calculations.
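The arithmetic behind the drop from 6.95 to 3.325 can be reproduced with a short script. The cement and water densities below (3.10 and 1.00 kg/dm³) are assumptions, since the paper does not state them, and paste is taken as cement plus water; the rescaling of the cement content with rising air content follows the constant-W/C, constant-granulation setup described above.

RHO_C, RHO_W = 3.10, 1.00  # assumed cement and water densities, kg/dm^3

def paste_to_air(cement_kg, wc_ratio, air_pct):
    """P/A for a 1 m^3 mix: paste volume fraction (%) divided by air content (%)."""
    paste_litres = cement_kg / RHO_C + cement_kg * wc_ratio / RHO_W
    return (paste_litres / 10.0) / air_pct  # litres per m^3 -> % of total volume

# Reference mix: C = 360 kg/m^3, W/C = 0.45, air zp1 = 4%
print(round(paste_to_air(360.0, 0.45, 4.0), 2))   # -> 6.95

# Air rises to zp2 = 8%: solids and water are displaced by the extra air,
# so the per-m^3 cement content scales by (100 - 8) / (100 - 4).
c2 = 360.0 * (100 - 8) / (100 - 4)
print(round(paste_to_air(c2, 0.45, 8.0), 2))      # -> 3.33, matching ~3.325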
If the paste/air ratio amounts to P/A > 4.342, then the spacing factor is calculated according to the following formula:

If the paste/air ratio amounts to P/A ≤ 4.342, then the spacing factor is calculated according to the following formula:

The determination of the actual content of paste in the concrete will be possible thanks to 2D measurements. The test results of the paste (P) volume based on microscopic measurements should, however, be treated with reservation. They can lead to an erroneous estimation of the concrete's air content. According to Pleau et al. [11], measurements often lead to underestimation of the concrete's air content. Based on their own research, they point out that the calculated paste content is approx. 12% higher than that demonstrated by microscopic measurements according to ASTM C457 [14]. The aim of these studies is to attempt to determine the impact of various air contents and concrete mixture component proportions on the air pore structure parameters. These parameters were determined using the standard method (1D) and based on 2D analyses of the concrete sample surfaces.

Materials and Methods

The subject of the testing was surface concrete with an air content of approx. 5% and a paste volume of P = 26%. Standard tests for three samples with the air content fluctuating from 3.58 to 7.98% were conducted. The second stage of testing included 2D measurements of samples with the use of an innovative method of sample lighting, which allowed determining the contents of particular phases (aggregate, air, paste). The scope of testing of hardened concrete included the determination of:
• the air pore structure characteristics (A, A_300, L̄, α) according to PN-EN 480-11:2008 [1],
• the content of paste P_2D, the air content A_2D and the spacing factor L̄_2D in 2D measurements.
The automatic image analysis was conducted with the use of a set-up (Figure 1) composed of a stereo microscope, a CCD camera, a motorized stage and a computer with Image-Pro Plus [15] software enabling determination of the tested parameters. The first stage of testing featured preparation of the samples and determination of the air pore structure characteristics with the use of the chord count method according to the PN-EN 480-11:2008 standard. In the second stage, the samples were prepared for the 2D analysis. A fluorescent powder was applied to the sample surface prepared for testing (Figure 2). Then, a series of photographs was taken for each sample. The sample surface was photographed with two different types of lighting. The UV lighting was used to highlight the influence of the fluorescent powder and extract air pores (Figure 3a), whereas the LED lighting was used to capture aggregate grains (fine and coarse) (Figure 3b). The air pore as well as fine and coarse aggregate images were subjected to processing, among others in the form of removal of pores from aggregate grains, reproduction of air pores imprecisely filled by the fluorescent powder, reconstruction of the shape of air pores or aggregate grains damaged during grinding or polishing, etc.

Table 1 presents the test results of the air pore structure parameters obtained from the 1D analysis (standard method) and the 2D analysis. For the standard calculations, a paste content of P = 26% determined based on the concrete mixture composition was assumed. The air content A in the tested samples ranged from 3.58 to 7.98%, and the content of pores with diameters of up to 300 µm, A_300, from 2.25 to 3.40%.
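For reference, the two-branch spacing-factor rule quoted at the start of this section is commonly written as L̄ = P/(α·A) for P/A ≤ 4.342 and L̄ = (3/α)(1.4(1 + P/A)^(1/3) − 1) for P/A > 4.342, with α the specific surface of the void system; since the displayed formulas did not survive extraction here, these constants are quoted from the standard form of the Powers equations and should be verified against EN 480-11 before use. A minimal sketch (the α value in the demo call is made up):

def spacing_factor(paste_pct, air_pct, alpha):
    """Powers spacing factor L-bar (mm), two-branch form of EN 480-11 / ASTM C457.
    paste_pct and air_pct in %, alpha = specific surface in 1/mm.
    Constants quoted from memory -- verify against the standard."""
    ratio = paste_pct / air_pct
    if ratio <= 4.342:
        return ratio / alpha
    return (3.0 / alpha) * (1.4 * (1.0 + ratio) ** (1.0 / 3.0) - 1.0)

print(round(spacing_factor(26.0, 5.0, 25.0), 3))  # -> 0.188, a plausible magnitude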
The values of the spacing factor L̄ range from 0.139 to 0.218 mm. Based on the 2D analyses of the tested sample surfaces, the contents of aggregate, air A_2D and paste P_2D were determined. The actual paste volume P_2D determined in the 2D measurements ranges from 24.28 to 27.79%, and the air content A_2D ranges from 3.69 to 9.74%. The values of the spacing factor L̄_2D ranged from 0.142 to 0.228 mm. The difference in the values of the spacing factor obtained from the 1D and 2D analyses ranges from 0.003 to 0.016 mm. The difference in the air content ranges from 0.05 to 1.76% and results, among other factors, from the presence of large air pores, which are easy to extract in the 2D method. The volume of air pores with a diameter greater than 2 mm in the analysed concretes B1, B2, B3 (2D analysis) amounted to 0.26%, 1.68% and 3.59%, respectively. The biggest differences were observed in the samples with the largest air content. In the standard method, air pores with diameters of up to 4 mm were taken into consideration. Pores larger than 4 mm present on the sample surfaces were also extracted in the 2D analysis. Figure 4 presents the comparison of the test results of the air content obtained from the 1D and 2D analyses. Figure 5 presents the comparison of the spacing factor determined in the 1D analysis (L̄) with the spacing factor determined in the 2D analysis (L̄_2D).

Conclusions

Based on the analysis of the test results of three concrete samples with various air contents, it was found that the values of the spacing factor L̄ determined by the 1D and 2D analyses differ by 0.003 to 0.016 mm. The difference in the air content ranges from 0.05 to 1.76% and can be explained by the presence of large air pores, which are not considered in the standard method (1D). In the case of the standard method (1D), air pores with a diameter exceeding 4 mm are not taken into consideration. However, such pores exist in some areas of the tested samples and are easily extracted in the 2D method. The biggest difference is visible in the samples with the largest air content. According to the authors, the test results obtained for the air pore structure using the 2D analysis are promising, and this line of work is worth continuing. The remaining problem is the determination of the content of air pores with diameters of up to 300 µm (A_300). It is very effective to use in the 2D analysis an innovative method of sample lighting, which allows determining the contents of particular phases (aggregate, air, paste).
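As a companion to the 2D method described above, the two-illumination phase split can be sketched as a simple per-pixel classification. Everything here is hypothetical (thresholds, inputs); the real workflow additionally involves the artefact clean-up steps the paper lists (pore in-filling, shape reconstruction, etc.):

import numpy as np

def phase_fractions(uv_img, led_img, uv_thresh=128, led_thresh=128):
    """Toy split of a sample surface into air / aggregate / paste:
    pixels bright under UV (fluorescent powder) count as air voids;
    of the rest, pixels bright under LED count as aggregate;
    the remainder is paste. Thresholds are illustrative only."""
    air = uv_img >= uv_thresh
    aggregate = (~air) & (led_img >= led_thresh)
    paste = ~(air | aggregate)
    n = air.size
    return {"A_2D": 100.0 * air.sum() / n,
            "aggregate": 100.0 * aggregate.sum() / n,
            "P_2D": 100.0 * paste.sum() / n}

rng = np.random.default_rng(0)
demo = rng.integers(0, 256, (512, 512))
print(phase_fractions(demo, demo))  # demo on random data only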
2019-04-30T13:09:20.399Z
2019-02-23T00:00:00.000
{ "year": 2019, "sha1": "c99e5ffaf72026de656f665975e843dd1748c4b0", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/471/3/032022", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "efe4b97cc859323a73dbbc7376c61139ec09f9b2", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Physics", "Materials Science" ] }
10243532
pes2o/s2orc
v3-fos-license
Achievable Rates for Noisy Channels with Synchronization Errors

We develop several lower bounds on the capacity of binary input symmetric output channels with synchronization errors which also suffer from other types of impairments such as substitutions, erasures, additive white Gaussian noise (AWGN), etc. More precisely, we show that if the channel with synchronization errors can be decomposed into a cascade of two channels, where only the first one suffers from synchronization errors and the second one is a memoryless channel, a lower bound on the capacity of the original channel in terms of the capacity of the synchronization error-only channel can be derived. To accomplish this, we derive lower bounds on the mutual information rate between the transmitted and received sequences (for the original channel) for an arbitrary input distribution, and then relate this result to the channel capacity. The results apply without the knowledge of the exact capacity achieving input distributions. A primary application of our results is that we can employ any lower bound derived on the capacity of the first channel (the synchronization error channel in the decomposition) to find lower bounds on the capacity of the (original) noisy channel with synchronization errors. We apply the general ideas to several specific classes of channels such as synchronization error channels with erasures and substitutions, with symmetric q-ary outputs and with AWGN explicitly, and obtain easy-to-compute bounds. We illustrate that, with our approach, it is possible to derive tighter capacity lower bounds compared to the currently available bounds in the literature for certain classes of channels, e.g., deletion/substitution channels and deletion/AWGN channels (for certain signal-to-noise ratio (SNR) ranges).

I. INTRODUCTION

Depending on the transmitting medium and the particular design, different limiting factors degrade the performance of a general communication system. For instance, imperfect alignment of the transmitter and receiver clocks may be one such factor, resulting in a synchronization error channel modeled typically through insertion and/or deletion of symbols. Other factors include the effects of additive noise at the receiver, etc. The main objective of this paper is to study the combined effects of synchronization errors and additive-noise-type impairments, and in particular to "decouple" the effects of the synchronization errors from the other parameters and obtain expressions relating the channel capacity of the combined model and the synchronization error-only channel. We focus on achievable rates for channels which can be considered as a concatenation of two independent channels, where the first one is a binary channel suffering only from synchronization errors and the second one is either a memoryless binary input symmetric q-ary output channel (BSQC) or a binary input AWGN (BI-AWGN) channel. For instance, the first channel can be a binary insertion/deletion channel and the second one can be a binary symmetric channel (BSC) or a substitution/erasure channel (a ternary output channel, q = 3). Our development starts with the ternary (q = 3) and quaternary (q = 4) output cases, and then we generalize the results to the general q-ary output case.
Specifically, we obtain achievable rates for the concatenated channel in terms of the capacity of the synchronization error channel by lower bounding the information rate of the concatenated channel for input distributions which achieve the capacity of the synchronization error-only channel, and in terms of the parameters of the memoryless channel. The lower bounds are derived without the use of the exact capacity achieving input distribution of the synchronization error channel; hence any existing lower bound on the capacity (of the synchronization error-only channel) can be employed to obtain an achievable rate characterization for the original channel model of interest.

By channels with synchronization errors we refer to the binary memoryless channels with synchronization errors as described by Dobrushin in [1], where every transmitted bit is independently replaced with a random number of symbols (possibly the empty string, i.e., a deletion event is also allowed), and the transmitter and receiver have no information about the position and/or the pattern of the insertions/deletions. Different specific models of channels with synchronization errors are considered in the literature. Insertion/deletion channels are used as common models for channels with synchronization errors, e.g., the Gallager insertion/deletion channel [2], the sticky channel [3] and the segmented insertion/deletion channel [4]. Dobrushin [1] proved that Shannon's theorem applies for a memoryless channel with synchronization errors by demonstrating that information stability holds for memoryless channels with synchronization errors. That is, for the capacity of the synchronization error channel C_s, we can write C_s = lim_{N→∞} max_{P(X)} (1/N) I(X; Y), where X and Y are the transmitted and received sequences, respectively, and N is the length of the transmitted sequence. Therefore, the information and transmission capacities of memoryless channels with synchronization errors are equal, and we can employ any lower bound on the information capacity as a lower bound on the transmission capacity of a channel with synchronization errors.

There are many papers deriving upper and/or lower bounds on the capacity of insertion/deletion channels, e.g., see [5,6,7,8,9,10]; however, only very few results exist for insertion/deletion channels with substitution errors, e.g., [2,11], or in the presence of AWGN, e.g., [12,13]. Our interest is in the latter; in fact, in more general models incorporating erasures as well as q-ary channel outputs. Let us review some of the existing relevant results on insertion/deletion channels in a bit more detail. In [2], Gallager considers a channel model with substitution and insertion/deletion errors (sub/ins/del) where each bit gets deleted with probability p_d, replaced by two random bits with probability p_i, correctly received with probability p_c = (1 − p_d − p_i)(1 − p_s), and changed with probability p_f = (1 − p_d − p_i)p_s, and derives a lower bound on the channel capacity (in bits/use), where log(.) denotes the logarithm in base 2. Fertonani and Duman in [11] develop several upper and lower bounds on the capacity of the ins/del/sub channel, where they employ a genie-aided decoder that is supplied with side information about some suitably selected random processes. Therefore, an auxiliary memoryless channel is obtained in such a way that the Blahut-Arimoto algorithm (BAA) can be employed to obtain upper bounds on the capacity of the original channel.
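Before continuing the survey, a numerical companion to Gallager's model quoted above: his lower bound for the sub/ins/del channel is commonly stated as C ≥ 1 − H(p_d, p_i, p_c, p_f), i.e., one minus the entropy of the four event probabilities. Since the displayed equation did not survive extraction here, this exact form is an assumption quoted from memory; it does pass two sanity checks, reducing to the deletion-only bound 1 − H_b(p_d) when p_i = p_s = 0 and to the BSC capacity 1 − H_b(p_s) when p_d = p_i = 0.

import math

def h(p):
    # contribution of one event probability to the entropy sum (bits)
    return -p * math.log2(p) if p > 0 else 0.0

def gallager_lb(pd, pi, ps):
    """Gallager-style capacity lower bound (bits/use) for the sub/ins/del model:
    C >= 1 - H(pd, pi, pc, pf), with pc = (1-pd-pi)(1-ps) and pf = (1-pd-pi)ps.
    Quoted from memory -- treat the exact form as an assumption."""
    pc = (1 - pd - pi) * (1 - ps)
    pf = (1 - pd - pi) * ps
    return 1.0 - (h(pd) + h(pi) + h(pc) + h(pf))

print(round(gallager_lb(0.10, 0.0, 0.00), 4))  # deletion only: 1 - Hb(0.1) ~ 0.531
print(round(gallager_lb(0.00, 0.0, 0.05), 4))  # no synch errors: 1 - Hb(0.05) ~ 0.7136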
Furthermore, it is shown in [11] that by subtracting from the derived upper bounds a quantity which is, roughly speaking, more than the extra information provided by the side information, lower bounds on the capacity can also be derived. In [13], Monte Carlo simulation based results are used to estimate the information rates of different insertion and/or deletion channels, in the absence or presence of intersymbol interference (ISI), in addition to AWGN, with independent uniformly distributed (i.u.d.) input sequences. In [12], the synchronization errors are modeled as a Markov process, and simulation results are used to compute achievable information rates of an ISI channel with synchronization errors in the presence of AWGN. In [10], Rahmati and Duman derive analytical lower bounds on the capacity of insertion/deletion channels.

The paper is organized as follows. In Section II, we formally give the models for binary input symmetric q-ary output channels with synchronization errors and BI-AWGN channels with synchronization errors. In Section III, we give two lemmas and one proposition which will be useful in the proof of the result on BSQC channels with synchronization errors. In Section IV, we initially focus on a substitution/erasure/synchronization error channel (abbreviated as sub/ers/synch channel), which is a binary input symmetric ternary output channel, and then on a binary input symmetric quaternary output channel. After that, we extend the results to the case of more general symmetric q-ary output channels. In Section V, we lower bound the capacity of a synchronization error channel with AWGN (abbreviated as AWGN/synch channel) in terms of the capacity of the underlying synchronization-error-only channel. More precisely, we generalize the results on BSQC channels with synchronization errors when q goes to infinity. We present several numerical examples illustrating the derived results in Section VI. Finally, we conclude the paper in Section VII.

Fig. 1. Binary input symmetric q-ary output channel with synchronization errors.

II. CHANNEL MODELS

A general memoryless channel with synchronization errors [1] is defined via a stochastic matrix {p(y_i|x_i), y_i ∈ Y, x_i ∈ X}, where X is the input alphabet (e.g., for a binary input channel X = {0, 1}) and Y is the set of output symbols (which may include the empty string), with 0 ≤ p(y_i|x_i) ≤ 1 and Σ_{y_i∈Y} p(y_i|x_i) = 1. As a particular instance of this channel, if p(y_i = ∅|x_i) = p_d (∅ denoting the null string) and p(y_i = x_i) = 1 − p_d, we obtain an i.i.d. deletion channel.

A. Binary Input Symmetric q-ary Output Channel with Synchronization Errors

By a binary input symmetric q-ary output channel (BSQC) with synchronization errors, we refer to a channel which can be considered as a concatenation of two independent channels, as depicted in Fig. 1, such that the first one is a channel with only synchronization errors, with input sequence X and output sequence Y, and the second one is a BSQC with input sequence Y and output sequence Y^(q), where by a symmetric channel we refer to the definition given in [14, p. 94]. In other words, a channel is symmetric if, by dividing the columns of the transition matrix into sub-matrices, in each sub-matrix each row is a permutation of any other row and each column is a permutation of any other column.
For example, a channel with independent substitution, erasure and synchronization errors (sub/ers/synch channel) can be considered as a concatenation of a channel with only synchronization errors, with input sequence X and output sequence Y, and a substitution/erasure channel (binary input ternary output channel) with input sequence Y and output sequence Y^(3). In a substitution/erasure channel, each bit is independently flipped with probability p_s or erased with probability p_e, as illustrated in Fig. 2(a). Another example is a binary input symmetric quaternary output channel with synchronization errors, which can be decomposed into two independent channels such that the first one is a memoryless synchronization error channel and the second one is a memoryless binary input symmetric quaternary output channel, shown in Fig. 2(b).

Fig. 3. Synchronization error channel concatenated with a BI-AWGN channel.

B. BI-AWGN Channels with Synchronization Errors

In a BI-AWGN channel with synchronization errors, bits are transmitted using binary phase shift keying (BPSK) and the received signal contains AWGN in addition to the synchronization errors. As illustrated in Fig. 3, this channel can be considered as a cascade of two independent channels, where the first one is a synchronization error channel and the second one is a BI-AWGN channel. We use X̄ to denote the input sequence of the first channel, which is a BPSK-modulated version of the binary input sequence X, i.e., X̄_i = 1 − 2X_i, and Ȳ to denote the output sequence of the first channel and the input to the second one. Y is the output sequence of the second channel, i.e., the noisy version of Ȳ: Y_i = Ȳ_i + Z_i, where the Z_i's are i.i.d. zero-mean Gaussian random variables with variance σ², and Y_i and Ȳ_i are the i-th received and transmitted symbols of the second channel, respectively.

C. Simple Example of a Synchronization Error Channel Decomposition into Two Independent Channels

The procedure for finding the capacity bounds used in this paper can be employed for any channel which can be decomposed into two independent channels such that the first one is a memoryless synchronization error channel and the second one is a symmetric memoryless channel with no effect on the length of the input sequence. Therefore, if we can decompose a given synchronization error channel into two channels with the described properties, we can derive lower bounds on the capacity of the synchronization error channel. The advantage of this decomposition is that decomposing the original synchronization error channel into a well-characterized synchronization error channel and a memoryless channel can be done in such a way that lower bounding the capacity of the new synchronization error channel is simpler than the capacity analysis of the original synchronization error channel. In Table I, we provide an example of a hypothetical channel with synchronization errors that can be decomposed into a different synchronization error channel and a memoryless binary symmetric channel (BSC). In Table II, the two channels used in the decomposition are given.

III. ENTROPY BOUNDS FOR BINARY INPUT q-ARY OUTPUT CHANNELS WITH SYNCHRONIZATION ERRORS

In the following two lemmas, we provide a lower bound on the output entropy and an upper bound on the conditional output entropy of the binary input q-ary output channel in terms of the corresponding output entropies of the synchronization error channel, respectively.
We then give a proposition that will be useful in the proof of the result on BSQC channels with synchronization errors (note that the following two lemmas hold for any binary input q-ary output channel with synchronization errors, regardless of any symmetry).

Lemma 1. In any binary input q-ary output channel with synchronization errors and for all non-negative integer values of q, we have

where M is the random variable denoting the length of the received sequence, Y denotes the output sequence of the synchronization error channel and the input sequence of the binary input q-ary output channel, and Y^(q) denotes the output sequence of the binary input q-ary output channel.

Proof: By using two different expansions of H(Y^(q), M), we have
H(Y^(q), M) = H(Y^(q)) + H(M|Y^(q)) = H(M) + H(Y^(q)|M).
Hence, we can write
H(Y^(q)) = H(M) + H(Y^(q)|M),
where we used the fact that by knowing Y^(q), the random variable M is also known, i.e., H(M|Y^(q)) = 0. By using the same approach for H(Y), we have
H(Y) = H(M) + H(Y|M).
Finally, we can write

where p(m) = P(M = m). On the other hand, due to the definition of the entropy, we can write

where E_Z{.} denotes the expected value with respect to the random variable Z. Now, due to the fact that −log(x) is a convex function of x, we apply Jensen's inequality to write

By substituting this result into (6), the proof follows.

Lemma 2. In any binary input q-ary output channel with synchronization errors and for any input distribution, we have

where Y_j denotes the j-th output bit of the synchronization error channel and the j-th input bit of the binary input q-ary output channel, and Y_j^(q) denotes the output symbol of the binary input q-ary output channel corresponding to the input bit Y_j.

Proof: For the conditional output entropy, we can write

where the last equality follows since X → Y → Y^(q) forms a Markov chain. Therefore,

On the other hand, by using the fact that by knowing Y, M is also known, we have

Furthermore, since the second channel is memoryless, we obtain

which concludes the proof.

By combining the results of Lemmas 1 and 2, we obtain

which gives a lower bound on the mutual information between the transmitted and received sequences of the concatenated channel, I(X; Y^(q)), in terms of the mutual information between the transmitted and received sequences of the synchronization error channel, I(X; Y).

Proposition 1. If for any input distribution I(X; Y^(q)) ≥ I(X; Y) − NA, where A is a constant, then the capacities of the channels X → Y^(q) (denoted C_{X→Y^(q)}) and X → Y (denoted C_{X→Y}) satisfy C_{X→Y^(q)} ≥ C_{X→Y} − A.

Proof: Using the input distribution P(X) which achieves the capacity of the channel X → Y, we can write

Hence, for the capacity of the channel X → Y^(q), we have

Due to the result in (13) and the result of Proposition 1, the capacity of the concatenated channel can be lower bounded in terms of the capacity of the synchronization error channel and the parameters of the second (memoryless) channel.

IV. ACHIEVABLE RATES OVER BSQC CHANNELS WITH SYNCHRONIZATION ERRORS

In this section, we focus on BSQC channels with synchronization errors (as introduced in Section II-A) and provide lower bounds on their capacity. We first develop the results for the sub/ers/synch channel and the binary input symmetric quaternary output channel, respectively, and then give the results for general (odd and even) q.

A. Substitution/Erasure Channels with Synchronization Errors

The following theorem gives a lower bound on the capacity of the sub/ers/synch channel with respect to the capacity of the synchronization error channel.
In a sub/ers channel, every transmitted bit is either flipped with probability p_s, erased with probability p_e, or received correctly with probability 1 − p_s − p_e, independently of the other bits.

Theorem 1. The capacity of the sub/ers/synch channel, C_ses, can be lower bounded by

where C_s denotes the capacity of the synchronization error channel, r = lim_{n→∞} E{M}/n, and n and m denote the lengths of the transmitted and received sequences, respectively.

Before giving the proof of Theorem 1, we consider some special cases of this result. Since we have considered the general synchronization error channel model of Dobrushin [1], the lower bound (17) holds for many different models of channels with synchronization errors. A popular model for channels with synchronization errors is Gallager's ins/del model 1 , in which every transmitted bit is either deleted with probability p_d, or replaced with two random bits with probability p_i, or received correctly with probability 1 − p_d − p_i, independently of the other bits, while neither the transmitter nor the receiver has any information about the insertion and/or deletion errors. If we employ Gallager's model in deriving the lower bounds, we have for the parameter r

(18) r = E{|s_j|} = 1 + p_i − p_d,

where |s_j| denotes the length of the output sequence in one use of the ins/del channel, and the equality results since the channel is memoryless. By utilizing the result of (18) in (17), we obtain the following two corollaries.

Corollary 1. The capacity of the sub/ers/ins/del channel, C_seid, is lower bounded by

where C_id denotes the capacity of an insertion/deletion channel with parameters p_d and p_i.

Taking p_e = 0 in this channel model gives the ins/del/sub channel; hence we have the following corollary.

Corollary 2. The capacity of the ins/del/sub channel, C_ids, can be lower bounded by

To prove Theorem 1, we need the following two lemmas. In the first one, we give a lower bound on the output entropy of the sub/ers/synch channel related to the output entropy of the synchronization error channel, while in the second one we give an upper bound on the conditional output entropy of the sub/ers/synch channel, related to the conditional output entropy of the synchronization error channel.

Lemma 3. For a sub/ers/synch channel, for any input distribution, we have

where Y denotes the output sequence of the synchronization error channel and the input sequence of the substitution/erasure channel, and Y^(3) denotes the output sequence of the substitution/erasure channel.

Proof: Using the result of Lemma 1, we only need to obtain an upper bound on

for all values of m. On the other hand, for p(y^(3)|y, M = m), we have

where the primed symbols Y'_j are the flipped versions of the Y_j's; therefore, we can write

Note that in deriving the inequality in (7), the summation is taken over the values of y with p(y) ≠ 0. However, in (23) the summation is taken over all possible values of y of length m (over all m-tuples), i.e., both p(y) ≠ 0 and p(y) = 0, which results in the lower bound in (23). Furthermore, by using the fact that the probability of having j_1 erasures in a sequence of length m is equal to (m choose j_1) p_e^{j_1}(1 − p_e)^{m−j_1}, we obtain

By substituting this result into (2), we arrive at

concluding the proof. It is also worth noting that for any capacity-achieving input distribution over a discrete memoryless channel, (23) and (24) can be treated as equalities.

Lemma 4.
In any sub/ers/synch channel and for any input distribution, we have

Proof: Due to the result of Lemma 2 and the fact that in a substitution/erasure channel, regardless of the distribution of Y_j, we can write

hence the proof follows.

We can now complete the proof of the main theorem.

Proof of Theorem 1: By substituting the results of Lemmas 3 and 4 into the definition of mutual information, for the same input distribution given to both the synchronization error and sub/ers/synch channels, we obtain

By using the result of Proposition 1, the proof is completed.

B. Binary Input Symmetric Quaternary Output Channels with Synchronization Errors

In this subsection, we consider a binary input symmetric quaternary output channel with synchronization errors, as described in Section II.

Theorem 2. The capacity of the binary input symmetric quaternary output channel with synchronization errors, C_sq, can be lower bounded by

where C_s denotes the capacity of the synchronization-error-only channel, and r is as defined in (17).

Note that the presented lower bound is true for all memoryless synchronization error channel models. Therefore, similarly to the sub/ers/synch channel, we can specialize the results to the Gallager insertion/deletion channel, as given in the following corollary.

Corollary 3. The capacity of the binary input symmetric quaternary output channel with insertion/deletion errors (following Gallager's model), C_qid, is lower bounded by

To prove Theorem 2, we need the two lemmas below, where the first one gives a lower bound on the output entropy of the binary input quaternary output channel with synchronization errors related to the output entropy of the synchronization error channel, and the second one gives an upper bound on the conditional output entropy of the binary input quaternary output channel with synchronization errors, related to the conditional output entropy of the synchronization error channel.

Lemma 5. In any binary input quaternary output channel with synchronization errors and for any input distribution, we have

where Y denotes the output sequence of the synchronization error channel and the input sequence of the binary input quaternary output channel, and Y^(4) denotes the output sequence of the binary input quaternary output channel corresponding to the input sequence Y.

Proof: Similar to the proof of Lemma 3, we use the result of Lemma 1 by taking the summation over all possible sequences of length m, i.e., regardless of whether p(y) = 0 or p(y) ≠ 0, which results in a looser lower bound. On the other hand, for p(y^(4)|y, M = m), we have

where j_1 denotes the number of transitions 0 → 0⁻ or 1 → 1⁻, j_2 denotes the number of transitions 0 → 0⁺ or 1 → 1⁺, and j_3 denotes the number of transitions 0 → 1⁻ or 1 → 0⁻. There are

possibilities among all m-tuples (for y) such that d(y, y^(4))_{0→0⁻} = i_1, d(y, y^(4))_{0→0⁺} = i_2, d(y, y^(4))_{0→1⁻} = i_3 and d(y, y^(4))_{0→1⁺} = i_4. By defining m⁻(y^(4)) = #{t ≤ m | y

By taking the summation over all possible output sequences of length m, and using the fact that the probability of having the output y^(4) with length m containing

By substituting the result of (34) into the result of Lemma 1, we obtain

which concludes the proof.

Lemma 6. For a binary input quaternary output channel with synchronization errors, for any input distribution, we have

Proof: The claim follows by substituting the straightforward result for H(Y_j^(4)|Y_j).

We can now complete the proof of Theorem 2.
Proof of Theorem 2: Using the results of Lemmas 5 and 6, we obtain

Hence, due to the result in Proposition 1, the proof is complete.

C. Binary Input Symmetric q-ary Output Channel with Synchronization Errors (Odd q Case)

In this subsection, we consider a binary input symmetric q-ary output channel with synchronization errors for an arbitrary odd value of q, where we represent the transition probability values P(Y_j^(q)|Y_j); Table III shows the transition probabilities for a binary input 5-ary output channel. The main result on the BSQC channel with synchronization errors with odd q is a generalized version of the result in Theorem 1.

Theorem 3. The capacity of the BSQC channel with synchronization errors, C_Qs, for an odd q can be lower bounded by

where C_s denotes the capacity of the binary input synchronization error channel.

Proof: The proof of the theorem is given in Appendix A.

D. Binary Input Symmetric q-ary Output Channel with Synchronization Errors (Even q Case)

We now consider the generalization of the result of Theorem 2 for even q. For the transition probabilities of the binary input q-ary output channel, we define P(Y_j^(q)|Y_j); Table IV shows the transition probabilities for a binary input 6-ary output channel. The main result on the BSQC channel with synchronization errors for any q is given in the following theorem.

Theorem 4. The capacity of the BSQC channel with synchronization errors, C_Qs, for any even q can be lower bounded by

where C_s denotes the capacity of the binary input synchronization error channel.

Proof: The proof of Theorem 4 is given in Appendix B.

V. ACHIEVABLE RATES OVER BI-AWGN CHANNELS WITH SYNCHRONIZATION ERRORS

In this section, a binary synchronization error channel in the presence of AWGN is considered, as defined in Section II-B. We present two different lower bounds on the capacity of the AWGN/synch channel. Before giving the main results on the AWGN/synch channel, we would like to make some comments on its information stability.

A. Information Stability of Memoryless Discrete Input Continuous Output Channels with Synchronization Errors

It is shown in [15] that Shannon's theorem holds for any information stable channel. In [1], the information stability of memoryless discrete input discrete output channels with synchronization errors is proved, which shows that Shannon's theorem holds for such a channel. It can be observed that the proofs used in [1] can also be generalized to the continuous output case, as discussed in this section. To prove information stability, it is sufficient to prove the existence of the limit defining the information capacity of the channel, and the existence of an information stable sequence of two random variables X, Y which achieves the capacity of the channel. The only difference between the channel considered here and the channel considered by Dobrushin in [1] is that in the continuous output case the output symbols belong to an infinite set. However, this difference does not have any effect on the steps of the proofs. The existence of the limit in [1, Section IV] is proved based on the memoryless property of the channel, which also holds in the continuous output case. In the case of the existence of an information stable sequence achieving the capacity ([1, Section V]), there is no need to condition on the discrete output symbol values, and all the reasoning holds for the continuous output case as well.
The key point in the proof is that the channel is stationary, which also holds for the continuous output case, such that the same genie-aided channel as the one considered for the discrete output channel can also be considered in our case. The genie-aided channel is obtained by inserting markers into the transmission after each block of length k, where the entire length of the transmission is K = gk + l (l < k). The other point in the proof is the number of possibilities in converting the output of the original channel Y into the output of the genie-aided channel Ỹ. Since for the continuous output case we still have max_Ỹ |f^{−1}(Ỹ)|/g → 0 as g → ∞, the proof holds. Since both the capacity convergence and the existence of an information stable sequence which achieves the capacity remain valid in the continuous output case as well, we can conclude that memoryless discrete input continuous output channels with synchronization errors are also information stable and, as a result, Shannon's theorem applies to such a channel.

B. Capacity Lower Bounds for AWGN/Synch Channels

Here, we present two results on the capacity of an AWGN/synch channel. Both results are generalizations of results for the discrete output channels when the number of quantization levels goes to infinity. The first result is obtained by employing a uniform quantizer, while in deriving the second result a non-uniform quantizer is employed, which provides a tighter lower bound.

Theorem 5. The capacity of the AWGN/synch channel, C_As, can be lower bounded by

where C_s denotes the capacity of the synchronization error channel.

We give an outline of the proof and defer its details to Appendix C. We consider a quantized version of the output symbols via a 2M-level uniform quantizer with quantization intervals of length ∆, with M going to infinity and ∆ going to zero. Therefore, for p_m (m ∈ {−M, ..., −1, 1, ..., M}), which denotes the probability that the continuous output symbol Y_j is quantized to the bm-th quantization level (b ∈ {−1, 1}) conditioned on X̄_j = b being transmitted, we obtain

where Q(.) is the right tail probability of the standard normal distribution. By substituting (42) into the result of Theorem 4, we can write

Finally, by using the fact that when M → ∞ and ∆ → 0, we have

after some algebra (detailed in Appendix C), we obtain the result in (41).

Fig. 4. Symmetric non-uniform quantizer step sizes.

This result is obtained as a straightforward generalization of the discrete output channel results by employing a symmetric uniform quantizer, but the result may not be tight. For instance, for σ = 0, i.e., the noiseless scenario, the result does not match the trivial result, which is C_As = C_s for σ = 0. We expect that if we apply an appropriate non-uniform quantizer to the output symbols of the AWGN/synch channel, we can achieve a tighter lower bound on its capacity (which also agrees with the trivial result C_As = C_s for σ = 0). By using this idea, we present our main result on the capacity of an AWGN/synch channel in the following theorem, using a symmetric non-uniform quantizer.

Theorem 6. Let C_s denote the capacity of the synchronization error channel. Then, for the capacity of the AWGN/synch channel, C_As, we obtain

Proof: To prove the theorem, we first define an appropriate symmetric non-uniform quantizer with 2M quantization levels. Then, by letting M go to infinity and employing the result of Theorem 4, we complete the proof.
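Before the detailed construction below, here is a numerical sketch of the quantization step in Theorem 5: the bin probabilities p_m are evaluated with the Q-function, as in (42). Since (42) itself did not survive extraction, the bin-indexing convention used here (m-th positive bin = [(m−1)∆, m∆), last bin open) is an assumption; a quick consistency check is that the probabilities telescope to 1 − Q(1/σ), the mass of Y_j on the positive half-line given X̄_j = 1.

import math

def Q(x):
    # right tail probability of the standard normal distribution
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def uniform_bin_probs(M, delta, sigma):
    """p_m for m = 1..M given X-bar_j = 1: probability that Y_j = 1 + Z_j falls
    in the m-th positive bin of width delta (the last bin is open to +infinity).
    The bin convention is an assumption, since eq. (42) is not shown above."""
    probs = [Q(((m - 1) * delta - 1.0) / sigma) - Q((m * delta - 1.0) / sigma)
             for m in range(1, M)]
    probs.append(Q(((M - 1) * delta - 1.0) / sigma))  # open last bin
    return probs

p = uniform_bin_probs(M=64, delta=0.1, sigma=0.5)
print(round(sum(p), 5))  # -> 0.97725 = 1 - Q(1/0.5): mass on [0, +inf)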
In general, by utilizing any symmetric quantizer with 2M quantization levels on the output symbols Y_j, for the transition probabilities of the resulting binary input symmetric 2M-ary output channel, we have

where t_{−m} = −t_m, t_0 = 0 and t_{m−1} < t_m for m ∈ {1, ..., M}. We choose the quantization step sizes, i.e., ∆_m = t_m − t_{m−1} for m ∈ {1, ..., M}, to satisfy p_1 = p_2 = ... = p_M. Note that due to the symmetry of the quantizer, ∆_{−m} = ∆_m (as illustrated in Fig. 4). On the other hand, by defining P = Q(1/σ), we have

Using the results of (52) and (50), we obtain

There are no existing results on lower bounding the capacity of the ins/del/sub/ers and ins/del/AWGN channels; therefore, our results will provide a benchmark for these general cases.

A. Insertion/Deletion/Substitution Channel

In Table V, we compare the lower bound on the capacity of the ins/del/sub channel (20) with the existing lower bounds in [2,11] for several values of p_d, p_i and p_s. We employ the lower bound derived in [6] as the lower bound on the capacity of the deletion channel, and the lower bound in [11] as the lower bound on the capacity of the ins/del channel in (20). Note that, following the Gallager model in [2], the information rate of the overall channel is lower bounded for the input distribution which results in the tightest lower bound on the capacity of the ins/del channel. We observe that for p_i = 0, a fixed p_d and small values of p_s, the lower bound (20) improves upon the lower bound given in [11]. This is not unexpected, because for small values of p_s the input distribution achieving the capacity of the i.i.d. deletion channel is not far from the optimal input distribution of the del/sub channel. We also observe that the lower bound (20) outperforms the lower bound given in [2]. However, for the case p_i ≠ 0, it does not improve upon the lower bound given in [11], since as the lower bound on the capacity of the ins/del channel we used the result in [11] and lower bounded it further to obtain the lower bound on the capacity of the overall channel.

B. Deletion/AWGN Channel

We observe from Fig. 5 that the lower bound (44) is far away from the simulation-based results of [13] for large σ² values and small deletion probabilities. This is not unexpected, because in [13] the achievable information rates for i.u.d. input sequences are obtained (through lengthy Monte Carlo simulations), and i.u.d. inputs are close to optimal. However, the procedure employed in [13] is only useful for computing capacity lower bounds for small values of deletion probabilities, e.g., p_d ≤ 0.1, while the lower bound in (29) holds for the entire range of deletion probabilities, by employing any lower bound on the capacity of the deletion channel in lower bounding the capacity of the deletion/AWGN channel. We also observe that, since in deriving the lower bound (44) on the capacity of the deletion/AWGN channel we employ the tightest lower bound presented on the capacity of the deletion channel, for small values of σ², the lower bound (44) improves upon the lower bound given in [13].

VII. SUMMARY AND CONCLUSIONS

In this paper, we presented several lower bounds on the capacity of binary input symmetric output channels with synchronization errors, in addition to substitutions, erasures or AWGN.
We showed that the capacity of any channel with synchronization errors which can be considered as a cascade of two channels (where only the first one suffers from synchronization errors and the second one is a memoryless channel) can be lower bounded in terms of the capacity of the first channel and the parameters of the second channel. We considered two classes of channels: binary input symmetric q-ary output channels (e.g., for q = 3, a binary input channel with substitutions and erasures) with synchronization errors and BI-AWGN channels with synchronization errors. We gave the first lower bound on the capacity of the substitution/erasure channel with synchronization errors and the first analytical result on the capacity of the BI-AWGN channel with synchronization errors. We also demonstrated that the lower bounds developed on the capacity of the del/AWGN channel for small σ² values and on the del/sub channel for small values of p_s improve upon the existing results.

APPENDIX A: PROOF OF THEOREM 3

We first give a lower bound on the output entropy of the binary input q-ary output channel with synchronization errors related to the output entropy of the binary synchronization error channel, and then give an upper bound on the conditional output entropy of the binary input q-ary output channel with synchronization errors related to the conditional output entropy of the binary synchronization error channel.

Lemma 7. For a binary input q-ary output channel with synchronization errors, for any input distribution and any odd q, we have

where Y denotes the output sequence of the synchronization error channel and the input sequence of the binary input symmetric q-ary output channel, and Y^(q) denotes the output sequence of the binary input symmetric q-ary output channel.

Proof: For p(y^(q)|y, M = m), we have

where j_k denotes the number of transitions b → k_b. E.g., in a binary input 5-ary output channel we have p(−1102|1111) = p_{−1}p_1p_0p_2. Therefore, for a fixed output sequence y^(q) of length m with j_k symbols equal to k, since there are

the proof of Theorem 3 is complete.

APPENDIX B: PROOF OF THEOREM 4

We need the following two lemmas to prove Theorem 4. In the first one, a lower bound on the output entropy of the binary input q-ary output channel with synchronization errors is derived, relating it to the output entropy of the binary synchronization error channel. In the second one, we give an upper bound on the conditional output entropy of the binary input q-ary output channel with synchronization errors related to the conditional output entropy of the binary synchronization error channel. By employing the results of the two following lemmas and using the same approach as in the proof of Theorem 2, Theorem 4 is proved.

Lemma 9. For a binary input q-ary output channel with synchronization errors, for any input distribution and any even q, we have

where Y denotes the output sequence of the synchronization error channel and the input sequence of the binary input symmetric q-ary output channel, and Y^(q) denotes the output sequence of the binary input q-ary output channel.

where j_k denotes the number of transitions b → k_b. For instance, in a binary input 6-ary output channel we have p(−11−32|1111) = p_{−1}p_1p_{−3}p_2. On the other hand, for a fixed output sequence y^(q) of length m with j_k symbols equal to k, there are ∏_{k=1}^{q/2} (j_k choose i_k)(j_{−k} choose i_{−k}) possibilities for y such that d(y, y^(q))_{b→k_b} = i_k.
By defining m_k(y^(q)) = #{t ≤ m | y_t^(q) = k}, and furthermore by taking the summation over all the possibilities of y^(q) in (64), we obtain

Σ_{y^(q)} p(y^(q)|M = m) Σ_{y : p(y) ≠ 0} p(y^(q)|y, M = m) ≤ Σ_{y^(q)} p(y^(q)|M = m)

which concludes the proof.

Proof: The proof is similar to the proof of Lemma 8. For large M, we have p_m ≅ f(1 − m∆)∆, with the understanding that the approximation becomes exact as
2012-03-28T23:16:56.000Z
2012-03-28T00:00:00.000
{ "year": 2014, "sha1": "404e00d90cd9be27fa755456e38c40b7e44d602e", "oa_license": null, "oa_url": "http://repository.bilkent.edu.tr/bitstream/11693/12733/1/8340.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "6399a0d8f927d88c8e712aee2ea9fb63b5a416d9", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Mathematics", "Computer Science" ] }
14346449
pes2o/s2orc
v3-fos-license
The evolutionary landscape of intergenic trans-splicing events in insects

To explore the landscape of intergenic trans-splicing events and characterize their functions and evolutionary dynamics, we conduct a mega-data study of a phylogeny containing eight species across five orders of class Insecta, a model system spanning 400 million years of evolution. A total of 1,627 trans-splicing events involving 2,199 genes are identified, accounting for 1.58% of the total genes. Homology analysis reveals that mod(mdg4)-like trans-splicing is the only conserved event that is consistently observed in multiple species across two orders, which represents a unique case of functional diversification involving trans-splicing. Thus, evolutionarily, its potential for generating proteins with novel functions is not broadly utilized by insects. Furthermore, 146 non-mod trans-spliced transcripts are found to resemble canonical genes from different species. Trans-splicing preserving the function of 'breakup' genes may serve as a general mechanism for relaxing the constraints on gene structure, with profound implications for the evolution of genes and genomes.

In eukaryotes, to form mature messenger RNA, the genetic code stored in non-contiguous units (exons) on nuclear DNA must be joined through a process called splicing. The splicing mechanism carries great potential for expanding the diversity and complexity of the eukaryotic transcriptome and proteome 1,2 . Compared with the commonly observed intra-molecular cis-splicing, intergenic trans-splicing, in which exons from two independent primary transcripts are joined, is less well studied. Its functional significance remains largely unknown, despite numerous cases documented in organisms from lower (such as Caenorhabditis elegans 3,4 ) to higher eukaryotes, including Drosophila 5,6 and mammalian species 7,8 . In addition to the fact that many trans-splicing events in mammals are associated with malignant tissues, data from pilot ENCODE studies and other works have revealed the prevalence of chimeric RNAs in both normal tissues and transformed cell lines 9 . Other analyses also reported the involvement of an unexpected number of genes in ostensible trans-splicing events in mammalian cells 10,11 . One possible interpretation of these data is that these genes contributed to the existence of widespread trans-splicing activity in mammalian cells. Whether this phenomenon is functionally significant or represents background 'splicing noise' has yet to be determined. Nonetheless, spliceosome-mediated pre-mRNA trans-splicing has been utilized to repair endogenous RNA species for the treatment of inherited and acquired diseases as a therapeutic application 12 .

Compared with trypanosomes and nematodes, in which frequent trans-splicing events involve a common short spliced leader 3,[13][14][15] , intergenic trans-splicing events in insects are rare. The two best documented cases are mod(mdg4) (short for 'modifier of mdg4') and lola (longitudinals lacking) in Drosophila 5,6,16,17 . Mod(mdg4) produces at least 31 splicing isoforms that share a common 5′ exon encoding the N-terminal BTB domain 17,18 . The 3′ variable region comes from independent primary transcripts, some located on the strand opposite the 5′ exons. The C2H2-type zinc-finger motif, called a FLYWCH domain, is found in most 3′ variable regions 18 . Mod(mdg4) has been implicated in position effect variegation, establishment of chromatin boundaries, nerve pathfinding, meiotic chromosome pairing and apoptosis 18 .
Mod(mdg4) has been implicated in position effect variegation, establishment of chromatin boundaries, nerve pathfinding, meiotic chromosome pairing and apoptosis 18 . Lola is an example of inter-allelic trans-splicing in Drosophila, as demonstrated by complementation tests and fly hybrid analysis using different allelic markers 19 . In addition, a recent study employing high-throughput transcriptome analysis identified additional inter-allelic trans-splicing events in flies, revealing more frequent inter-allelic events 16 . The advance of high-throughput RNA-sequencing (RNA-seq) technology 2,20-22 has enabled genome-wide analyses of trans-splicing events and has helped to reveal novel events in the silkworm Bombyx mori 23 and in a variety of human cell types 9,24 . Furthermore, multi-lab collaborative projects such as FANTOM 25 and modENCODE 26 , and public data caches such as NCBI SRA 27 , have generated and shared unprecedented resources, making large-scale evolutionary studies possible. To explore the landscape of trans-splicing events and to characterize their evolutionary dynamics and functions, we assemble a mega-data study of a phylogeny of eight species across five orders of class Insecta, representing a model system spanning 400 million years of evolution. Our goals are to address several essential questions concerning intergenic trans-splicing: (i) whether intergenic trans-splicing is functionally significant or merely represents splicing noise; (ii) to what extent intergenic trans-splicing events have impacted the transcriptome and proteome of an organism; (iii) how trans-splicing events originate and evolve; and (iv) whether and how trans-splicing is engaged in diversifying gene functions. With the availability of reference genome and transcriptome sequencing resources, we are able to perform a comprehensive screening of trans-splicing events across multiple insect orders and species. We have identified a total of 1,627 intergenic trans-splicing events, and many of those found in Drosophila melanogaster and Danaus plexippus are randomly selected for experimental validation. By systematically characterizing trans-splicing events in related insect species, we reveal the global profile and proteomic impact of these events, and also obtain crucial evidence about the origin and functions of intergenic trans-splicing events in insects. These findings deepen our understanding of the effect of trans-splicing on the molecular evolution of genes.

Results

Genome-wide identification of intergenic trans-splicing. To characterize the landscape of intergenic trans-splicing events and to understand their evolution, we perform evolutionary analysis on well-defined insect lineages that offer some unique advantages for our study. Following careful research, we make use of a collection of eight representative species across five orders of class Insecta, including the Dipterans D. melanogaster and Aedes aegypti, the Lepidopterans B. mori and D. plexippus, the Coleopteran Tribolium castaneum, the Hymenopterans Apis mellifera and Camponotus floridanus, and the Hemipteran Acyrthosiphon pisum (see Methods for details and Fig. 1). In addition to their relationships within a clearly defined time-frame (refer to TIMETREE 28 ; Fig. 1), a well-annotated genome and high-throughput RNA-seq data have been made available for each of these species. Thus, this phylogeny forms an ideal system for the systematic characterization of trans-splicing events.
To identify candidate events, we first build an integrated pipeline modified from our previous work 23 (see Methods for details). To eliminate false positives, we specify stringent criteria in our screening. For example, a trans-splicing event requires the support of both sequence reads covering the junction site and paired-end (PE) reads bridging exons of two different genes. A total of 1,627 trans-splicing candidate events are identified from the phylogeny, ranging from 20 in C. floridanus to 554 in A. pisum (Fig. 1 and Supplementary Data 1). Originally being parts of canonical gene models, the exonic elements involved in trans-splicing events present an intriguing question as to whether related splice sites are used simultaneously for both cis- and trans-splicing events. Notably, we find that elements in 1,268 out of 1,627 (77.93%) trans-splicing events are also involved in cis-splicing. In 859 events, both arms flanking the junction sites are found in cis-spliced products at the same time, whereas in the other 409 events one of the arms is involved in cis-splicing (Supplementary Data 1). To validate candidate trans-splicing events and estimate the success rate of our identification process, we scrutinize the subsets from D. melanogaster, D. plexippus and B. mori. First, we compare our identified events in D. melanogaster and B. mori with those confirmed by previous studies 5,18,23 . All nine mod(mdg4) trans-spliced isoforms joining exons located on opposite strands in D. melanogaster 5,18 are identified by our pipeline (Supplementary Table 1). However, the 22 mod(mdg4) isoforms joining exons on the same strand are correctly not included in our set, as overlapping or neighbouring genes on the same strand are excluded by our filter. In B. mori, 7 of 9 mod(mdg4) and 9 of 12 non-mod trans-spliced products identified by Shao et al. 23 are also found in our set. We look further into the two mod and three non-mod events previously reported for B. mori that have been missed by our pipeline. They are found to be supported by fewer than two chimeric reads and thus are not called in our search because of the more stringent thresholds instituted by our study compared with earlier work. Second, we take an experimental approach to examine our trans-splicing events in the fly D. melanogaster and the butterfly D. plexippus by reverse transcription-PCR (RT-PCR) and Sanger sequencing. It has been reported previously that template switching in reverse transcription can produce artefacts mistaken for trans-spliced sequences, which are mostly non-canonical and reverse transcriptase (RTase) dependent 29 . To mitigate the issue caused by template switching, we employ RT-PCR using Moloney Murine Leukemia Virus (MMLV)-derived RTase and Avian Myeloblastosis Virus (AMV)-derived RTase in parallel experiments. Experiments with different RTases were shown to be an effective diagnostic tool for template-switching artefacts, and the two-RTase routine was found to produce results consistent with the RNase protection assay 24 . We first calibrate our protocol using the six D. melanogaster mod(mdg4) trans-spliced isoforms as positive controls (see Methods for details). Our method is able to amplify and sequence-verify all six isoforms from mixed fly samples (Supplementary Fig. 1).
Then, we randomly pick 17 additional trans-splicing events from our D. melanogaster subset for experimental validation, 11 (64.7%) of which are found to produce the predicted PCR-amplification products verified by Sanger sequencing (Supplementary Fig. 1; Supplementary Tables 2 and 3). In D. plexippus, 14 of 18 (77.8%) predicted mod(mdg4) trans-splicing isoforms, and 10 of 10 (100%) non-mod trans-splicing products, are validated from D. plexippus mixed tissue samples (Supplementary Table 2 and Supplementary Fig. 1). Notably, our validation rates are in line with those for B. mori 23 , and all experimentally validated D. melanogaster and D. plexippus trans-splicing events are found to ligate at the precise splicing-junction sites confirmed by Sanger sequencing, demonstrating the efficiency and consistency of our approach and strengthening confidence in our analysis pipeline.

Global profile of intergenic trans-splicing events. The 1,627 identified trans-splicing events involve 2,199 of 139,446 total genes from the eight insect species. The small fraction (~1.58%) of genes involved indicates a low trans-splicing activity. To characterize the trans-splicing events and their distribution pattern, the identified events are analysed using their respective reference genomes. In a trans-splicing event, the donor gene and the acceptor gene are defined as the gene that 'donates' the upstream exon and the gene that provides the downstream exon to be joined to ('accept') the donor, respectively. Based on the chromosomal or scaffold locations and transcript orientations of the donor and acceptor genes, the 1,627 events are grouped into four types (Fig. 2a). To detect a possible mix-up between canonical cis-splicing genes and types SmDA and DfChr, the distances between exons in cis-splicing are compared with the distances between donors and acceptors in types SmDA and DfChr. Note that the distances between donors and acceptors in type DfChr are computed for those on different scaffolds of unknown chromosome location (refer to the Fig. 2a legend). The median distance between donor and acceptor genes (~46.4 kb for SmDA and ~606 kb for DfChr) is significantly greater than that between canonical exons (~0.21 kb; Fig. 2b and Supplementary Fig. 2). This result indicates that the likelihood of false positives due to incomplete genome assembly data is very low. Notably, the greater distance bridged by the trans-splicing mechanism, as quantified by our statistics, represents an intriguing phenomenon in RNA processing. The sizes of the trans-spliced transcripts are analysed and compared with those of cis-spliced transcripts. The median size of the trans-splicing products is significantly larger, mostly due to the size of the 3′ acceptor (Fig. 2c). These attributes of the trans-spliced transcripts are consistent among the eight investigated species (Supplementary Fig. 3). Upon comparing the exons of the cis-spliced and trans-spliced products, we find a higher number of exons in the trans-spliced products than in the cis-spliced ones (Fig. 2d). Moreover, the last exon of the donor and the first exon of the acceptor are the least retained in trans-spliced products. The nucleotide composition surrounding the trans-splicing sites is summarized in Fig. 2e, showing a conserved pattern with the signature GU-AG consensus 30 similar to that in cis-splicing. This common pattern supports the view that both trans- and cis-splicing utilize the same splicing machinery 12,31 .
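The GU-AG consensus can be checked directly against the genome: at the DNA level, an intron obeying the consensus begins with GT immediately after the donor exon and ends with AG immediately before the acceptor exon. Below is a minimal Python sketch of such a check; the 0-based plus-strand coordinate convention and the toy scaffolds are illustrative assumptions, not the pipeline's actual code.

```python
from typing import Dict

def donor_is_gt(genome: Dict[str, str], chrom: str, exon_end: int) -> bool:
    """Check the two intronic bases immediately after the donor exon (expect 'GT')."""
    return genome[chrom][exon_end + 1 : exon_end + 3] == "GT"

def acceptor_is_ag(genome: Dict[str, str], chrom: str, exon_start: int) -> bool:
    """Check the two intronic bases immediately before the acceptor exon (expect 'AG')."""
    return genome[chrom][exon_start - 2 : exon_start] == "AG"

# Toy example: donor and acceptor arms on different scaffolds, as in a
# trans-splicing candidate. The donor exon occupies positions 0-4 of its
# scaffold; the acceptor exon starts at position 5 of its scaffold.
genome = {
    "scaffold_donor": "ACCGA" + "GTAAG",     # exon + start of downstream intron
    "scaffold_acceptor": "CCTAG" + "TTGCA",  # end of upstream intron + exon
}
print(donor_is_gt(genome, "scaffold_donor", 4),
      acceptor_is_ag(genome, "scaffold_acceptor", 5))  # True True
```

For a trans-splicing candidate, each arm would be checked in its own genomic context, since donor and acceptor can sit on different chromosomes or scaffolds.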
To evaluate the impact on the proteome of the formation of chimeric protein species through trans-splicing events, we analyse the proteins translated from the trans-spliced transcripts. Overall, 63% (1,027/1,627) of the trans-splicing events contribute to the formation of new chimeric proteins (Fig. 2f and Supplementary Data 2). Assuming a single protein for each canonical gene, these 1,027 new chimeric proteins amount to ~0.73% of all encoded protein species. Notably, 67.3% (691/1,027) of the chimeric proteins are translated in the same frame as both the donor and acceptor genes. The bias towards joining two existing CDSs is significant (χ² test: P-value < 2.2E-16). This global profile of intergenic trans-splicing events, which to our knowledge is the first completed across multiple orders of insects, illustrates both similarities to and differences from cis-splicing.

Conserved trans-splicing events are rare except mod(mdg4). To understand how the trans-splicing events are related and how they evolved, we build a homology matrix via pairwise comparison using BLASTP among the 1,027 trans-spliced coding transcripts (see Methods for details). By definition, a conserved trans-splicing event requires both peptides encoded by its donor and acceptor segments to match those of events from different species. Our stringent criteria exclude cases of partial matching by either the donor or the acceptor segment alone. The matrix is then screened for conserved events, and surprisingly, with the exception of mod(mdg4), no other events are found to meet our criteria (Supplementary Table 4). For the 23 conserved mod(mdg4)-like events, 9 were previously reported for B. mori and D. melanogaster (Supplementary Table 1), whereas the remaining 14 are new mod(mdg4)-like events discovered in A. aegypti and D. plexippus. These events represent a unique case of intergenic trans-splicing conserved in insects. Therefore, it is imperative that we characterize their activity more broadly for class Insecta to address the question of how mod(mdg4)-like trans-splicing events originated and evolved. To this end, we add two more species, Plutella xylostella and Anopheles gambiae, to our analysis. The structure of mod(mdg4)-like gene loci can be divided into two groups based on configuration. In D. melanogaster, B. mori, D. plexippus and possibly P. xylostella, the donor and acceptor genes are on the same chromosome, but the acceptors' exons can be found on both DNA strands, either on the same strand as or opposite to the donor exon (Fig. 3a). In contrast, A. gambiae exhibits cases in which both the donor and acceptor exons of mod(mdg4) are located on the same DNA strand 17 . The A. gambiae mod(mdg4) gene is likely to undergo trans-splicing because of the similar donor-acceptor proximity and exon-intron structure between the A. gambiae and D. melanogaster mod(mdg4) loci 5,17 . A. aegypti, another mosquito species, may have the same configuration, assuming that its mod(mdg4) donor and acceptors are located on the same chromosome (Fig. 3a). The distances between the donor and the acceptors range from 950 kb to 1.28 Mb. The observation of two distinct patterns at mod(mdg4) loci indicates that a structural rearrangement occurred within the Dipteran lineage after the divergence of flies and mosquitoes. Mod(mdg4)-like trans-splicing in D. plexippus is an intriguing and novel case.
It presents at least 40 different isoforms stemming from alternative exon combinations from 24 acceptor genes located on scaffold DPSCF300079, distributed over a locus of 550 kb (Fig. 3b). In total, 7 and 33 isoforms take their acceptor exons from the same strand as or the opposite strand to the donor exons, respectively. However, notably different from the other species, D. plexippus has a special acceptor gene, DPOGS208298, located on the opposite strand upstream of the donor gene (Fig. 3b; marked as 8298). Furthermore, frequent alternative trans-spliced mod(mdg4) transcripts are observed in D. plexippus. For example, acceptor gene DPOGS208282 has nine alternative isoforms with variable combinations of exons (Fig. 3b and Supplementary Table 5). To verify the presence of alternative trans-spliced transcripts, we have performed RT-PCR/sequencing analysis with specific primers designed for 18 of the mod(mdg4) isoforms. Fourteen are validated by PCR-amplification products of the predicted size (Fig. 3b, marked with orange arrows; Supplementary Fig. 1). It is worth mentioning that the trans-splice sites and the surrounding sequences in mod(mdg4) are not conserved between insect species, because they are located outside the conserved BTB and FLYWCH domains. Daphnia has a canonical cis-spliced mod(mdg4)-like homologue (Supplementary Fig. 4). The same is true for species of the Coleopteran and Hymenopteran lineages, including T. castaneum, A. mellifera and C. floridanus (Supplementary Fig. 4). To understand the evolutionary trajectory of mod(mdg4), we analyse the donor and acceptor genes that code for protein sequences. Although the number of donor exons varies, the N-terminal segment of the mod(mdg4) protein contains a highly conserved BTB domain (~110 amino acids; see Methods for details and Fig. 4a), in which the homology between the N-termini of mod(mdg4) proteins from different species is primarily found (Fig. 4b), implying a conserved function and critical role for the BTB domain. It is worth noting that the BTB domain has a zinc finger structure that can mediate protein-protein or protein-DNA interactions 33 . The C-terminus of the mod(mdg4) protein is highly variable in size, depending on the combination of exons retained from acceptor genes. The FLYWCH domain (~60 amino acids) is found in most acceptor isoforms (for example, 26 of 31 in D. melanogaster, 6 of 9 in B. mori, 25 of 34 in A. aegypti, 37 of 41 in P. xylostella and 37 of 40 in D. plexippus). To reveal its evolutionary path, consensus regions of the aligned FLYWCH domains are used to construct a phylogenetic tree using the Maximum Likelihood (ML) approach, in which the D. pulex domain is used as the outgroup (see Methods for details; Fig. 4c and Supplementary Fig. 5). The result of this analysis places the FLYWCH domains into two distinct groups between Dipterans and Lepidopterans, in which clusters containing FLYWCH domains from closely related species have formed. The duplications of the FLYWCH domain occurred separately within the Dipteran and Lepidopteran lineages after their divergence, because few interwoven FLYWCH domains are present across the two orders. Then, along the way, several rounds of duplication took place in the lineages leading to D. melanogaster, B. mori, A. aegypti, D. plexippus or P. xylostella. Consistent with this hypothesis, FLYWCH domains clustered in one branch often correspond to tandem repeats of exons at the same acceptor gene locus (Fig. 4c).

Non-mod trans-spliced products resemble regular transcripts.
Because non-mod events represent the majority of trans-splicing events identified, it is of particular interest to determine how these events originated and evolved. Thus, we perform a pairwise comparison via BLASTP between the non-mod trans-spliced coding transcripts and canonical genes from the eight species (see Methods for details). Although the non-mod trans-spliced products do not match among themselves, a striking number have homologues among canonical genes in different species (Table 1). Under stringent thresholds of 50% identity and 90% coverage between matching transcripts, 146 non-mod trans-spliced transcripts find at least one match in different species. In other words, these non-mod trans-spliced transcripts resemble canonical genes from other species. This intriguing finding suggests that a significant fraction of trans-splicing events are likely the result of regular cis-splicing gene 'break-up' (Supplementary Data 3). A similar phenomenon was observed in C. elegans, in which the trans-spliced product of eri-6 and eri-7 was identical to the single contiguous gene, CBG03999, in Caenorhabditis briggsae 4 . The relationship between a trans-spliced transcript and its canonical gene homologues in other species is exemplified by the trans-splicing observed between BGIBMGA008294_exon4 and SIBSBM001135_exon2 in B. mori, which was validated experimentally by allele-specific RT-PCR 23 . Besides the trans-spliced form, no canonical transcript involving either the donor or acceptor is found in B. mori using RNA-seq data. However, its homologous cis-spliced canonical gene is found in daphnia (JGI_V11_211767) and 12 other insects, including Culex pipiens, A. aegypti, A. gambiae, D. melanogaster, P. xylostella, Heliconius melpomene, D. plexippus, T. castaneum, A. mellifera, Acromyrmex echinatior, C. floridanus and A. pisum (Fig. 5a and Supplementary Data 3). The BGIBMGA008294_exon4::SIBSBM001135_exon2 transcript in B. mori bears an exon-intron structure similar to that of other Lepidopterans, including P. xylostella (14 exons), H. melpomene (13 exons) and D. plexippus (13 exons). These trans-splice sites are also conserved in the most closely related species. However, gene structure becomes dramatically different as the phylogenetic distance between species increases. Homologues of the Hemipteran A. pisum and the Dipterans C. pipiens, A. aegypti, A. gambiae and D. melanogaster have exon counts of 2, 3, 3, 2 and 4, respectively, indicating that an exon-fusion process took place within the Hemipteran and Dipteran lineages. Given that the outgroup daphnia homologue, JGI_V11_211767, has 16 exons, this family of genes is likely to have undergone multiple rounds of exon breakup and fusion during the process of lineage expansion. Thus, we conclude that trans-splicing between BGIBMGA008294_exon4 and SIBSBM001135_exon2 originated in the B. mori lineage, where the breakup of the canonical homologue took place. Our analysis of the structural history of trans-spliced transcripts and their canonical gene homologues reveals a picture of frequent structural shuffling in parallel with the expansion of the insect lineage. Based on this observation, we propose that intergenic trans-splicing serves to preserve the function of broken genes following genomic mishaps. Trans-splicing allows for the breakup of genes without a complete loss of function, which increases the tolerance to genome rearrangement and structural changes during evolution.
This may be a general mechanism to relax constraints on gene structure and thus would exert a profound effect on the evolution of genes and genomes. Such a function may bear special significance in the case of rapidly expanding organisms such as insects, whose genomes continue to evolve rapidly. Although our pipeline cannot differentiate trans-splicing events between alleles of the same gene (that is, inter-allelic), which would require prior knowledge of distinguishable allelic features, we reason that between-paralogue trans-splicing events would resemble those attributed to inter-allelic trans-splicing. By screening pairs of donor and acceptor genes in trans-spliced coding transcripts from the same species, we find that fewer than 5.8% of cases (60/1,027) possess homology between donor and acceptor sequences (Supplementary Table 6). Thus, trans-splicing events between likely paralogues account for a small fraction of the total events identified via our pipeline. We select some of these cases for visual evaluation. For example, the D. melanogaster calcium ion-binding protein genes TpnC73F and TpnC47D are paralogues located on chromosomes 3L and 2R, respectively. Our analysis finds that the TpnC73F transcript, NM_079398, donates the first three exons to the last two exons of the TpnC47D transcript, NM_057620, thereby forming a five-exon trans-spliced product (Fig. 5b) coexisting with their cis-spliced forms. RNA-seq reads covering the junction site reveal the precise joining of two exons, with source-specific single-nucleotide variants aligned on both arms. The data on between-paralogue trans-splicing events support the inter-allelic results previously reported for Drosophila 16,19 , suggesting that both between-paralogue and inter-allelic trans-splicing may take place via a similar mechanism. We further speculate that between-paralogue and inter-allelic trans-splicing events may play a role in creating a new mix of functional products.

Discussion

To address the question of how intergenic trans-splicing events originate and evolve and to what extent they impact the proteome, we assemble a mega-data analysis on a phylogeny of eight species spanning multiple orders in class Insecta, significantly expanding the scope of trans-splicing study. By design, this model system represents the most explosive expansion of living organisms on the Earth, spanning an evolutionary timeframe of ~400 million years. Interrogation of this model system provides us with a historic view of the landscape of intergenic trans-splicing events and deepens our understanding of the function and evolutionary dynamics of trans-splicing. A total of 1,627 trans-splicing events involving 2,199 genes are identified. Consistent with previous results 9,11 from the ENCODE project and other studies, we observe the involvement of a small fraction (1.58%) of total genes, accounting for 0.73% of encoded protein species. The low occurrence of 'background' events does not support the existence of widespread trans-splicing activity in insects, in agreement with an early study performed in Drosophila 16 . However, our evidence also argues against the hypothesis that intergenic trans-splicing events are merely 'splicing noise.' Mod(mdg4)-like trans-splicing events are found in multiple species across Diptera and Lepidoptera, which share a common ancestor from which they diverged ~360 million years ago.
It is likely that the elements for trans-spliced mod(mdg4) originated from a canonical gene in their common ancestor and were separated into donor and acceptor genes during the expansion and divergence of the insect lineages. The start of mod(mdg4)-like trans-splicing was followed by the duplication and expansion of the acceptor genes encoding the FLYWCH domain. The expansion of acceptor genes enlarged the repertoire of mod(mdg4) isoforms that might be used for diversified functions, which in turn enabled their complex regulation during the development and adaptation of these insects. Notably, this unique case of generating functional diversity through intergenic trans-splicing might not be achievable through alternative cis-splicing because of limitations imposed on canonical cis-splicing genes. Remarkably, mod(mdg4)-like trans-splicing remains the only event that is conserved and engaged in the diversification of gene function in insect lineages, despite the fact that the trans-splicing phenomenon has been observed in organisms as early as the primitive metazoans (for example, the nematode C. elegans) 3,4 . Taken together, our data and other studies indicate that intergenic trans-splicing appears to be tightly controlled through evolution. Its potential to create new functions by combining remote coding exons is not broadly utilized by insects throughout their evolutionary history. The finding that some non-mod trans-spliced transcripts resemble canonical cis-splicing genes from different species is somewhat unexpected. Albeit relatively small in number, these transcripts present strong evidence that trans-splicing acts to preserve occasionally broken genes during the structural shuffling of the genome and during exon-breakup-fusion events. In this scenario, the trans-splicing mechanism in insects acts more as a 'saviour' than a 'creator' of gene function. Trans-splicing appears to increase the tolerance to genome structural changes during evolution by relaxing constraints on gene structure, which in turn has profound implications for the evolution of genes and genomes.

Methods

Reference genome and collection of RNA-seq data. Insect genome reference sequences are retrieved from various public databases. They include R5/dm3 from UCSC 34 .

High-throughput RNA-seq data are obtained from the NCBI Sequence Read Archive 46 (SRA; http://www.ncbi.nlm.nih.gov/sra), except where specifically indicated otherwise. The RNA-seq data for D. melanogaster are obtained from 30 developmental stages of the embryo, pupae, larvae, adult male and adult female 47 . Their accession codes are: SRX007811, SRX008008, SRX008012, SRX008016, SRX008020, SRX008024, SRX008028, SRX008157, SRX008005, SRX008009, SRX008013, SRX008017, SRX008021, SRX008025, SRX008029, SRX010758, SRX008006, SRX008010, SRX008014, SRX008018, SRX008022, SRX008026, SRX008155, SRX012269, SRX008007, SRX008011, SRX008015, SRX008019, SRX008023, SRX008027, SRX008156, SRX012270 and SRX012271. The RNA-seq data for A. aegypti are obtained from four developmental samples, including male testes, male carcass, female ovary and female carcass 48 . Their accession codes are: SRX316667, SRX316704, SRX316706 and SRX316705. The RNA-seq data for B. mori are obtained from 77 mixed samples of various developmental stages 23 . The accession code is SRX084698. The RNA-seq data for D. plexippus are obtained from mixed samples of all developmental stages 49 . Its accession code is SRX191135.
The RNA-seq data for P. xylostella are obtained from four developmental stages of egg, larva, pupa and adult 50 . Their accession codes are SRX056231, SRX056232, SRX056233 and SRX056234. The RNA-seq data for T. castaneum are obtained from whole larvae 51 . Its accession code is ERP001667. The RNA-seq data for A. mellifera are obtained using bees from colonies of single-drone-inseminated queens 52 . Their accession codes are SRR498622, SRR499808, SRR499882, SRR499883, SRR499919, SRR499920, SRR499992, SRR499993 and SRR499995. The RNA-seq data for C. floridanus are obtained from whole-body samples of a queen and a virgin queen 53 . The accession codes are SRX091808 and SRX091809. The RNA-seq data for A. pisum are obtained from two samples taken during the late steps of sexual embryogenesis. The accession codes are SRX040564 and SRX040565 (ref. 54).

Bioinformatics methods for identifying trans-splicing events. Low-quality reads are filtered using a previously described protocol 23 with the following modifications. First, reads are only retained if at least 60% of the bases have a quality score ≥20. Second, both the 5′ and 3′ ends of the reads are trimmed to remove bases with a low quality score (<20). Third, trimmed reads with more than one 'N' are discarded. The quality of each insect genome is assessed by mapping single-end and PE RNA-seq reads separately to the genome assemblies for the respective species (Supplementary Fig. 7). The pipeline for the identification of trans-splicing events is modified from an earlier study 23 (Supplementary Fig. 6). Briefly, quality-filtered reads are aligned to a reference genome by TopHat 55 (v2.0.9) with the default parameters. Mapped reads are filtered, and unmapped reads are processed by Bowtie 56 (v1.0.0) for another round of filtering with the default parameters. These two filtering steps ensure the removal of reads derived from canonical cis-splicing. The retained reads are then mapped to a between-gene exon-exon fusion library using the default parameters of Bowtie 56 (v1.0.0). A library is constructed for each of the eight insect species based on their annotated gene models and defined exon-intron boundaries. Each library includes all possible fusion sequences between exons of two different genes within a single species. A trans-splicing event is identified only when a match is found to meet the following criteria: (i) the junction site is covered by at least two non-redundant reads, one of which must be a perfect match; (ii) reads covering the junction site must match both arms for at least 20 bp each; (iii) in addition to covering the junction site, the trans-splicing event must be supported by at least one of the PE reads bridging the two flanking regions; and (iv) trans-splicing candidate events between overlapping or neighbouring genes are further examined, and events between genes with the same orientation are removed to eliminate false positives stemming from conjoined genes resulting from possible transcriptional read-through. Using this approach, with the above filters designed to eliminate false positives, the success rate of the identified events, validated by the RT-PCR assay as described below, is found to be independent of the quality of the genome assembly.

Experimental validation of trans-spliced transcripts. Wild-type (Canton-S) D. melanogaster samples are collected from four developmental stages, including embryo (24 h after egg laying), third-instar larvae, pupae (third day after pupating) and adult (2-3 days after eclosion).
Adult D. plexippus samples are provided by Dr Steven Reppert (University of Massachusetts) and are stored in RNAlater (Life Technologies) during shipment. Total RNA is isolated separately from each D. melanogaster sample, and from the D. plexippus thorax and head, using an RNeasy Plus Mini Kit (Qiagen) with gDNA Eliminator Spin Columns to remove gDNA contamination. RNA is reverse-transcribed using a RevertAid First Strand cDNA Synthesis Kit (Thermo Scientific) with oligo(dT) and random primers. To detect template-switching artefacts generated during reverse transcription, two different RTases, MMLV-derived RTase (Thermo Scientific) and AMV-derived RTase (New England Biolabs), are used in parallel reverse transcription experiments 24,29 . cDNA from each D. melanogaster stage, and from the D. plexippus thorax and head, is mixed in equal amounts, respectively, before RT-PCR experiments. The trans-splicing-specific oligonucleotide primers for D. melanogaster and D. plexippus are designed using Primer Premier 5 (ref. 57; Supplementary Table 3). RT-PCR reactions are performed using KOD FX (Toyobo) DNA polymerase for 50 cycles. rp49 (a gene of the ribosomal protein L32e family) in D. melanogaster, and the glyceraldehyde-3-phosphate dehydrogenase gene (DPOGS215460-TA) and actin gene (DPOGS207542-TA) in D. plexippus, are used as controls. The amplified trans-spliced products are extracted from an agarose gel with a gel extraction kit (OMEGA) and are sequenced from both ends by Sanger sequencing. Only the candidates that are validated by RT-PCR using both MMLV-derived RTase and AMV-derived RTase are considered to be true positives for trans-splicing events.

Pairwise comparison of trans-spliced transcripts. Trans-spliced coding transcripts are first translated into protein sequences. Pairwise comparisons are then performed between different species using BLASTP software (version 2.2.21). The donor and acceptor segments are compared separately. Events with both the donor and the acceptor reaching the threshold (E-value of 1E-5, coverage of 60% and identity of 30%) are considered to be conserved trans-splicing events. As both the donor and acceptor segments become incorporated into trans-spliced products, the criteria are designed to exclude cases of a partial match by either the donor or the acceptor segment alone.

Phylogenetic analysis of mod(mdg4)-like proteins. The common N-termini of mod(mdg4) proteins from the six insect species (D. melanogaster, B. mori, A. aegypti, D. plexippus, A. gambiae 17 and P. xylostella) and from D. pulex are aligned using the PFAM profile hidden Markov model for the BTB domain and HMMalign 59 (HMMer package version 3.1b1). Multiple alignments for the BTB domain regions are shown in Fig. 4a,b. Hmmsearch 59 is used to search for hits of the FLYWCH domain in the variable C-termini of mod(mdg4) proteins from the six species with an E-value cutoff of 1e-5. The identified FLYWCH domain regions are isolated and aligned using the pHMM FLYWCH.hmm and HMMalign. Terminal tails of non-aligned residues are trimmed, and unambiguous sequence alignments are subsequently subjected to phylogenetic analysis. The ML analysis is performed with the programme PhyML 60 (version 3.1) using the default parameters (amino-acid substitution model: LG; shape parameter: estimated gamma distribution). The D. pulex FLYWCH domain sequence is used as the outgroup. The ML analysis is initiated with a BIONJ tree, and tree topology is searched by the Nearest-Neighbour Interchanges method.
The Shimodaira-Hasegawa-like approximate likelihood ratio test is used for the branch supports. The phylogenetic tree is visualized with the programme Figtree 61 .

Trans-spliced products resembling canonical transcripts. Trans-spliced coding transcripts are first translated into protein sequences. Pairwise comparisons and alignments using the BLASTP (version 2.2.21) programme are performed between the trans-spliced products and the canonical gene products from each of the other 13 species, including C. pipiens, A. aegypti, A. gambiae, D. melanogaster, P. xylostella, B. mori, H. melpomene, D. plexippus, T. castaneum, A. mellifera, A. echinatior, C. floridanus, A. pisum and D. pulex. A 'homologous' trans-splicing event is defined as one in which an alignment between a trans-splicing product and a canonical protein reaches the threshold (E-value of 1E-5, coverage of 90% and identity of 50%). The stringent criteria are used to exclude partial homologues between trans-spliced products and canonical gene products.
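The threshold logic used in the two BLASTP screens above (E-value, identity and coverage cutoffs) is straightforward to express programmatically. The sketch below filters BLASTP tabular output by the 'homologous event' thresholds; the tabular format (-outfmt 6), the query-length dictionary, and the definition of coverage as alignment length over query length are assumptions for illustration, not the authors' actual implementation.

```python
# Assumed field layout of BLASTP tabular output (-outfmt 6):
# qseqid sseqid pident length mismatch gapopen qstart qend sstart send evalue bitscore
def homologous_hits(blast_tab_path, query_lengths,
                    max_evalue=1e-5, min_coverage=0.90, min_identity=50.0):
    """Keep hits meeting the 'homologous trans-splicing event' thresholds."""
    hits = []
    with open(blast_tab_path) as fh:
        for line in fh:
            f = line.rstrip("\n").split("\t")
            qid, sid = f[0], f[1]
            pident, aln_len, evalue = float(f[2]), int(f[3]), float(f[10])
            # Coverage taken here as alignment length over query length;
            # the paper does not state its exact coverage definition.
            coverage = aln_len / query_lengths[qid]
            if evalue <= max_evalue and coverage >= min_coverage and pident >= min_identity:
                hits.append((qid, sid, pident, coverage, evalue))
    return hits
```

The conserved-event screen would use the same filter with the looser 60% coverage and 30% identity cutoffs, applied separately to the donor and acceptor segments and requiring both to pass.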
Premature ovarian failure risk factors in an Iranian population

Background: The aim of this study was to determine possible correlates of premature ovarian failure (POF) in an Iranian population.
Methods: In a case-control study, 80 patients with POF were compared with 80 controls enrolled from the same setting during 2007-2008. A food diary was used to assess food consumption habits.
Results: Mean age of starting ovarian failure symptoms was 19.3 ± 5.7 years and mean age of menopause was 22.6 ± 6.3 years. Familial coincidence was observed in 16 POF patients versus no one in the control group (P < 0.05). POF patients had a lower frequency of eating both red meat and fish when compared with controls (P < 0.001). POF and control subjects consumed similar amounts of dairy products, being 5.3 ± 3.2 times per week in the POF group and 5.6 ± 2.1 times in the control group.
Conclusion: In this study, an association between POF and lower red meat or fish consumption was found.

Introduction

Premature ovarian failure (POF) is the occurrence of amenorrhea in conjunction with raised serum follicle stimulating hormone (FSH > 40 IU/L) before the age of 40 years. 1 Terminology is inconsistent; some other terms have been used, such as premature menopause, premature ovarian dysfunction, and primary ovarian insufficiency. It is reported to occur in 0.9%-3.0% of the general female population. 2 For every decade before the age of 40, the prevalence of POF is estimated to decrease by a factor of 10. POF is the etiology in 10%-28% of cases with primary amenorrhea and in 4%-18% of those with secondary amenorrhea. 3 POF, other than affecting different aspects of quality of life, may lead to infertility, osteoporosis, and cardiovascular disorders, which puts it on high priority in reproductive and health research. 4 Turner syndrome and gonadal dysgenesis are the best known causes of early POF. Nevertheless, in normal 46,XX karyotype females presenting with early POF, the etiology is most often unknown. 3 There are ethnic differences in the prevalence of POF in different populations. 5 The ethnic prevalence variations suggest possible etiologic differences due to environmental, genetic, and nutritional factors. Conducting etiologic studies in different settings or populations can help improve the knowledge regarding the etiology and predictors of POF. The aim of this study was to determine possible correlates of POF in an Iranian population.

Methods

In a case-control study, 80 patients with POF were compared with 80 controls enrolled during 2007-2008. The study setting was Alzahra University Hospital and Tabriz Subspecialty Clinic for Infertility. Cases were menopausal women under the age of 40 referred to the study centers. All the cases had been referred due to amenorrhea and infertility and were assessed twice for serum FSH; they were assessed for POF if FSH was found to be above 30 U/L. Eighty women lacking the outcome of interest were also enrolled as controls. Food consumption variables were measured using a food diary assessment tool, measuring the average frequency of consumed foods. Controls were selected from other clinics of Alzahra University Hospital. They were matched with cases for age using a frequency matching technique. Data were entered into the computer and analyzed using bivariate statistical tests as well as multivariate regression analysis. All the test results were interpreted as two-tailed results, and a P value < 0.05 was considered statistically significant.
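As a concrete illustration of the bivariate analysis just described, the 2x2 comparison of familial coincidence reported in this study (16 of 80 POF patients versus 0 of 80 controls) can be tested as follows. The paper does not state which specific bivariate tests were used, so this SciPy sketch is illustrative only.

```python
from scipy.stats import chi2_contingency, fisher_exact

# Rows: POF cases, controls; columns: familial coincidence yes / no
table = [[16, 64],
         [0, 80]]

chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)  # exact test is safer with a zero cell
print(f"chi-square P = {p_chi2:.1e}; Fisher exact P = {p_fisher:.1e}")
# Both P values fall well below 0.05, consistent with the reported P < 0.05.
```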
The study protocol was approved by the regional committee of ethics in Tabriz University of Medical Sciences.

Results

Mean (standard deviation) ages of the patients were 30.3 (5.9) and 30 (5.6) years for the POF and control groups, respectively (P = 0.3). Eighty-five percent of POF and 76% of control group subjects had urban residence (P = 0.2). The difference in education level distribution between groups was not statistically significant (Figure 1). Mean age of starting ovarian failure symptoms was 19.3 ± 5.7 years, and mean age of menopause was 22.6 ± 6.3 years. In 26 patients in this group, menstruation continued dependent on drug treatment. Maternal menopausal age, if reached, was 45.0 ± 3.8 years in the POF group versus 46.6 ± 3.3 years in the control group (P < 0.05). Mean age of menarche was 13.9 ± 2.0 years in the POF group and 13.6 ± 1.2 years in the control group; the difference was not statistically significant. A history of lifelong irregular menstruation was reported by 76.3% of participants in the POF group, while no one reported an irregular menstruation history in the control group (P < 0.001). Seven patients in the POF group had a history of ovarian surgery; the surgery type was cystectomy in five patients and cauterization in two cases. A history of mumps was reported in 16 POF versus 25 control participants; the difference was not statistically significant. Familial coincidence was observed in 16 POF patients versus no one in the control group (P < 0.05). Familial marriage of the parents was the case in 23 and 30 subjects in the POF and control groups, respectively; the difference was not found to be statistically significant. None of the patients in either group had a history of radiotherapy or chemotherapy. Only four patients in the POF group reported a history of renal disease (P = 0.1). Diabetes, thyroid disease, lupus erythematosus, and rheumatoid arthritis did not exist in any of the participants of the study groups. POF patients had a lower frequency of consuming both red meat and fish compared with controls (P < 0.001). POF patients consumed red meat a mean of 2.8 times and fish 0.23 times per week; controls consumed red meat a mean of 4.6 times and fish 1.37 times per week. Vegetable consumption as reported by the POF patients had a mean of 3.1 ± 2.2 times per week, while the mean weekly consumption of vegetables was 3.5 ± 3.3 times among controls; the difference was not statistically significant. POF patients consumed fast food a mean of 0.4 times per week; the figure was 0.5 times per week for controls (P = 0.3). POF and control subjects consumed similar amounts of dairy products, being 5.3 ± 3.2 times per week in the POF group and 5.6 ± 2.1 times in the control group (P = 0.5). Also, living close to high-voltage power posts, living close to industrial centers, and using mobile phones were not found to be associated with POF (P values were 0.210, 0.444, and 0.200, respectively). None of the participants were smokers. Using multivariate regression analysis, the amounts of weekly consumption of red meat and fish appeared as independent predictors of POF.

Discussion

Some of the possible predictors of POF were not found to be significant. This is possibly due to the low incidence and limited variability of these factors between the compared groups.
For example, contrary to many foreign studies, smoking is rare among women in this context, and there were no smokers in either group, so the study could not investigate an association between smoking and POF. No doubt, in cases where there is a lack of variation, as in the present study, such factors will be classed as insignificant. Nevertheless, smoking is repeatedly reported to be associated with POF. 6-10 In the present study, a history of irregular menstruation was found to be strongly associated with POF. This was in line with a large Italian study reporting that the risk of POF and of early menopause was higher among women reporting lifelong irregular menstrual cycles. 11 The present study was a case-control study, and the Italian study was a cross-sectional study; however, due to the lack of clear-cut temporality assessment, in both cases caution needs to be taken in interpreting the results. Considering this point, and also the lack of a reasonable biological plausibility to explain a direct causal association, it cannot be clarified with certainty whether irregular menstrual cycles can be considered a risk factor, a predisposing factor, or even an early symptom of POF. 12 Nevertheless, some studies have also supported the lack of an association between irregular menstrual cycles and ovarian failure. 13 Due to such uncertainty, the authors of the present paper preferred to assess and discuss the association only through a bivariate analysis rather than including it in the multivariate analysis. There was no association found between age at menarche and POF, which was consistent with the results of the Italian study, but conflicting results can also be found in the literature. 11,14,15 In the present study, insufficient or no cases of ovarian surgery, chemotherapy, or radiotherapy existed to investigate an association between these factors and POF. However, such an association seems quite plausible and has also been reported in previous studies. 16-18 Although the present study was not resourced to perform genetic assessments, family clustering of the disease was a significant finding. Although preferential recall of family history by women with early menopause could contribute to the association between family history and early menopause observed in this study, a genetic factor is also plausible, including partial deletions of the X chromosome, compatible with the deficiency of male siblings in cases with a family history of early menopause. 19 A hereditary or genetic link to POF is well discussed and substantially supported in the previous POF literature. 20-24 Interestingly, in the present study there was an association between POF and lower red meat or fish consumption. If we consider this to be a possible causal effect, the mechanism may be sought in dietary amino acid balance, minerals such as zinc present in all kinds of meat, omega fatty acids in fish, and other potential mechanisms. However, a major confounder in this regard is economic level, 25,26 which is very difficult to measure and control for in developing countries and is recommended to be considered in future research.
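A minimal sketch of the kind of multivariate regression reported above is a logistic model of case/control status on weekly red meat and fish consumption. The data below are synthetic (centred on the reported group means); the paper does not publish patient-level data or its exact model specification, so this statsmodels example is illustrative only.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Synthetic frequencies centred on the reported means (red meat: 2.8 vs 4.6,
# fish: 0.23 vs 1.37 times per week for POF cases vs controls).
df = pd.DataFrame({
    "pof": np.r_[np.ones(80), np.zeros(80)],
    "red_meat": np.r_[rng.normal(2.8, 1.2, 80), rng.normal(4.6, 1.2, 80)].clip(0),
    "fish": np.r_[rng.normal(0.23, 0.5, 80), rng.normal(1.37, 0.6, 80)].clip(0),
})

X = sm.add_constant(df[["red_meat", "fish"]])
fit = sm.Logit(df["pof"], X).fit(disp=0)
print(fit.params)   # negative coefficients: higher intake, lower odds of POF
print(fit.pvalues)
```

A real analysis would also adjust for candidate confounders such as economic level, which, as noted above, was not measured here.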
Efficacy and safety of delayed-release dimethyl fumarate in patients newly diagnosed with relapsing-remitting multiple sclerosis (RRMS)

BACKGROUND: Delayed-release dimethyl fumarate (DMF) demonstrated efficacy and safety in the Phase 3 DEFINE and CONFIRM trials.
OBJECTIVE: To evaluate delayed-release DMF in newly diagnosed relapsing-remitting multiple sclerosis (RRMS) patients, in a post-hoc analysis of integrated data from DEFINE and CONFIRM.
METHODS: Patients included in the analysis were diagnosed with RRMS within 1 year prior to study entry and naive to MS disease-modifying therapy.
RESULTS: The newly diagnosed population comprised 678 patients treated with placebo (n = 223) or delayed-release DMF 240 mg BID (n = 221) or TID (n = 234). At 2 years, delayed-release DMF BID and TID reduced the annualized relapse rate by 56% and 60% (both p < 0.0001), risk of relapse by 54% and 57% (both p < 0.0001), and risk of 12-week confirmed disability progression by 71% (p < 0.0001) and 47% (p = 0.0085) versus placebo. In a subset of patients (MRI cohort), delayed-release DMF BID and TID reduced the mean number of new or enlarging T2-hyperintense lesions by 80% and 81%, gadolinium-enhancing lesion activity by 92% and 92%, and mean number of new non-enhancing T1-hypointense lesions by 68% and 70% (all p < 0.0001 versus placebo). Flushing and gastrointestinal events were associated with delayed-release DMF.
CONCLUSION: Delayed-release DMF improved clinical and neuroradiological outcomes relative to placebo in newly diagnosed RRMS patients.

Introduction

The pathological course of multiple sclerosis (MS) is believed to evolve over time. In the early stages of the disease, autoreactive lymphocytes gain access to the central nervous system, initiating a cascade of events leading to demyelination, axonal transection, and neurodegeneration. 1,2 In later stages, infiltrative inflammation plays a less prominent role, but extensive neuronal loss and gliosis are evident. 1 Hence, initiation of MS treatment early in the disease course, when the potential for slowing the accumulation of damage is greatest, could be a clinically meaningful approach. Consistent with this, previous studies with interferon beta and glatiramer acetate (GA) demonstrated an association between early treatment and improved outcomes, such as a prolonged time to conversion from clinically isolated syndrome (CIS) to clinically definite MS (CDMS) and a reduction in the number and volume of lesions on MRI. 3-9 Delayed-release dimethyl fumarate (DMF) is a novel, oral MS therapeutic studied in people with relapsing-remitting MS (RRMS). In a pre-specified integrated analysis of the Phase 3 DEFINE and CONFIRM trials, 10,11 delayed-release DMF 240 mg twice (BID) and three times daily (TID) resulted in significant reductions in clinical and magnetic resonance imaging (MRI) activity and demonstrated an acceptable safety profile in RRMS patients over 2 years. 12 These effects were generally consistent across subgroups of patients stratified by baseline demographic and disease characteristics. 13 The mechanism by which delayed-release DMF exerts its therapeutic effect in MS is unknown. However, clinical benefits are believed to be related, in part, to activation of the nuclear factor (erythroid-derived 2)-like 2 (Nrf2) pathway 14,15 and modulation of the expression of pro- and anti-inflammatory cytokines 16-19 and phase 2 detoxification enzymes. 15,18
Inflammation and neurodegenerative processes are prominent early in the course of MS, so agents with putative dual anti-inflammatory and neuroprotective effects, such as delayed-release DMF, may be particularly useful. To examine the efficacy of delayed-release DMF in newly diagnosed patients, a post-hoc analysis of integrated data from DEFINE and CONFIRM was conducted. The newly diagnosed population included patients who had been diagnosed with RRMS within 1 year prior to study entry and were naïve to MS disease-modifying therapy. The analysis included clinical and neuroradiological efficacy endpoints as well as basic safety data (adverse events).

Patients and study design

The designs of the DEFINE and CONFIRM trials have been described in detail elsewhere. 10,11 Briefly, eligible adult patients (18-55 years) had a diagnosis of RRMS per McDonald diagnostic criteria 20 and an Expanded Disability Status Scale (EDSS) score of 0-5.0, inclusive. 21 Further, patients had experienced at least one clinically documented relapse within one year prior to randomization, with a prior brain MRI demonstrating lesion(s) consistent with MS, or at least one Gd+ lesion on a brain MRI scan obtained within 6 weeks prior to randomization. The primary endpoint of DEFINE was the proportion of patients relapsed at 2 years. The primary endpoint of CONFIRM was the annualized relapse rate (ARR) at 2 years. Additional endpoints included the time to 12-week sustained disability progression, number of Gd+ lesions, number of new or enlarging T2-hyperintense lesions, and number of new T1-hypointense lesions, all at 2 years. The integrated analysis of DEFINE and CONFIRM was pre-specified prior to the unblinding of CONFIRM and was to be conducted only if the patient populations and treatment effects were similar between the studies. The integrated analysis was considered valid due to the many similarities between DEFINE and CONFIRM, including inclusion/exclusion criteria, regions from which patients were recruited, overall design, measurement criteria, and observed efficacy. Roughly equal proportions of newly diagnosed patients were drawn from each of the pivotal studies. The newly diagnosed population was defined as patients who had been diagnosed with RRMS per McDonald diagnostic criteria within 1 year prior to study entry 20 and who were naïve to MS disease-modifying therapy. The 1-year criterion was defined prior to the analysis being conducted and was chosen because the median time since diagnosis of RRMS in the overall treatment-naïve population was 1 year.

Statistical analysis

This post-hoc analysis was performed on data from the integrated DEFINE and CONFIRM dataset. The pre-planned integrated analysis was finalized prior to unblinding of CONFIRM and required baseline characteristics and treatment effects to be homogeneous across the studies. The analysis included data from patients in the intent-to-treat (ITT) population (defined as patients who underwent randomization and received at least one dose of study drug) who were randomized to receive placebo or delayed-release DMF BID or TID. Patients randomized to receive GA were excluded because there was no GA comparator arm in DEFINE, and because CONFIRM was not designed to test the superiority or non-inferiority of delayed-release DMF to GA. In general, the analyses were based on all observed data before patients switched to alternative therapies.
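The core model family used for the ARR endpoint described next, a negative binomial regression of relapse counts with log follow-up time as an offset, can be sketched briefly: with this parameterization, exp(coefficient) for treatment is a rate ratio, and 1 minus the rate ratio is the relative reduction in ARR. The data below are simulated and the covariates used in the actual analysis (EDSS, age, study, region, prior relapses) are omitted, so this is an illustrative sketch, not the trial's analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "treated": np.r_[np.ones(n // 2), np.zeros(n // 2)],
    "years": rng.uniform(0.5, 2.0, n),  # follow-up time per patient
})
# Simulate relapse counts at roughly the reported rates
# (placebo ARR ~0.38, delayed-release DMF BID ~0.17).
rate = np.where(df["treated"] == 1, 0.17, 0.38)
df["relapses"] = rng.poisson(rate * df["years"])

model = smf.glm("relapses ~ treated", data=df,
                family=sm.families.NegativeBinomial(alpha=1.0),
                offset=np.log(df["years"])).fit()
rate_ratio = np.exp(model.params["treated"])
print(f"estimated rate ratio: {rate_ratio:.2f} "
      f"(relative reduction ~{1 - rate_ratio:.0%})")
```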
MRI endpoints were analyzed using ITT patients in the MRI cohort for whom at least one MRI scan was available for analysis. MRI lesion count data post-early withdrawal or post-alternative MS treatment usage were imputed using a constant rate assumption. Annualized relapse rate (ARR; total number of relapses divided by patient-years in the study, excluding data obtained after patients switched to alternative MS medications) was analyzed with the use of a negative binomial regression model adjusted for baseline EDSS score (≤2.0 vs. >2.0), baseline age (<40 vs. ≥40), study, region (1 [United States], 2 [Western European countries, Canada, Costa Rica, Australia, New Zealand, Israel, and South Africa], or 3 [Eastern European countries, India, Guatemala, and Mexico]) and number of relapses in the year prior to study entry. Regions were pre-defined based on geography, type of health care system, and access to health care in each country. The proportion of patients relapsed was derived using Kaplan-Meier analysis and analyzed with the use of a Cox proportional hazards model with study as a stratifying factor, and adjusted for baseline age (<40 vs. ≥40), region, baseline EDSS score (≤2.0 vs. >2.0), and number of relapses in the year prior to study entry. Disability as measured by time to 12-week confirmed EDSS progression was analyzed using a Cox proportional hazards model with study as a stratifying factor, and adjusted for the following covariates: baseline EDSS score (as a continuous variable), baseline age (<40 vs. ≥40), and region. The odds of having more Gd+ lesions were analyzed using ordinal logistic regression adjusted for study, region, and baseline number of Gd+ lesions. The mean number of new or enlarging T2-hyperintense lesions was analyzed using negative binomial regression adjusted for study, region, and baseline T2-hyperintense volume. The mean number of new non-enhancing T1-hypointense lesions was analyzed using negative binomial regression adjusted for study, region, and baseline volume of T1-hypointense lesions.

Study population

The ITT population for the integrated analysis comprised 2301 patients, of whom 678 met newly diagnosed RRMS criteria (332 from DEFINE and 346 from CONFIRM; n = 223, 221, and 234 in the placebo, delayed-release DMF BID, and delayed-release DMF TID groups, respectively). A subset of these patients (n = 100, 99, and 109 in the placebo, delayed-release DMF BID, and delayed-release DMF TID groups, respectively) comprised the MRI cohort. Baseline demographic and disease characteristics were similar across treatment groups (Table 1). The mean time since diagnosis (standard deviation [SD]) was 0.5 (0.5) years in all treatment groups. The proportion of patients who had received prior treatment with steroids was 7% in the placebo group, 10% in the delayed-release DMF BID group, and 9% in the delayed-release DMF TID group. The proportion of patients who completed the study was 85% in the placebo group, 78% in the delayed-release DMF BID group, and 80% in the delayed-release DMF TID group (Figure 1). The proportion of patients who completed 2 years of study treatment was 70% in the placebo group, 71% in the delayed-release DMF BID group, and 75% in the delayed-release DMF TID group. The mean (SD) number of weeks on study treatment was 80.0 (28.3) in the placebo group.

Clinical efficacy

The frequency of relapse in the newly diagnosed population was reduced significantly by delayed-release DMF treatment.
The ARR at 2 years was 0.38 in the placebo group, 0.17 in the delayed-release DMF BID group, and 0.15 in the delayed-release DMF TID group, representing relative reductions of 56% (BID) and 60% (TID; both p < 0.0001 vs. placebo; Figure 2). The risk of relapse was also reduced by delayed-release DMF compared with placebo. On the basis of Kaplan-Meier estimates, the proportion of patients relapsed at 2 years was 0.42 in the placebo group, 0.21 in the delayed-release DMF BID group, and 0.21 in the delayed-release DMF TID group, representing relative reductions of 54% (BID) and 57% (TID; both p < 0.0001 vs. placebo; Figure 3(a)).

The risk of 12-week sustained disability progression over 2 years was reduced significantly among newly diagnosed patients receiving delayed-release DMF compared with placebo. On the basis of Kaplan-Meier estimates, the proportion of patients with confirmed 12-week disability progression at 2 years was 0.23 in the placebo group, 0.07 in the delayed-release DMF BID group, and 0.14 in the delayed-release DMF TID group, representing relative reductions of 71% (BID; p < 0.0001 vs. placebo) and 47% (TID; p = 0.0085 vs. placebo; Figure 3(b)).

(Figure 1. Patient disposition: discontinued study drug, n = 17; completed study, n = 189, 188, and 173. *DMF, delayed-release DMF.)

Neuroradiological efficacy

The mean number of new or enlarging T2-hyperintense lesions, the odds of having more Gd+ lesions, and the mean number of new non-enhancing T1-hypointense lesions were all reduced significantly in the newly diagnosed population by delayed-release DMF treatment at 2 years. The adjusted mean number of new or enlarging T2-hyperintense lesions at 2 years was 20.0 in the placebo group, 4.0 in the delayed-release DMF BID group, and 3.9 in the delayed-release DMF TID group, representing relative reductions of 80% (BID) and 81% (TID; both p < 0.0001 vs. placebo; Figure 4(a)). The odds of having more Gd+ lesions at 2 years were reduced by 92% in both the delayed-release DMF BID and the delayed-release DMF TID groups (both p < 0.0001 vs. placebo; Figure 4(b)). The adjusted mean number of new non-enhancing T1-hypointense lesions at 2 years was 6.6 in the placebo group, 2.1 in the delayed-release DMF BID group, and 2.0 in the delayed-release DMF TID group, representing relative reductions of 68% (BID) and 70% (TID; both p < 0.0001 vs. placebo; Figure 4(c)).

Adverse events

The overall incidence of adverse events in the newly diagnosed population was similar across the placebo (92%), delayed-release DMF BID (97%), and delayed-release DMF TID (95%) groups (Table 2). Adverse events reported more frequently in patients receiving delayed-release DMF compared with placebo (experienced by ≥10% of patients in any group, with an incidence ≥3% higher in either delayed-release DMF group vs. placebo) included flushing, nasopharyngitis, headache, diarrhea, nausea, upper abdominal pain, and abdominal pain (Table 2). The overall incidence of adverse events leading to discontinuation of study treatment was 5% in the placebo group, 12% in the delayed-release DMF BID group, and 11% in the delayed-release DMF TID group.

Discussion

In this post-hoc analysis of integrated data from DEFINE and CONFIRM, delayed-release DMF demonstrated strong efficacy across a broad range of clinical and neuroradiological outcome measures in patients newly diagnosed with RRMS.
Over 2 years, both dosing regimens of delayed-release DMF (240 mg BID and TID) significantly reduced the ARR, the risk of relapse, the proportion of patients with 12-week confirmed disability progression, the odds of having more Gd+ lesions, the mean number of new or enlarging T2-hyperintense lesions, and the mean number of new non-enhancing T1-hypointense lesions, compared with placebo. The effects of delayed-release DMF in the newly diagnosed population were numerically stronger than those seen in the overall ITT population of DEFINE and CONFIRM, 10,11,13 consistent with findings from previous studies that early intervention with interferons or GA is associated with improved outcomes. 3-9

From a neuropathological and clinical perspective, the rationale for early intervention is strong. As neurodegenerative effects, including axonal transection, are observed from the early stages of the disease, 2 and greater frequency of relapse and higher lesion load in early MS are associated with poorer long-term outcomes, 22-25 immediate intervention in newly diagnosed patients may slow the accumulation of damage and progression of disability. Indeed, associations between MS disease activity and long-term clinical prognosis seem to become weaker over time, suggesting an early window of maximal therapeutic opportunity. 23-25

There is no accepted universal criterion for newly diagnosed or "early" RRMS. Among the criteria that have been used previously are time from symptom onset, time from diagnosis, EDSS score, clinical presentation consistent with CIS, conversion from CIS, 29,30 or even 8-10 years from diagnosis. 31,32 As we sought to conduct a robust analysis of a newly diagnosed population, we selected 1 year from diagnosis as our criterion to ensure our study was adequately powered. One year was the median time since diagnosis for the treatment-naïve population of DEFINE and CONFIRM, and it fell clearly within the range of criteria used to characterize the newly diagnosed population in the literature. It should be noted, however, that the patient subgroup evaluated in this report represents patients with early RRMS and not patients with CIS at disease onset.

The recommended dosing regimen of delayed-release DMF is 240 mg BID. The data for 240 mg TID were included here to explore the general consistency of the effects between the doses. In the newly diagnosed population, as in the ITT population, the effect sizes with delayed-release DMF BID and TID were broadly similar. For disability progression, both BID and TID showed a treatment effect in the same direction; although the effect size differed between BID and TID (71% vs. 47% reduction), the confidence intervals of the point estimates overlapped, so the difference likely reflects variability in the data from the newly diagnosed population.

The safety and tolerability profile of delayed-release DMF in newly diagnosed patients presented here, while limited in scope, is acceptable and comparable to that seen in the overall integrated safety population of DEFINE and CONFIRM. For example, the overall incidence of adverse events in the newly diagnosed subgroup was 92%, 97%, and 95% in the placebo, delayed-release DMF BID, and delayed-release DMF TID groups, respectively, compared with 93%, 95%, and 94% in the overall safety population. 12
Flushing, nasopharyngitis, and gastrointestinal events including diarrhea, nausea, and abdominal pain were among the most common adverse events reported by patients treated with delayed-release DMF in both populations. 12 The incidence of adverse events leading to discontinuation of study treatment in the newly diagnosed subgroup was 5%, 12%, and 11% in the placebo, delayed-release DMF BID, and delayed-release DMF TID groups, respectively, compared with 12%, 14%, and 14% in the overall safety population. 12

Finally, it should be borne in mind that this is a post-hoc analysis, and, as such, the results should be interpreted cautiously. The study was not powered a priori to analyze the endpoints presented herein in this subgroup of newly diagnosed patients. Therefore, further prospective confirmation is necessary to support our findings.

(Figure 4 caption, continued: (b) The odds of having more Gd+ lesions were analyzed using ordinal logistic regression adjusted for study, region, and baseline number of Gd+ lesions; percentages are the reduction in odds of having more Gd+ lesion activity, compared with placebo. (c) The number of new non-enhancing T1-hypointense lesions was analyzed using negative binomial regression adjusted for study, region, and baseline volume of T1-hypointense lesions. Error bars indicate 95% CI. Abbreviations: BID, twice daily; CI, confidence interval; Gd+, gadolinium-enhancing; TID, three times daily. §p < 0.0001 vs. placebo.)

Conclusion

This integrated post-hoc analysis suggests strong treatment efficacy of delayed-release DMF in patients with newly diagnosed RRMS, and further supports the use of delayed-release DMF as an oral treatment option in a broad spectrum of people with relapsing MS.

(Table 2, continued: Pain in extremity, 22 (10), 13 (6), 14 (6). aThese events had an incidence ≥3% higher in either delayed-release DMF group vs. placebo. Abbreviations: BID, twice daily; MS, multiple sclerosis; TID, three times daily. *DMF, delayed-release DMF.)
Ground Layer Adaptive Optics for the W. M. Keck Observatory: Feasibility Study

Ground-layer adaptive optics (GLAO) systems offer the possibility of improving the "seeing" of large ground-based telescopes and increasing the efficiency and sensitivity of observations over a wide field of view. We explore the utility and feasibility of deploying a GLAO system at the W. M. Keck Observatory in order to feed existing and future multi-object spectrographs and wide-field imagers. We also briefly summarize science cases, spanning exoplanets to high-redshift galaxy evolution, that would benefit from a Keck GLAO system. Initial simulations indicate that a Keck GLAO system would deliver a 1.5× and 2× improvement in FWHM at optical (500 nm) and infrared (1.5 micron) wavelengths, respectively. The infrared instrument, MOSFIRE, is ideally suited for a Keck GLAO feed in that it has excellent image quality and is on the telescope's optical axis. However, it lacks an atmospheric dispersion compensator, which would limit the minimum usable slit size for long-exposure science cases. Similarly, while LRIS and DEIMOS may be able to accept a GLAO feed based on their internal image quality, they lack either an atmospheric dispersion compensator (DEIMOS) or flexure compensation (LRIS) to utilize narrower slits matched to the GLAO image quality. However, some science cases needing shorter exposures may still benefit from Keck GLAO, and we will investigate the possibility of installing an ADC.

Introduction

Ground-layer adaptive optics (GLAO) provides partial correction of atmospheric blurring over a significantly larger field of view (several to 10 arcminutes) and over a broader wavelength range (optical and infrared) than classical AO systems. 1 Science applications enabled by GLAO are thus complementary to classical AO and include extra-galactic spectroscopic surveys over a broad range of redshifts, intergalactic and circumgalactic medium studies with integral field and slit spectrographs, and stellar population studies in the Milky Way and nearby galaxies where crowding limits current sensitivities. Maunakea is ideally suited for GLAO with its thin ground layer and weak free-atmosphere turbulence. 2 Currently, none of the 8-10 m class telescopes on Maunakea has a GLAO system; however, several observatories are investigating the feasibility of, and designing, GLAO systems for the future. In this proceeding, we explore the feasibility of a GLAO system deployed at the W. M. Keck Observatory, which could have the benefit of feeding Keck's existing seeing-limited instruments, including the multi-object spectrographs LRIS and DEIMOS in the optical and MOSFIRE in the infrared. Nearly all seeing-limited science cases would benefit from the expected 2× better spatial resolution in the near-infrared and red optical delivered by GLAO. 3,4 The ultimate question that must be addressed is whether the science gains enabled by a GLAO system at Keck are timely and significant enough to warrant the cost.

The science gains delivered by a Keck GLAO system can be quantified with the following ingredients. First, we need to estimate the image quality delivered by a notional GLAO system. We utilize multiple atmospheric modeling codes and mean Maunakea atmospheric conditions to simulate point spread functions (PSFs) for Keck GLAO. We also use results from the on-sky 'imaka experiment to validate these simulations.
Second, we need to determine how the existing instruments would further degrade the GLAO-delivered image quality through internal aberrations, distortions, and flexure. Third, we need to propagate these image quality estimates through to science metrics for a wide range of current and future science cases. Finally, science gains must always be weighed against cost and risk. In this proceeding, we present initial results from the first 2-3 steps of this process.

GLAO Performance: On-sky Validation and Keck Simulations

One of the primary motivations for GLAO at Keck has been results from on-sky experiments indicating that the turbulence above Maunakea is typically dominated by a significant, but thin, ground layer. 2 In particular, the 'imaka GLAO experiment commissioned on-sky on the UH 2.2 m telescope on Maunakea 5 has shown that the FWHM can be improved by a factor of 1.5-1.7× and the noise-equivalent area is improved by >2× at R-band and I-band with a guide star constellation spread over 18'. 3 Preliminary results from 2018 indicate that improvements of >1.3× are achievable even at B-band and V-band over an 11' field of view. 4 The GLAO PSFs are also more stable, by a factor of 7-10×. The results of 'imaka are promising for a Keck GLAO system. Furthermore, GLAO performance has been validated at the VLT, showing a 2× improved and very uniform FWHM over a 10' radius. 6

Simulations of the expected performance of a Keck GLAO system were made using the Multi-threaded Adaptive Optics Simulator (MAOS). 7 The initial system configuration is a 20×20 actuator system with four sodium laser guide stars placed on the periphery of a 10'×10' field of view. A single low-order natural-guide-star wavefront sensor was placed at the center of the field to provide tip, tilt, and focus sensing. Point spread functions were generated across the 10' field for wavelengths from 0.45 to 2.2 µm from a sequence of ten independent 5000-iteration simulations. The input optical turbulence profile is derived from the Thirty Meter Telescope project's site testing at the TMT site on Maunakea. 8 The results shown in Fig. 1 provide an initial estimate of the gains across the science wavelengths of interest and represent averages across the field of view; one-standard-deviation variations are indicated on the plot. The improvement in the FWHM is 2× at infrared wavelengths and ∼1.5× at optical wavelengths (∼500 nm). Analysis of other metrics, such as the encircled energy and the noise-equivalent area, is underway. Future simulations will focus on optimizing the image quality for the specific science instrument fields of view and wavelengths, and on including the effect of the conjugation altitude of the adaptive secondary.

Examples of GLAO Science Cases

In this section, we present a brief summary of science cases that would most benefit from GLAO-fed multi-object spectroscopy at Keck. More detailed science cases for GLAO at other telescopes have previously been published for Subaru, 9 Gemini, 10,11 CTIO, 12 and the VLT. 13-15

Measurements of galaxy properties test our physical understanding of galaxy formation and evolution. The sensitivity of ground-based extra-galactic spectroscopy is limited by the sky brightness. The improved spatial resolution delivered by GLAO is sufficient to resolve galaxies that were previously unresolved with seeing-limited observations. As a result, spectroscopic slits can be narrower, thereby reducing the solid angle of blank sky that adds noise to the galaxy spectrum.
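To make this sky-noise argument concrete before continuing with the related resolution benefits below, here is a rough, illustrative calculation that is not taken from the paper; it assumes the sky-background-limited regime and an extraction aperture that scales with the PSF FWHM:

```python
# Illustrative only: background-limited exposure-time scaling with PSF size.
# In the sky-noise-limited regime, the time needed to reach a fixed S/N on a
# point source scales with the noise-equivalent area of the extraction
# aperture, which we take here to scale simply as FWHM^2.
def exposure_time_ratio(fwhm_glao: float, fwhm_seeing: float) -> float:
    """Ratio t_GLAO / t_seeing needed to reach the same point-source S/N."""
    return (fwhm_glao / fwhm_seeing) ** 2

# Example: a 2x FWHM improvement (e.g., 0.4" GLAO vs. 0.8" natural seeing)
print(exposure_time_ratio(0.4, 0.8))  # 0.25, i.e., 4x faster in this idealized limit
```

Real gains are smaller, because GLAO PSFs are not simply rescaled and slit losses, detector noise, and non-common-path errors intervene; the instrument-specific estimates later in the text (e.g., t/1.1 to t/2.4 for LRIS) reflect this.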
Cutting the spectroscopic aperture down to the size of resolved star-forming regions within galaxies also increases spectral resolution, which improves sensitivity to spectral lines. It follows that GLAO enables spectroscopic observations of fainter galaxies. One of the great advantages of GLAO is how the improved spatial resolution allows us to resolve finer detail in distant galaxies (Fig. 2). The power of spatially resolved spectroscopy in large galaxy surveys has recently been demonstrated in the MaNGA survey using seeing-limited, integral-field, multi-object spectroscopy. A Keck GLAO system can push to higher redshifts and lower-mass galaxies than current IFU surveys such as MaNGA.

There are many additional GLAO science cases, including: (1) detailed studies of galactic nuclei; (2) resolving stellar populations and star formation histories in nearby galaxies; (3) mapping and characterizing the interstellar medium, supernova remnants, and outflows from stars in the Milky Way; (4) detection of exoplanets from the astrometric wobble of their host stars; and (5) complete imaging of Mars, Jupiter, and other planets simultaneously with their ring and moon systems. Most of these science cases exploit the improved spatial resolution of GLAO to resolve further detail on an object or overcome stellar confusion, rather than the improved sensitivity.

Instrument Feasibility

Many of Keck's existing multiplexed spectrographs and wide-field imagers could benefit from a GLAO feed. However, the feasibility of using existing instruments depends on many factors. First and foremost is whether the intrinsic optical quality of the telescope and instrument is sufficiently good to take advantage of the higher spatial resolution that GLAO would deliver. Figure 3 shows the locations and field sizes for different instruments relative to the optical axis of the telescope. We also consider other factors that might limit the science gains from a GLAO feed, such as atmospheric refraction, flexure, and slit construction; these turn out to be the limiting factors in most cases for existing instruments. In the sections below, we consider the feasibility of the two multi-object optical spectrographs, LRIS and DEIMOS, as well as the infrared multi-object spectrograph, MOSFIRE.

LRIS

The multi-object optical slit spectrograph LRIS is located on the optical axis of the telescope (Figure 3) and thus has access to the best image quality directly from the telescope. Its 6' × 6' FoV, 0.135"/pixel plate scale in the imaging direction, and designed image FWHM of ≲0.24" should make it an ideal GLAO instrument. 16 A GLAO feed for LRIS would allow the use of smaller slits for higher spectral resolution and lower background. Typical slits in seeing-limited mode are 0.8"-1.0", and we anticipate a slit width reduction to 0.5".

First, the as-delivered image quality on LRIS has been verified with on-sky images from both the LRIS Blue and Red cameras. We collected a random sample of ∼400 LRIS images from observations taken after January 2011 from the Keck Observatory Archive. Stars were identified using DAOPHOT with a number of criteria to ensure that resolved sources and artifacts were excluded. PSF fitting was performed on each star and the median stellar PSF FWHM for the data set was calculated (Figure 4). The median image FWHM for all cameras and filters is 0.836".
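The error budget discussed in the next paragraph combines independent contributions in quadrature; the following minimal sketch reproduces that arithmetic with the numbers quoted in the text:

```python
# Quadrature FWHM budget: roughly independent contributions combine as
# fwhm_total^2 = sum(fwhm_i^2); a residual term is recovered by quadrature
# subtraction. All numbers are the values quoted in the surrounding text.
import math

def quad_sum(*terms: float) -> float:
    return math.sqrt(sum(t * t for t in terms))

def quad_residual(total: float, *known: float) -> float:
    """Contribution left after removing known terms in quadrature."""
    return math.sqrt(total**2 - sum(k * k for k in known))

expected = quad_sum(0.65, 0.18)              # seeing + astigmatism -> ~0.67"
residual = quad_residual(0.836, 0.65, 0.18)  # unaccounted-for error -> ~0.49"
print(f"expected {expected:.2f} arcsec, residual {residual:.2f} arcsec")
```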
The discrepancy between that value and the expected value of 0.67" (atmospheric seeing of 0.65" plus a contribution from telescope astigmatism of 0.18", combined in quadrature) suggests the presence of 0.49" of unaccounted-for wavefront error during typical LRIS usage. The telescope contributes ∼0.18" averaged over the field due to off-axis field aberrations. Thus, an unexplained term of ∼0.5" remains; it is likely due to slowly time-varying focus or astigmatism and should be correctable by the GLAO system.

Using realistic GLAO PSFs with the appropriate ASM conjugate altitude, we calculate exposure time improvement factors of 10% to 140% (i.e., t/1.1 to t/2.4) for stars within a 4' × 4' field at 8000 Å. We assumed GLAO system parameters that include three guide stars with a constellation diameter of 4'. These improvement factors depend on a number of parameters, including the line width, assumptions about the distribution of correctable and non-common-path wavefront error, and other factors not explored here. The most important factor is the contribution of non-common-path wavefront error. The median image quality obtained with LRIS suggests that this factor could be as high as 0.49". If it is, the improvement in required exposure time seen with GLAO would be small. However, if the error is common-path and could be sensed and corrected by the GLAO system, the improvement in required exposure time with GLAO could be as high as 150%.

In addition to the image quality, we consider other sources of aberration and distortion. LRIS has an atmospheric dispersion compensator, so differential atmospheric refraction should not contribute significantly to a GLAO PSF. On the other hand, LRIS is a Cassegrain instrument and is known to suffer from time-variable distortion and focus. Previous studies of the astrometric distortion 17 indicate that in seeing-limited mode the flexure amounts to 0.1"-0.2" over the field. However, the same document states that flexure of more than 10% of a slit width is not tolerable. Thus, if the LRIS slit width is reduced from 0.8" to 0.5", the LRIS flexure would not keep the science object on its slit. These flexures need to be more extensively quantified in order to determine whether real-time corrections could be applied when using GLAO.

DEIMOS

We investigated whether GLAO can benefit spectroscopy with the DEIMOS spectrograph on the Keck telescope. DEIMOS is a wide-field multi-object spectrograph designed for long-exposure extra-galactic work, particularly observations of distant, early galaxies. The DEIMOS science field is 16.7' × 5' with a pixel scale of 0.1185" per pixel for both the spectrograph and imager. This would be sufficient to Nyquist-sample a PSF that has been sharpened by GLAO. The intrinsic image quality within DEIMOS has been characterized using pinhole mask observations (Figure 5) and averages 0.26" from the instrument. However, the telescope has astigmatism that increases with distance from the optical axis and would add a further 0.3" (in quadrature), on average, over the DEIMOS field. Finally, DEIMOS has not been equipped with an atmospheric dispersion compensator. During long exposures, this could cause objects to shift outside the slit due to atmospheric refraction.

MOSFIRE

MOSFIRE is an infrared imager and multi-object spectrograph located at the Cassegrain focus of the Keck I telescope.
It has a 6.1'×6.1' field of view and a plate scale of 0.18" per pixel, which is a bit coarse for sampling a 0.3"-0.4" FWHM PSF from a GLAO feed. Ideally, MOSFIRE could be upgraded with a new detector (e.g., an H4RG) with more and smaller pixels to preserve the field of view while improving the sampling. The image quality inside MOSFIRE is excellent, typically smaller than a single pixel. 18 However, MOSFIRE lacks an atmospheric dispersion corrector, similar to DEIMOS, and would likely need one added to take full advantage of a GLAO feed.

Conclusions

We have studied the feasibility of deploying a GLAO system at the W. M. Keck Observatory. A GLAO system at Keck would likely deliver images with a 2× better FWHM, resulting in improved point-source sensitivity, better image resolution, and possibly higher spectral resolution through the use of narrower slits on GLAO-fed spectrographs. While Keck's existing multi-object spectrographs (LRIS, DEIMOS, MOSFIRE) have sufficient instrumental image quality to take advantage of a GLAO feed, each has a shortcoming that requires further investigation before we can determine the final delivered image quality. First, LRIS suffers from flexure, as it is a Cassegrain-mounted instrument; the degree of flexure may be large enough to prevent long exposures with a narrow (0.3") slit. Second, DEIMOS and MOSFIRE both lack an atmospheric dispersion compensator; narrowing the slit without an ADC could cause objects to shift out of the slit at certain wavelengths at low elevations due to atmospheric refraction. Installation of an ADC for DEIMOS and MOSFIRE will be explored. Investigations are also underway to determine the feasibility of implementing laser guide star wavefront sensors in front of each of the instruments, and of implementing a 1.4 m diameter adaptive secondary mirror at Keck.
Neighbourhood socio-economic vulnerability and access to COVID-19 healthcare during the first two waves of the pandemic in Geneva, Switzerland: A gender perspective

Summary

Background: Neighbourhood socio-economic inequities have been shown to affect COVID-19 incidence and mortality, as well as access to tests. This article aimed to study how the associations between inequities and COVID-19 outcomes varied between the first two pandemic waves, from a gender perspective.

Methods: We performed an ecological study based on the COVID-19 database of Geneva between Feb 26, 2020, and June 1, 2021. Outcomes were the number of tests per person, the incidence of COVID-19 cases, the incidence of COVID-19 deaths, the positivity rate, and the delay between symptoms and test. Outcomes were described by neighbourhood socio-economic level and stratified by gender and epidemic wave (first wave, second wave), adjusting for the proportion of inhabitants older than 65 years.

Findings: Low neighbourhood socio-economic levels were associated with a lower number of tests per person (incidence rate ratio [IRR] of 0.88, 0.85, and 0.83 for slightly, moderately, and highly vulnerable neighbourhoods, respectively) and with a higher incidence of COVID-19 cases and of COVID-19 deaths (IRR 2.3 for slightly vulnerable, 1.9 for highly vulnerable). The association between socio-economic inequities and the incidence of COVID-19 deaths was mainly present during the first wave of the pandemic and was stronger amongst women. The increase in COVID-19 cases amongst vulnerable populations appeared mainly during the second wave and originated from lower access to tests for men and a higher number of COVID-19 cases for women.

Interpretation: The COVID-19 pandemic affected people differently depending on their socio-economic level. Because of their employment and a higher prevalence of COVID-19 risk factors, people living in neighbourhoods of lower socio-economic level, especially women, were more exposed to COVID-19 consequences.

Funding: This research was supported by the research project SELFISH, financed by the Swiss National Science Foundation, grant number 51NF40-160590 (LIVES centre international research project call).

Introduction

Almost 2 years after its emergence, the COVID-19 pandemic has taken the lives of more than 5.9 million people worldwide 1 and is still, as of February 2022, an active threat. The COVID-19 illness can affect anyone, but the risk factors associated with severe symptoms are now better known. Male gender, 2,3 chronic conditions, 2 frailty, 4 socio-economic conditions, 2,5 housing and living conditions, 6 occupation, 7 the ability to work at home and the use of public transportation, 8 ethnicity and migrant status, 9 and access to the healthcare system affect the probability of being infected by the SARS-CoV-2 virus, of being hospitalised, and of dying from the COVID-19 illness. 10 The COVID-19 pandemic did not spread uniformly across communities, 11 leading to growing inequalities in infection and mortality. 5 Though equitable care and access to care are always important, in the case of infectious diseases they are also crucial to prevent new waves of infections. 12 Socio-economically disadvantaged communities are rendered vulnerable through an accumulation of social conditions that may increase the impact of the pandemic on these communities.
Indeed, COVID-19 incidence has been shown to be higher in neighbourhoods with disadvantaged socio-economic conditions in multiple countries, such as Switzerland, 5 the USA, 13,14 Spain (Barcelona), 15 Peru (Lima), 16 Italy, 17 France, 18 and India. 19 Spatial socio-economic inequities have been shown to affect not only COVID-19 incidence and related deaths but also access to tests and positivity, in three large US cities, 20 in France, 18 and in Switzerland. 5 Furthermore, owing to the learning and adaptation capacity of health systems and public health authorities between the first and subsequent waves, the impact of the COVID-19 pandemic varies over time; a study in the USA observed that the association between income and COVID-19 outcomes changed over time accordingly. 22 Moreover, although gender differences in the risk of SARS-CoV-2 infection have been documented, 2,21,22 the potential role that they might play at the intersection with socio-economic conditions at the community level remains to be explored. In this paper, we define gender as an institutionalised system of social practices for constituting people as two significantly different categories, men and women. 23 Because of their social roles and distribution within the workforce, women represent a majority of healthcare workers and tend to have more family caring responsibilities, all factors which could increase their exposure to SARS-CoV-2. 24 Characterising the role of neighbourhood-level social inequities, their change over time, and their interplay with gender during the COVID-19 pandemic is key for effective and appropriate public health interventions to prevent adverse outcomes and redress health inequities. 25

Using the state register ARGOS, 26 we performed an ecological study examining the association between COVID-19-related outcomes, COVID-19 testing capacities, and neighbourhood-level socio-economic inequities in a region of half a million people. We focus on how these associations vary over time and across gender.

Design

The design of this study is a population-based ecological study. It uses the ARGOS database, 26 an ongoing prospective cohort created by the Geneva health state agency (Geneva Directorate of Health), consisting of an operational database compiling all SARS-CoV-2 test results conducted in the state of Geneva. The register contains baseline, follow-up, and contact information on all COVID-19-positive tested persons (57 438 positive cases between the first case on Feb 26, 2020, and June 1, 2021; 242 821 negative cases) residing in the State of Geneva, Switzerland. Geneva is a state of around 507 600 inhabitants (in January 2021), mainly urban, with a high population density, which doubles its population on working days (excluding pandemic restrictions) as a result of national and international commuter traffic (mainly from neighbouring France).
Added value of this study Using a database gathering all SARS-CoV-2 tests of a population of half a million people in Geneva, Switzerland, we showed that poor neighbourhood socio-economic conditions are associated with a lower rate of testing, and a higher incidence, positivity, and death incidence from COVID-19. This association varied greatly between the first and the second wave of the pandemic. The association between neighbourhood socioeconomic conditions and COVID-19 death incidence was stronger for women, whereas the association with testing capacities was lower. Implications of all the available evidence The association between neighbourhood socio-economic conditions and COVID-19 mortality and access to healthcare stems in part from the occupational settings and the economic and health policies of the population. This highlights the importance of equity in access to health services and suggests targets for public health measures. Articles commuter traffic (mainly from neighbouring France). We used the ARGOS database to assess the number of tests, confirmed cases, deaths, and delay between test and symptoms by gender, epidemic wave, and neighbourhood in the State of Geneva. Gender was selfreported: participants could choose between the categories 'man', 'woman' and 'other'. The gender category 'other' was not included in our analysis, due to the very low number of cases in the time range studied (33 cases). People tested in Geneva but not living in Geneva were not included in the present analysis. We define two main time periods according to the evolution of the pandemic in Geneva: the first wave, corresponding to the surge of COVID-19 between Feb 26 and July 1, 2020, and the second wave, spanning from July 2, 2020 to June 1, 2021. The statistical office of Geneva divides the territory in 476 areas at the neighbourhood level (hereafter neighbourhood), with a median [ Table 1). The present ecological study is performed at this neighbourhood level: the outcomes are calculated for each neighbourhood, the covariates are provided for each neighbourhood, and thus the dataset used for the analysis contained one row per neighbourhood. To do so, each entry (person) of the ARGOS database was localised with its address and assigned to one neighbourhood, using the official list of 53 226 official addresses of the state of Geneva. 27 5.3% of the addresses of the patients tested positive for SARS-CoV-2 could not be recovered, as well as 12.0% of those negative, but their postal code was available. There are 61 different postal codes in Geneva, corresponding to geographic zones containing a median number [IQR] of 5 [4,11] neighbourhood and 738 [352, 1313] addresses. We used multiple imputation 28 to handle missing data. We generated 50 imputed datasets at the neighbourhood level, where patients with missing or wrong addresses were randomly attributed to a neighbourhood corresponding to their postal code. This sampling process was weighted with a probability given by the population of the neighbourhood. The statistical analyses were performed on each imputed dataset and the results were then pooled according to Rubin's law. 29 Ethics approval and consent to participate Research received the agreement of the Cantonal Ethic Committee of Geneva (CCER protocol 2020−01,273). Individuals who refused to share their data were removed from the analysis. 
Outcomes

For each neighbourhood, gender, and time period, the incidence of COVID-19 cases was assessed as the number of cases divided by the population of men or women in the area. Persons were only counted once, irrespective of the number of positive tests; thus, reinfections were not included as additional cases. The incidence of tests was calculated as the total number of tests (positive or negative, PCR or antigenic) performed by the persons living in the neighbourhood, divided by the population of interest (men, women, or all) of the area. The incidence of COVID-19 deaths was calculated as the number of deaths attributed to COVID-19 by the official Swiss authorities in the neighbourhood, divided by the population of interest. The SARS-CoV-2 positivity rate was defined as the ratio between the number of positive cases and the total number of tests performed within the population of the neighbourhood. In the ARGOS database, 85% of the positive patients had at least one follow-up call, during which they were asked whether they had symptoms and, if so, at which date. The mean delay between the date of the test result and the date of the first symptoms was then calculated within each neighbourhood. The spatial distribution of the outcomes can be found in Figure 1.

Exposures

Socio-economic vulnerability was assessed using a neighbourhood socio-economic vulnerability index (NSVI) defined by the Centre for Territorial Analysis of Inequalities (CATI-GE). 30 The state of Geneva provided, for each neighbourhood, six variables corresponding to different aspects of vulnerability: the proportion of households receiving a housing allowance, the median income of households, the share of low-income households, the share of students coming from a modest family, the share of active persons registered with the unemployment office, and the share of persons receiving social subsidies (e.g., disability subsidies, health insurance subsidies, or supplementary pension benefits). These variables are summarized in supplementary Table S1, which provides the thresholds used to determine whether each variable contributes to socio-economic vulnerability. The NSVI for each neighbourhood was operationalised as the sum of the six dichotomised variables (1 = low socio-economic level, 0 = otherwise), resulting in a score ranging from 0 to 6. In order to obtain groups of similar size, we defined four vulnerability groups: the reference group of neighbourhoods with an NSVI equal to 0, slightly vulnerable neighbourhoods with an NSVI of 1, moderately vulnerable neighbourhoods with an NSVI between 2 and 3, and highly vulnerable neighbourhoods with an NSVI higher than 3. The spatial distribution of this score can be found in Figure 1.

Confounders

Given that age above 65 years is one of the major risk factors for COVID-19 21 and that a larger proportion of retired persons may have socio-economic difficulties, we used the proportion of persons above 65 years as a confounder.

Statistical analysis

All statistical spatial analyses were performed using R, 31 with data.table for data management, the package spatialreg 32 for the spatial regression models, and sp and sf 33 to handle spatial data. Spatial autocorrelation of the outcomes was assessed using Moran's index. When the outcome was the number of tests, the number of COVID-19 cases, or the number of COVID-19-related deaths, the associations between the outcome and the covariates were assessed using a generalised negative binomial regression model to account for overdispersion.
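Before continuing with the model details, here is a minimal sketch of the NSVI construction just described, with paraphrased indicator names; the actual thresholds are in supplementary Table S1, and the published analysis used R:

```python
# Sketch of the NSVI: six dichotomised vulnerability indicators are summed
# into a 0-6 score, then binned into the four groups used in the analysis.
import pandas as pd

INDICATORS = [
    "housing_allowance", "low_median_income", "low_income_share",
    "modest_family_students", "unemployment", "social_subsidies",
]

def nsvi_groups(flags: pd.DataFrame) -> pd.Series:
    """flags: one row per neighbourhood; each indicator coded 0/1."""
    score = flags[INDICATORS].sum(axis=1)  # NSVI, 0..6
    return pd.cut(
        score,
        bins=[-1, 0, 1, 3, 6],  # 0 | 1 | 2-3 | 4-6
        labels=["reference", "slightly", "moderately", "highly"],
    )
```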
The incidence for these three outcomes was obtained by offsetting the regression with the neighbourhood population of interest. When the outcome was the positivity rate or the delay between symptoms and test, the associations between the outcome and the covariates were assessed using a standard linear model. When the outcome was the mean delay between symptoms and test, the regression was weighted by the number of available measures of delay in each neighbourhood. For all regressions, Moran's index of the residuals was computed to ensure that no residual spatial correlation remained. Policies may affect men and women differently, and disease incidence may vary by gender; 34 therefore, all associations between socio-economic vulnerability and COVID-19-related outcomes were examined separately for each wave and gender.

Role of the funding source

The funder had no role in study design; in the collection, analysis, and interpretation of data; in the writing of the report; or in the decision to submit the paper for publication. DM, SR, and DSC had full access to the dataset. All authors were responsible for the decision to submit for publication.

Results

There were 57 438 positive cases of COVID-19 (excluding repeated infections) in people living in Geneva during the period studied (11.3% of the population). Of these individuals, 769 died of COVID-19 since the beginning of the pandemic (0.14% of the population). In Geneva state, 242 821 residents tested negative (47.8% of the population), and 565 488 tests were performed. Table 2 details these numbers per epidemic wave for men and women. Amongst the 368 neighbourhoods of more than 90 people, 47.8% had a vulnerability score of 0 (Table 1). Figure 1, panel a, presents the vulnerability score for the considered neighbourhoods. Neighbourhoods in Geneva had a median incidence of 9.2 cases per hundred inhabitants since the beginning of the pandemic, a median incidence of 0.77 tests per inhabitant, a median positivity rate of 12%, and a median incidence of COVID-19 deaths of 0.056 per hundred inhabitants. The median of the mean delay between symptoms and tests was 3.1 days. The percentage of COVID-19-associated risk factors reported by positive COVID-19 cases increases with the NSVI (see Figure 2, panel a). Regarding occupations in Geneva, jobs involving contact with the public, such as receptionists, nursing, health care aides, cashiers, housekeepers, and service staff, are mainly occupied by women (see Figure 2, panel b).

Overall results

The vulnerable neighbourhoods of Geneva had a higher incidence of COVID-19 deaths (see Table 3). The incidence of COVID-19 cases increased for moderately vulnerable and highly vulnerable neighbourhoods, with COVID-19 case incidence multiplied by 1.1 when compared with reference neighbourhoods (53 cases per 100 inhabitants). As a consequence of this incidence increase and of the decrease in testing, the positivity rate significantly increased with vulnerability (see Table 4), with a clear dose-response effect: when compared with reference neighbourhoods (with a mean positivity of 13.5%), the positivity rate increased by 1.6% [95% CI 0.6, 2.6] for slightly vulnerable neighbourhoods, and by up to 2.8% [95% CI 1.9, 3.8] for highly vulnerable neighbourhoods. The mean delay between tests and symptoms did not vary with vulnerability.
Though this delay significantly decreased with the proportion of older residents, the effect size was very small, with a coefficient representing a decrease of less than 2 hours in the delay for an increase of 10% in this at-risk population.

Stratification by wave and gender

The stratification by wave and gender reveals clear differences between men and women and between the two epidemic waves.

First wave

During the first wave, the incidence of deaths was higher in vulnerable neighbourhoods for men and women (see Table 3). Furthermore, the difference between vulnerable neighbourhoods and reference neighbourhoods was larger for women than for men. For access to tests, the number of tests per person was significantly associated with vulnerability amongst women: women in vulnerable neighbourhoods were more likely to be tested than those in reference neighbourhoods (see Figure 3, panel a). This was not the case for men. The incidence of COVID-19-positive cases was higher for men and women in vulnerable neighbourhoods, with a stronger association amongst women (p < 0.001). The positivity rate did not show any clear association with the vulnerability indices (see Table 4). Amongst men, the mean time between symptoms and test was shorter for more vulnerable neighbourhoods, with a dose-response effect: moderately vulnerable neighbourhoods had 1.0 days [95% CI 0.2, 1.8] less delay between symptoms and tests than the reference neighbourhoods, whereas the difference was 1.4 days [95% CI 0.7, 2.0] for highly vulnerable neighbourhoods (Table 4). In contrast, the mean delay for women remained similar across all vulnerability groups.

Table 3: Incidence rate ratios [confidence intervals] for the vulnerability categories compared with the reference category, and for an increase of 1% in the proportion of the population over 65 years, when predicting COVID-19-related death incidence, test incidence, and case incidence, during the whole period of interest (overall: February 26, 2020 until June 1, 2021) for both men and women, and stratified by epidemic wave (first wave: Feb 26, 2020 to July 1, 2020; second wave: July 2, 2020 to June 1, 2021) and gender. Incidence is calculated as the number of deaths, tests, or cases divided by the population of interest (men and women, men, or women) of each neighbourhood. '*' indicates 0.05 > p > 0.01, '**' 0.01 > p > 0.001, and '***' p < 0.001.

Second wave

During the second wave, the association between vulnerability and death incidence was reduced for both men and women. The effect was no longer significant for women, and remained, although weaker, for men (death incidence multiplied by 1.6 [95% CI 1.1, 2.5] for highly vulnerable neighbourhoods; see Table 3). It should be noted that the reference neighbourhoods had a higher death incidence during the second wave (mean death incidence of 509 per million inhabitants for women and 599 per million inhabitants for men; see Fig. 3, panel d). A strong association was apparent between the proportion of the population above 65 years and COVID-19-related deaths in men and women, although the association for women increased compared with wave 1 (p < 0.001). The positivity rate increased in similar proportions with vulnerability for men and women (dose-response effect ranging from a 1.6 to 2.5 percentage-point increase when compared with reference neighbourhoods; see Table 4), but for different reasons.
For women, the association between positivity and vulnerability was mainly due to a graded relationship between vulnerability and case incidence (COVID-19 case incidence multiplied by 1.09, 1.11, and 1.15 for slightly, moderately, and highly vulnerable neighbourhoods), combined with a slight decrease in testing. For men, COVID-19 case incidence showed no association with vulnerability, but a clear decrease in test incidence was observed for vulnerable neighbourhoods, with the number of tests per person divided by 1.3 in the most vulnerable neighbourhoods. During the second wave, the mean delay between symptoms and tests decreased overall by a factor of 1.8 compared with the first wave and had a much lower dispersion between neighbourhoods (see Fig. 3, panel e). It no longer showed any association with vulnerability.

Discussion

In this register of all declared tests in a region of half a million inhabitants, neighbourhood socio-economic vulnerability was associated with an increased incidence of death, a lower incidence of testing, a higher incidence of COVID-19 cases, and a higher positivity rate. These results are consistent with previous findings 2,5,18,20,21 and confirm the recent observation of the inverse care law in Switzerland during the pandemic: 5 deprived neighbourhoods have less access to healthcare while being more at risk of infection and of severe disease. The stratification by gender and epidemic wave shows that this effect was modulated by the epidemic activity and the subsequent changes in public health policies, and that it differed for men and women. The lower access to tests for people living in poor socio-economic neighbourhoods was only observed during the second wave and went along with an increased positivity rate in deprived neighbourhoods. During the first wave, women from poor neighbourhoods were more likely to be tested than those from wealthy neighbourhoods, and men from poor neighbourhoods were on average tested more rapidly than men from rich neighbourhoods. Finally, the association between socio-economic vulnerability and COVID-19 death incidence was only observed during the first wave and was stronger amongst women. We here provide some potential hypotheses that could explain these results.

These stratified results must be analysed in the knowledge that the two pandemic waves had different contexts in terms of policies and testing capacities. During the first wave, the number of available tests was low and the population was under-tested. Priority was given to symptomatic cases and, during the shortage of reagents, to people with risk factors and to health care workers. Furthermore, a lockdown was implemented on March 26, then lifted on April 27, 2020, and was not re-implemented afterward. During the second wave, few restrictions were implemented, but the contact tracing of close contacts of persons infected by SARS-CoV-2 was systematised, leading to a 14-day quarantine.

The lower access to testing capacities of people living in vulnerable neighbourhoods probably finds its root in their occupational settings, the social distribution of which is structurally gendered. 35 Their jobs typically require a lower level of qualifications, are more likely to be manual for men, and tend to be public-facing for women (care, nursing, and health). During the first wave, such workers represented the large majority of the essential workers that were not locked down. 36
As a consequence, they had higher access to testing during this period of restricted testing capacities, as illustrated by the shorter delay between symptoms and tests for men, or by the higher number of tests for women from vulnerable neighbourhoods. For these women, this also came with a higher incidence, because people working in the nursing, home care services, and healthcare sectors were more exposed to the virus, as shown in Geneva by a previous study of SARS-CoV-2 antibody prevalence 36 and in other countries as well. 37,38 During the second wave, the same occupational settings had the opposite effect. Tests became widely available, and the lower ability of people from vulnerable neighbourhoods to work remotely exerted downward pressure on their access to COVID-19 tests. Indeed, testing positive implied being forced to stop working for 14 days and caused co-workers to quarantine. The economic cost of such measures may have implicitly or explicitly discouraged testing, explaining the lower testing incidence amongst men from vulnerable neighbourhoods during the second wave. This was less the case for women, whose professional activities are more public-facing and for whom testing was still recommended or even compulsory. On the other hand, this higher contact with the public resulted in a higher exposure to SARS-CoV-2 and thus an association between positivity and vulnerability similar to that of men.

The association between the incidence of COVID-19 deaths and socio-economic vulnerability could be the result of multiple factors. Air pollution could be one, as it seems to increase COVID-19 mortality 39 and has been shown to be higher in areas with a lower economic position. 40 Another factor could be the higher prevalence of risk factors associated with COVID-19 severity and mortality 41 amongst deprived populations. 42 These comorbidities have a higher incidence amongst men (e.g., obesity, 43 comorbidities, 44 chronic diseases, or other main COVID-19 risk factors 45,46), explaining their overall higher incidence of COVID-19 deaths. Biological factors, such as the role of oestrogens in the modulation of ACE2 expression 47 and regulation, 48 could also play a role. However, the stronger association between these comorbidities and socio-economic vulnerability for women 42 could explain the observed stronger link between their COVID-19 death incidence and socio-economic vulnerability. The lack of a significant association between COVID-19 deaths and vulnerability during the second wave stemmed mainly from an increase in the death incidence amongst non-vulnerable populations (from a mean incidence of 230 per million inhabitants during the first wave to 554 per million inhabitants during the second wave). Indeed, the second wave of COVID-19 in Geneva was strong, and the city had one of the highest COVID-19 case rates per inhabitant in Europe during this period. As a result, in such a high-transmission context, the population was probably more equally affected across the social strata.

The use of a register representative of all reported tests at a regional level, primarily serving operational needs with the aim of contacting all COVID-19 cases, is a solid asset of this study, as it reduced the risk of the selection bias affecting many COVID-19 studies. The use of an official index based on data from 2020 to identify socio-economic vulnerability is also a strength, as these data are contemporaneous with the pandemic.
Finally, the low percentage of missing data, especially of geocoded addresses, combined with the multiple imputation approach, strengthens our results. Despite these strengths, the ARGOS database has been influenced by the testing policy: individuals without risk factors for COVID-19 and those younger than 65 years were underrepresented in the database during the first wave. Furthermore, considering socio-economic vulnerability based only on geographically averaged variables can be seen as a limitation of our study, as it prevents us from studying individual aspects of vulnerability. It also does not incorporate other important aspects of social vulnerability, such as legal status, ethnicity, or working conditions.

People living in neighbourhoods with disadvantaged socio-economic conditions were more affected by COVID-19 than people living in wealthy environments in two ways: they tended to be more exposed to the disease and more at risk of severe disease, and they had lower access to testing. As the association between socio-economic conditions and COVID-19 was in part driven by people's occupational settings and living conditions, it can be modulated by economic and health policies but also by gender. Although men have an overall higher incidence of death, the difference between the two sides of the social ladder is greater for women. The difference in access to tests between wealthy and poor neighbourhoods can be hidden by global restrictions of testing capacities or by strong political measures such as lockdowns, or, on the contrary, enhanced by quarantine or isolation measures, which affect workers selectively depending on their ability to work remotely. Therefore, the effect of neighbourhood socio-economic conditions on access to healthcare must be considered in the light of social and health policies and at the intersection with gender. Thus, although public health policies are supposed to target populations and not individuals, this study highlights the need to tailor policies for specific groups.

Contributors

Denis Mongin performed the data curation and the analysis, created the data visualisation, designed the article, and wrote the first draft. Stéphane Cullati participated in the literature review and in the interpretation of the results, and critically revised the article. Michelle Kelly-Irving participated in the interpretation of the results and critically revised the article. Maevane Rosselet participated in the interpretation of the results and critically revised the article. Simon Regard participated in the study design and critically revised the article. Delphine S. Courvoisier acquired the financial support for the project, conceptualised the analysis, participated in the data interpretation and the article design, and critically revised the article. Denis Mongin, Simon Regard, and Delphine Courvoisier had full access to the dataset. Denis Mongin and Delphine Courvoisier verified the data. All authors were responsible for the decision to submit for publication.

Data sharing statement

The de-identified database underlying this article will be shared on reasonable request using the form (https://edc.hcuge.ch/surveys/?s=TLT9EHE93C).

Declaration of interests

We declare no competing interests.

Funding

This research was supported by the research project SELFISH, financed by the Swiss National Science Foundation, grant number 51NF40-160590 (LIVES centre international research project call).
Insulin treatment adherence in type 2 diabetes mellitus patients: Literature review

Hyperglycemia is a hallmark of diabetes, a metabolic disease caused by a lack of insulin secretion, impaired insulin action, or both. Although insulin is the most effective treatment for diabetes mellitus, most patients are reluctant to inject it. One problem that can arise from not taking insulin as prescribed is cardiovascular disease, a leading cause of morbidity and death in diabetics. The goal of this review is to analyze adherence to insulin therapy in diabetic individuals who receive it. The PICO strategy (population/problem, intervention, comparison, outcome) was applied, with searches of ScienceDirect, PubMed, Neliti, and Google Scholar using relevant keywords in Indonesian and English. The review found variable adherence of diabetes mellitus (DM) patients undergoing therapy; some participants adhered to insulin therapy regularly. In conclusion, common factors behind non-adherence to insulin therapy are fear of hypoglycemia and discomfort with, and fear of, insulin injections.

Introduction

Elevated blood sugar levels are a hallmark of diabetes, a chronic metabolic disease that gradually damages the heart, blood vessels, eyes, kidneys, nerves, and other organs. Hyperglycemia is a hallmark of diabetes, a metabolic disease caused by a lack of insulin secretion, a lack of insulin action, or both [1]. The World Health Organization (WHO) estimates that 422 million people worldwide have diabetes, and most sufferers live in low- and middle-income countries; every year, 1.5 million deaths are directly attributed to the disease. Over the past few decades, there has been an increase in the number and prevalence of diabetic patients. According to the International Diabetes Federation (IDF), 537 million people worldwide aged 20 to 79 years were living with diabetes mellitus (DM) in 2021 [2]. High blood sugar is the main driver of the chronic complications of diabetes.

Proper diabetes treatment can prevent chronic complications. Treatment of diabetes often focuses on maintaining stable blood sugar levels and preventing complications due to high blood sugar levels. Insulin is currently a useful therapy for people with diabetes mellitus (DM), but insulin injections are often rejected by patients. Insulin administration is still a major problem because many type 2 diabetes patients do not adhere to it in daily life [3]. Adherence is an important component of successful treatment for patients, including people with type 2 diabetes. If not adherent, the patient may lose the benefits of treatment and the condition may worsen further. Adherence to insulin therapy is the extent to which a person follows agreed advice from healthcare experts to undergo insulin therapy on time, adhere to a diet, or make lifestyle changes [4]. The success of treatment also depends on the person, including their knowledge of the disease and their adherence to treatment. Adherence is reflected in whether or not the patient's blood sugar levels are controlled [5].

Insulin therapy has a rapid onset of action: insulin promotes the conversion of glucose into glycogen, which is stored in the liver, thereby lowering blood sugar levels. However, some patients choose to discontinue insulin because they are averse to injections, which cause discomfort, fear, or stress. Diabetic patients who are not adherent to insulin therapy may lose control of their blood sugar levels more often than patients who are adherent [6]. To provide efficient treatment, minimize complications, and improve the quality of life of diabetic patients so that they maintain a stable condition, it is important to identify patients who do not adhere to the prescribed regimen [7]. Improving adherence to optimize treatment outcomes can help individuals with diabetes mellitus avoid problems [8].
Insulin therapy has a rapid onset of action: insulin converts glucose into glycogen and stores it in the liver, thereby lowering blood sugar levels. However, some patients choose to discontinue insulin because they are not comfortable with injections, fear them, or find them stressful. Diabetic patients who do not adhere to insulin therapy lose control of their blood sugar levels more often than adherent patients [6]. To provide efficient treatment, minimize complications from other diseases, and improve the quality of life of diabetic patients, it is therefore important to identify patients who do not adhere to the prescribed regimen [7]. Improving adherence to optimize treatment outcomes can help individuals with diabetes mellitus avoid such problems [8].

Method

This research uses a literature study design, selecting and analyzing articles that are relevant to and consistent with the research objectives. The selection process was adapted from the Preferred Reporting Items for Systematic Reviews (PRISMA) to search for and identify the articles included in the literature review. Inclusion criteria were publication in Indonesian or English matching the keywords adherence, insulin therapy, and diabetes mellitus. Four databases were searched: PubMed (33 articles), Google Scholar (58 articles), ScienceDirect (16 articles), and Neliti (13 articles). The retained articles address adherence to insulin use for controlling blood sugar levels. The following queries were used in each database: PubMed, ((Adherence) AND (Insulin Therapy) AND (Diabetes Mellitus)); Google Scholar, "Adherence of Insulin Therapy in Diabetic Patients"; ScienceDirect, "Adherence, Insulin Therapy, Diabetes"; Neliti, "Insulin Therapy, Diabetes". A total of 22 duplicate articles were removed and 29 full-text articles were excluded with reasons, leaving 5 articles whose content was relevant to the research topic.

Result

[Figure 1: Article selection process flowchart based on PRISMA-ScR]

Based on the studies reviewed, the research results can be summarized as follows. In one study, 62% of type 2 diabetes patients were willing to start insulin therapy, and there was a statistical difference between total scores for positive and negative attitudes toward insulin therapy (agree/disagree) and acceptance of insulin therapy (P < 0.05). Since 38% of type 2 diabetes patients refused to start insulin therapy, effective communication between doctor and patient as well as ongoing follow-up by healthcare providers appear necessary to increase positive attitudes toward insulin injections.
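The article-selection arithmetic from the Method section can be made explicit with a short tally. In the Python sketch below, the number of records excluded at the title/abstract screening stage is not stated in the review and is derived from the reported counts, so it should be read as an inferred value rather than a reported one.

```python
# PRISMA-style tally of the reported article-selection flow.
# The screening-stage exclusion count is derived, not reported.
records = {"PubMed": 33, "Google Scholar": 58, "ScienceDirect": 16, "Neliti": 13}

identified = sum(records.values())                  # 120 records identified
after_dedup = identified - 22                       # 22 duplicates removed -> 98
included = 5                                        # articles retained for the review
fulltext_excluded = 29                              # full-text articles excluded
fulltext_assessed = included + fulltext_excluded    # 34 assessed in full text
screened_out = after_dedup - fulltext_assessed      # 64 excluded at screening (derived)

print(f"identified={identified}, after deduplication={after_dedup}, "
      f"excluded at screening={screened_out} (derived), "
      f"full-text assessed={fulltext_assessed}, included={included}")
```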
Discussion

Adherence is one of the most important components of successful patient therapy, alongside other elements such as an appropriate treatment regimen, accuracy in medication selection, and patient support for a healthy lifestyle. In a study by Saibi et al. (2020), 65 respondents (37.1%) reported taking antidiabetic drugs with a high adherence rate, 71 respondents (40.6%) reported a moderate adherence rate, and 39 respondents (22.3%) showed a low level of compliance. The 110 respondents falling into the moderate and poor compliance categories can be examined to find the reasons behind their non-compliance [9]. These reasons included boredom with taking antidiabetic drugs regularly for a long time, sometimes for life; delays in obtaining drugs; running out of drugs because respondents were reluctant to return for treatment as usual; and low social support, so that no one reminded them to take their medicine. Some respondents and their families also did not understand the use of drugs such as injectable insulin. Injectable insulin requires daily injections, which can cause discomfort and fear. In addition, very busy working hours and falling asleep at night before taking medication further reduced adherence. Based on the study results, individuals with low medication adherence may experience uncontrolled blood sugar levels, while individuals with good adherence are able to maintain stable blood sugar levels; boredom was the most common cause of non-compliance among respondents [10].

Beyond the importance of insulin adherence for achieving treatment goals, there are other risk factors. In a study by Farsaei et al. (2014), Morisky Green test results showed that 99.4% of patients were adherent to insulin injections and only 2 patients were non-compliant. Factors showing a significant association with insulin adherence in the type 1 and type 2 diabetes groups included long-term intake, embarrassment, exacerbations after injections, forgetting or neglecting insulin on sick days, hypoglycemia, medication costs, overweight, insulin shortage, and difficulty in preparing insulin [11]. Simple insulin injection or administration did not by itself affect insulin therapy. The study showed that dissatisfaction with the time injections took, embarrassment, and difficulty in preparing injections were associated with decreased adherence in both type 1 and type 2 diabetes patients [12].

A study by Chefik et al. (2022) found that respondents with a more positive attitude toward insulin therapy had up to 4.55 times greater adherence to insulin therapy than respondents with a less positive attitude [7]. The study also found an adherence rate to insulin therapy of 121 (38.9%); non-adherence was mainly associated with a diabetes diagnosis of more than 15 years, a poor attitude toward insulin therapy, and suffering from type 2 diabetes, as well as with the absence of a home blood glucose measuring device, poor physical condition, limited knowledge of insulin therapy, non-adherence to regular hospital visits, and taking two types of drugs.
A study by Bermeo-Cabrera et al. (2018) classed 83 patients (41.5%) as adherent and 117 (58.5%) as non-adherent; only 22 patients (11%) received insulin therapy with excellent adherence [13]. The following elements were associated with non-compliance: lack of a daily activity plan (46.1%), fear of hypoglycemia (41%), economic factors (15.4%), and insulin consumption (2.31 vs. 1.76 per year). The main factors associated with discontinuation of insulin use were lower socioeconomic status, fear of hypoglycemia, and increased daily insulin doses. These results may help in designing interventions to address and correct insulin treatment non-adherence in type 2 diabetes patients [14].

A study by Davoudi et al. (2020) found some degree of adherence to insulin treatment in 62% of patients receiving insulin therapy, while 38% of patients did not adhere to it [2]. The main factors preventing patients from adhering to insulin use were fear of injections, fear of obesity, and difficulty handling insulin injections, including fear of incorrect injection technique, discomfort with injections, and fear of hypoglycemia after injections. Non-compliant patients also believed that insulin therapy might worsen their symptoms. Since 38% of type 2 diabetes patients were not adherent to insulin therapy, effective communication between physician and patient as well as ongoing follow-up by healthcare professionals should increase positive attitudes toward insulin injections [15].

Conclusion

Diabetes mellitus (DM) patients who receive insulin therapy generally report some degree of adherence to the treatment. Non-compliance with insulin therapy often arises from fear of hypoglycemia and from discomfort with and fear of insulin injections.

Disclosure of conflict of interest

No conflict of interest to be disclosed.
Neurodynamics of executive control processes in bilinguals: evidence from ERP and source reconstruction analyses

The present study was designed to examine the impact of bilingualism on neuronal activity in different executive control processes, namely conflict monitoring, control implementation (i.e., interference suppression and conflict resolution), and overcoming of inhibition. Twenty-two highly proficient but non-balanced successive French–German bilingual adults and 22 monolingual adults performed a combined Stroop/Negative priming task while event-related potentials (ERPs) were recorded online. The data revealed that the ERP effects were reduced in bilinguals in comparison to monolinguals, but only in the Stroop task and limited to the N400 and the sustained fronto-central negative-going potential time windows. This result suggests that bilingualism may impact the process of control implementation rather than the process of conflict monitoring (N200). Critically, our study revealed a differential time course of the involvement of the anterior cingulate cortex (ACC) and the prefrontal cortex (PFC) in conflict processing. While the ACC showed major activation in the early time windows (N200 and N400) but not in the latest time window (late sustained negative-going potential), the PFC became unilaterally active in the left hemisphere in the N400 and the late sustained negative-going potential time windows. Taken together, the present electroencephalography data lend support to a cascading neurophysiological model of executive control processes, in which the ACC and the PFC may play a determining role.

Introduction

The bilingual brain can distinguish and control which language is in use. For example, individuals who communicate in more than one language are able to produce words in the selected language and to inhibit the production of words in the non-selected language. This cognitive ability to control multiple languages is assumed to rely on the involvement of different cognitive processes. More generally, cognitive control, also known as executive functions, can be defined as a set of processes involved in managing processes and resources in order to achieve a goal; it is an umbrella term for the neurologically based skills involving mental control and self-regulation. Current psychological and neurobiological theories describe cognitive control either as unitary or as a system fractionated into different sub-processes. Alternatively, hybrid theoretical accounts, as proposed by Miyake et al. (2000), attempt to integrate both unifying and diversifying characteristics of executive functions. Miyake et al. (2000) postulate three main executive functions, namely inhibition of dominant responses ("inhibition"), shifting of mental sets ("shifting"), and monitoring and updating of information in working memory ("updating"). In the study presented in this paper, we examined cognitive inhibition as well as mechanisms for overcoming inhibition. One of the key discoveries in the human cognitive and brain sciences in the past 20 years is the increasing evidence from behavioral, neurophysiological, and neuroimaging studies for the plasticity of executive functions (Dahlin et al., 2008; Li et al., 2014). Psychological research has shown that the efficiency of executive control processes can be influenced, among other factors, by multiple language use (for reviews, see Costa et al., 2009; Kroll and Bialystok, 2013; Baum and Titone, 2014; Grant et al., 2014).
The rationale for an improvement of executive control processes in bilinguals is the following: both languages are activated to some degree in bilingual individuals (Van Heuven et al., 1998; Hoshino and Thierry, 2011); therefore, executive control processes are regularly solicited to maintain the target language(s) in a given interactional context and to avoid persistent bidirectional cross-language influences (Blumenfeld and Marian, 2013). This constant training may make these processes more efficient in the long run. A convincing argument in favor of such a bilingualism advantage in executive functioning is empirical evidence of shorter color naming times in conflicting trials of a Stroop task (i.e., incongruency between the color word and the ink color) in bi- than in monolinguals (Bialystok et al., 2008; Heidlmayr et al., 2014). It is, however, important to note that although a growing number of behavioral studies investigating control processes in bilingualism show that bilinguals perform better in many executive function tasks (Kovacs and Mehler, 2009; Prior and Macwhinney, 2009; Gathercole et al., 2010; Hernández et al., 2010; Isel et al., 2012; Kroll and Bialystok, 2013; Kuipers and Thierry, 2013; Marzecová et al., 2013; Heidlmayr et al., 2014), a significant number of studies failed to report such an advantage of bilingualism (Morton and Harper, 2007; Paap and Greenberg, 2013; Antón et al., 2014; Duñabeitia et al., 2014; Gathercole et al., 2014; for reviews, see Costa et al., 2009; Hilchey and Klein, 2011; Kroll and Bialystok, 2013; Valian, 2015). For example, in a large sample of 252 bilingual children (age 10.5 ± 1.8 years), using both a Classic Stroop task (with a linguistic component) and a Numerical Stroop task (without a linguistic component) to disentangle effects due to language processing from those due to control processes, Duñabeitia et al. (2014) failed to observe any group differences in overall response times (RTs), as well as in Stroop (incongruent vs. congruent) and Incongruity (incongruent vs. neutral) effect sizes, for both the Classic and the Numerical Stroop task. These findings contribute to the larger picture that a bilingual advantage is not systematically found in control tasks, and they suggest that we may still have much to learn about the diversity of the bilinguals we are testing. However, these findings also indicate that if a bilingual advantage is found in a Stroop task, it is not straightforward to explain it by reduced L1 language activation in bilinguals [cf. the weaker links hypothesis by Gollan et al. (2005)]. More importantly, the overall RT advantage in bilinguals compared to monolinguals on both congruent and incongruent trials seriously questions the conclusion that multiple language use may specifically improve performance in tasks presenting a conflict (see Hilchey and Klein, 2011, for a review). This overall RT advantage in some bilingual individuals suggests that these bilinguals are not better at conflict resolution in particular but rather that they may have either a "bilingual executive processing advantage", as proposed by Hilchey and Klein (2011), or a generally enhanced capacity for processing information independently of the presence of conflicting information.
The more general question we are asking here is whether there is a relationship between the use of multiple languages and the improvement of executive control efficiency, at least at some stages of second-language learning, or, more specifically, which kinds of control processes are improved by multiple language use. This assumption relates to Hilchey and Klein (2011), who claimed that many executive processes show a bilingual benefit, though not necessarily inhibition. In this paper, we will provide evidence for very specific bilingual benefits with respect to sub-processes of cognitive control. To account for the inconsistencies observed in the literature on bilingualism and executive functions, various methodological considerations can also be invoked. One of them is that until now most studies have used RTs as the dependent variable, which are known to result from a combination of multiple processes and sub-processes. In the present study, we recorded online electrical responses of the brain in order to trace the precise time course of the two sub-processes of interference control under investigation, namely conflict monitoring and interference suppression, and their neural underpinnings. More particularly, we recorded event-related potentials (ERPs) and the associated neuronal generators of ERP signatures while a group of French-German participants and their matched monolingual controls performed a Stroop task combined with a Negative priming paradigm. To study cognitive inhibition, and more particularly the overcoming of inhibition, the Negative priming paradigm, initially implemented in a Stroop task by Dalrymple-Alford and Budayr (1966), constitutes a suitable tool (Aron, 2007; for a review and for alternative explanations of the Negative priming effect, see MacLeod and MacDonald, 2000). The inconsistencies observed in the literature on bilingualism and executive functions can also result from considering bilingualism as a categorical variable, thus masking the impact of the multiple dimensions characterizing bilingual individuals. In the present study, we used correlation analyses to embrace the multidimensional facets of bilingualism. Over the past 20 years in cognitive psychology, neurophysiological and neuroimaging techniques have demonstrated their capacity to detect effects on a more fine-grained scale than various behavioral methods. In research on executive functions in monolinguals, three ERP signatures have been established repeatedly using different tasks. From a neurochronometric point of view, the first signature is the fronto-central N200 effect (i.e., a larger negative amplitude in the conflict compared to the non-conflict condition), assumed to reflect cognitive control (response inhibition, response conflict, and error monitoring; Boenke et al., 2009), whose main neuronal generator was found in the ACC (Folstein and Van Petten, 2008). The second ERP signature is the centro-parietal N400 effect usually found in Stroop studies (i.e., a larger negativity in the incongruent condition in comparison to the congruent or the neutral condition; Liotti et al., 2000; West, 2003; Hanslmayr et al., 2008; Appelbaum et al., 2009; Bruchmann et al., 2010; Coderre et al., 2011; Naylor et al., 2012; among others).
The N400 Stroop interference effect was interpreted as reflecting a higher cognitive cost in responding to stimuli in the incongruent condition, which usually causes a conflict between the two sources of information, the color word and the print color, in comparison to the congruent condition. The main neuronal generators of the N400 effect were found in both the ACC and the prefrontal cortex (PFC; Liotti et al., 2000; Markela-Lerenc et al., 2003; Hanslmayr et al., 2008; Bruchmann et al., 2010). Finally, a later ERP signature was also observed, namely a late sustained negative-going potential (540-700 ms), that is, a sustained fronto-central negative deflection in the incongruent condition compared to the congruent one (West, 2003; Hanslmayr et al., 2008; Naylor et al., 2012). Note that some studies also reported the inverse effect: a positive deflection over the centro-parietal scalp (Liotti et al., 2000; West, 2003; Hanslmayr et al., 2008; Appelbaum et al., 2009; Coderre et al., 2011). The late sustained negative-going potential has been proposed to reflect either the engagement of executive processes (Hanslmayr et al., 2008), conflict resolution processes (Coderre et al., 2011; Naylor et al., 2012), semantic reactivation of the meaning of words following conflict resolution (Liotti et al., 2000; Appelbaum et al., 2009), or response selection (West, 2003). Source localization has rarely been done for this late sustained negative-going potential, but there is some evidence that its main neuronal generators lie in the middle or inferior frontal gyrus and the extrastriate cortex (West, 2003). Recently, in an ERP study examining the impact of bilingualism on interference suppression using Stroop, Simon, and Eriksen flanker tasks, Kousaie and Phillips (2012) reported language group differences in conflict processing at the neurophysiological level (i.e., larger fronto-central N200 amplitudes and later P3 peak latencies for mono- than for bilinguals in a Stroop task) but not at the behavioral level. This finding suggests that neurophysiological measures can be more sensitive than behavioral measures. Moreover, in an ERP study also using a Stroop task, Coderre and van Heuven (2014) found a descriptively smaller N400 effect in bilinguals compared to monolinguals. In an MEG study using a Simon task, Bialystok et al. (2005) reported different correlations between the brain areas activated and reaction times when comparing bi- and monolinguals, indicating systematic differences in the activation of cognitive control areas (e.g., PFC, ACC) between the two language groups. In general, the positive correlation of faster reaction times with stronger activation in PFC and ACC in bilinguals corroborates the idea that bilingualism is associated with plasticity in cognitive control efficiency. Regarding the neuronal sources underlying bilingual language control, Abutalebi and Green (2008; see also Green and Abutalebi, 2013) formulated a neurocognitive model comprising a cerebral network including the ACC, the PFC, the basal ganglia (especially the caudate nucleus; see also Crinion et al., 2006), the bilateral supramarginal gyri (SMG), and the parietal lobe (in case of high attentional load). Note that this model is widely coherent with neurocognitive models of domain-general control (Shenhav et al., 2013). The present ERP study relies on an integrative theoretical account, the Adaptive Control Hypothesis, postulating that various control processes are involved in the use of multiple languages (Green and Abutalebi, 2013).
Our goal was to investigate the impact of bilingual experience on the neurodynamics of distinct control processes, i.e., conflict monitoring, interference suppression, overcoming of inhibition, and conflict resolution, by combining a Stroop task with a Negative priming paradigm and using a technique with high temporal resolution, namely electroencephalography (EEG). The experiment was administered to 22 late non-balanced French-German bilinguals and 22 French monolinguals. A correlational statistical approach, in which multiple dimensions inherent to bilingualism (i.e., linguistic, environmental, and demographic dimensions) were treated as continuous variables, was adopted to take into consideration the non-categorical nature of bilingualism. Based on previous studies, an N200 effect (conflict detection/conflict monitoring), an N400 effect (interference suppression), and a late sustained negative-going potential (conflict resolution) should be observed for both the Stroop and the Negative priming task. For bilinguals, smaller effect sizes were expected in the three time windows for the Stroop task, and even more so for the more costly Negative priming task, when compared to monolinguals. Finally, and critically, based on current assumptions on the functional relationship between ACC and PFC, we hypothesized that the ACC should monitor conflict and then communicate with the PFC for the implementation of control once the need has been identified. Thus, we predicted ACC activation especially for the early N200 and the N400 effect, while PFC activation was expected to mainly underlie the N400 component and the late sustained negative-going potential.

Materials and Methods

Participants

Forty-four right-handed (Edinburgh Handedness Inventory) participants were selected for the experiment and tested at Paris Descartes University, France. Among them were 22 successive French (L1)-German (L2) bilinguals and 22 French monolingual individuals, all of them living in France at the time of the experiment. The study was approved by the Conseil d'évaluation éthique pour les recherches en santé at Paris Descartes University, and participants gave their written informed consent prior to participation. By their own account, participants had no history of current or past neurological or psychiatric illness and had normal or corrected-to-normal vision and normal color vision. They were paid 10€ per hour or received course credits for their participation. As has been pointed out in previous studies, demographic factors such as socioeconomic status (SES; Morton and Harper, 2007) and environmental factors such as expertise in music (Bialystok and DePape, 2009), video game playing (Dye et al., 2009), and actively performing sports requiring high bimanual coordination (Diamond and Lee, 2011) are all critical factors for developing executive control mechanisms. Consequently, these factors were controlled in our study (see Table 1). Twenty-two successive French (L1)-German (L2) bilinguals (16 female) with an average age of 26.9 ± 5.5 years (range = 18-36 years) were tested. They were late learners of German who had started to study German from the age of 10 at secondary school in France. The mean age of acquisition (AoA) of their second language (L2) was 10.6 ± 0.7 years (range = 9-12 years).
Bilingual participants had regularly used their L2 German during the past 3 years and at present (20.9 ± 14.6% per day; see Table 1), and even though they were highly proficient in their L2 [self-evaluation: 1.7 ± 0.6 (1 = high proficiency to 5 = low proficiency); language test score: 83.0 ± 9.5%], they were non-balanced bilinguals. Language background data assessed with a language history questionnaire are summarized in Table 1. Twenty-two monolingual French native speakers (13 female) with an average age of 25.5 ± 4.4 years (range = 19-39 years), who had had little use of languages other than their L1 during the past 3 years and at present (0.6 ± 0.9% per day; see Table 1), were selected as the monolingual control group.

Stimuli

An adapted version of the original Stroop task (Stroop, 1935) was used in the experiment. The task consisted of manually responding to the print color of stimuli in four different conditions, namely congruent, incongruent, negative priming, and neutral. In the congruent condition, the meaning of the color word and the print color matched (e.g., ROUGE printed in red), while in the incongruent and negative priming conditions they did not (e.g., ROUGE printed in blue). In the negative priming condition, an incongruent stimulus (trial n) was preceded by an incongruent trial (trial n-1) serving as the negative prime: the color word that had to be inhibited in trial n-1 (e.g., 'red' in ROUGE printed in blue) was equal to the print color to be named in trial n (e.g., 'red' in BLEU printed in red). Therefore, the inhibition affecting the color 'red' in trial n-1 needed to be overcome to respond correctly to the print color in trial n. In the congruent, incongruent, and negative priming conditions, the following four color words were presented in L1, French: ROUGE (red), BLEU (blue), JAUNE (yellow), VERT (green), and their translation equivalents in L2, German: ROT, BLAU, GELB, GRÜN. In the neutral condition, four non-color words were presented in the same print colors as in the congruent and incongruent conditions, in L1, French: CHAT (cat), CHIEN (dog), MAIN (hand), PIED (foot), and their translation equivalents in L2, German: KATZE, HUND, HAND, FUSS. The stimulus words, written in capitals in the font 'Calibri' at font size 48, were presented individually against a black background in the center of the screen.

Procedure

Participants were seated in front of a 14" computer screen and instructed to perform a manual color response task: they had to indicate as quickly and as accurately as possible the print color of the stimulus word by pressing one of four color-coded response buttons (keys d, f, j, and k). The color-finger assignment was counterbalanced between subjects. Stimuli were presented with E-Prime 1.2 (Psychology Software Tools, Pittsburgh, PA, USA). Each trial started with a fixation cross presented in the center of the screen for 500 ms (Figure 1), which was then replaced by the stimulus word. The stimulus remained visible until one of the four color response keys was pressed (online RT), but maximally for 1500 ms. This was followed by an inter-trial interval (ITI) of 2300 ms showing a black screen. After the first 1000 ms of the ITI, a blink sign (a symbolized eye) was displayed for 300 ms. Participants were instructed to limit eye blinks to the interval starting with the blink sign until the end of the ITI, in order to reduce motor artifacts in the ERP response.

[Figure 1: Timing of a trial in the manual version of the Stroop task; two succeeding trials illustrate the Negative priming procedure.]
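For concreteness, the trial timing just described can be laid out as a small sketch. The experiment itself was programmed in E-Prime 1.2; the Python below is only an illustrative reconstruction of the stated parameters, and the example response latency of 734 ms is arbitrary.

```python
# Illustrative event schedule for one trial (all durations in ms, as stated
# in the Procedure; this is not the authors' E-Prime script).
FIXATION_MS = 500      # fixation cross
STIM_MAX_MS = 1500     # stimulus shown until response, at most 1500 ms
ITI_MS = 2300          # inter-trial interval (black screen)
BLINK_ONSET_MS = 1000  # blink sign appears 1000 ms into the ITI
BLINK_DUR_MS = 300     # blink sign shown for 300 ms

def trial_events(rt_ms):
    """Return (onset_ms, event) pairs for one trial, given the response time."""
    stim_off = FIXATION_MS + min(rt_ms, STIM_MAX_MS)
    return [
        (0, "fixation on"),
        (FIXATION_MS, "stimulus on"),
        (stim_off, "stimulus off / ITI starts"),
        (stim_off + BLINK_ONSET_MS, "blink sign on"),
        (stim_off + BLINK_ONSET_MS + BLINK_DUR_MS, "blink sign off"),
        (stim_off + ITI_MS, "trial ends"),
    ]

for onset, event in trial_events(rt_ms=734):  # arbitrary example response time
    print(f"{onset:5d} ms  {event}")
```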
To enable the participants to learn the color-key correspondences, two training blocks of 40 trials each were presented before the ten experimental blocks; if accuracy was below 80% after the second training block, training was repeated. For bilinguals, five experimental blocks featured words in German and the five other blocks featured words in French. For monolinguals all blocks consisted of words in French, but only five were selected for further analysis. In order to compare the language groups, only the procedure for the L1 (French) blocks is presented in the following. Each block consisted of 72 trials: 24 congruent, 12 incongruent, 12 negative priming, and 24 neutral stimuli, presented in a pseudo-randomized order. Online RT was defined as the interval between the onset of the stimulus word and the button press. Responses before 200 ms or after 1500 ms were coded as missing. We averaged the RTs for correct responses for each experimental condition across participants and across items. RTs outside a range of 2 standard deviations from the mean per participant were excluded from the statistical analysis.

Analysis of Behavioral Data

Two-way repeated measures analyses of variance (ANOVAs), including the within-subjects factor Condition (congruent, incongruent, negative priming, neutral) and the between-subjects factor Language group (bilingual, monolingual), were conducted for the dependent variables Error rate and RT. Moreover, in order to compare the behavior in the two languages of the bilingual participants, two-way repeated measures ANOVAs were conducted including the within-subjects factors Condition (congruent, incongruent, negative priming, neutral) and Language (L1, L2) for the dependent variables Error rate and RT.

ERP Recording

Electroencephalography was recorded using a Geodesics 64-channel sensor net and the software NetStation (Electrical Geodesics Inc., Eugene, OR, USA). All channels were referenced online against Cz; for data analysis, channels were re-referenced to an average reference. Electrode impedances were kept below 50 kΩ. Data were recorded at a sampling rate of 500 Hz with an online 0.1-80 Hz bandpass filter, and then filtered offline with a 0.5-35 Hz bandpass filter.

ERP Analysis

The continuous EEG was segmented into epochs from 200 ms pre-stimulus until 1500 ms post-stimulus onset and baseline corrected with the baseline set from -200 to 0 ms. Only trials with correct responses that were not contaminated by ocular or other movement artifacts were kept for further data analysis. Automatic detection was run, followed by a visual inspection of the segmented data. The total percentage of rejected trials was distributed equally over the four conditions (F < 1; congruent: 37.3 ± 16.9%, neutral: 38.3 ± 15.4%, incongruent: 38.0 ± 16.9%, negative priming: 37.6 ± 16.1%). This holds for trials rejected due to erroneous behavioral responses (congruent: 3.2 ± 3.0%, neutral: 3.3 ± 3.5%, incongruent: 3.8 ± 4.2%, negative priming: 3.6 ± 3.8%) as well as for those rejected due to artifacts in the signal (congruent: 34.0 ± 16.7%, neutral: 35.0 ± 14.9%, incongruent: 34.1 ± 16.4%, negative priming: 34.0 ± 15.6%).
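The offline steps described above (0.5-35 Hz bandpass, average reference, epoching from -200 to 1500 ms with pre-stimulus baseline correction, artifact rejection) map naturally onto MNE-Python for anyone wishing to run a comparable pipeline. The sketch below is a re-implementation under explicit assumptions, not the authors' NetStation/BESA workflow: the file name and event codes are hypothetical, and the fixed 100 µV rejection threshold stands in for the study's combination of automatic detection and visual inspection.

```python
# Minimal MNE-Python sketch of the described offline EEG pipeline.
# File name, event codes, and rejection threshold are illustrative assumptions.
import mne

raw = mne.io.read_raw_egi("subject01_stroop.raw", preload=True)  # hypothetical EGI file
raw.filter(0.5, 35.0)             # offline 0.5-35 Hz bandpass
raw.set_eeg_reference("average")  # re-reference to the average reference

events = mne.find_events(raw)
event_id = {"congruent": 1, "neutral": 2, "incongruent": 3, "neg_priming": 4}  # assumed codes

epochs = mne.Epochs(
    raw, events, event_id,
    tmin=-0.2, tmax=1.5,          # -200 to 1500 ms around stimulus onset
    baseline=(-0.2, 0.0),         # baseline correction on the pre-stimulus interval
    reject=dict(eeg=100e-6),      # crude amplitude-based artifact rejection (assumed)
    preload=True,
)
print(epochs)                     # remaining trial counts per condition
```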
In each experimental condition, the ERP activity was then averaged over stimuli and over participants (i.e., grand average ERP). Statistical analyses were conducted for three ERP signatures, for which the time windows were selected based on previous ERP studies of executive functioning and adjusted by visual inspection of the grand averages: N200 (200-300 ms), Stroop N400 (400-500 ms), and a late sustained negative-going potential (540-700 ms). For the three selected intervals, analyses were conducted on the ERPs from selected electrodes. All analyses were quantified using the multivariate approach to repeated measurement and followed a hierarchical analysis schema. In order to allow for an examination of hemispheric differences, the data recorded at the lateral recording sites were treated separately from the data recorded at the midline electrode sites. Analyses are presented for the Stroop effect (incongruent vs. congruent condition) and the Negative priming effect (negative priming vs. congruent condition) because our hypotheses were centered on these effects. For the lateral recording sites, for each time window, a four-way repeated measures ANOVA was conducted, including the within-subjects factor Condition (Stroop: incongruent, congruent; Negative priming: negative priming, congruent), the topographical variables Hemisphere (left, right) and Region (anterior, posterior), and the between-subjects factor Language group (bilingual, monolingual). For the midline electrodes, a three-way repeated measures ANOVA including the within-subjects factors Condition (Stroop: incongruent, congruent; Negative priming: negative priming, congruent) and Electrode (Fz, Cz, Pz) and the between-subjects factor Language group (bilingual, monolingual) was run for each of the three time windows of interest. Moreover, given that we had a hypothesis on differences between language groups based on previous studies, two-way repeated measures ANOVAs including the factors Condition (Stroop: incongruent, congruent; Negative priming: negative priming, congruent) and Language group (bilingual, monolingual) were run on each of the three midline electrodes in each time window. The dependent variable was the voltage amplitude averaged over each interval of interest. The Greenhouse-Geisser correction (Greenhouse and Geisser, 1959) was applied when evaluating effects with more than one degree of freedom in the numerator. Post hoc pairwise comparisons at single electrode sites were performed using a modified Bonferroni procedure (Keppel, 1991). A significance level of 0.05 was used for all statistical tests and only significant results are reported.

Source Analysis

Hanslmayr et al. (2008) proposed a dipole model (localizing neuronal source activity) for a Stroop task containing eight discrete dipoles in fixed locations: LOC/ROC (visual stimulus processing), LMC/RMC (manual response), ACC (cognitive control), LMTC (color processing), and LPFC/RPFC (cognitive control). This eight-dipole model is based on theoretical assumptions about the cognitive processes, and their neural correlates, involved in the execution of a Stroop task, and has been tested and partially confirmed by Bruchmann et al. (2010). Here, we applied a model with 10 regional sources, comprising the sources proposed by Hanslmayr et al. (2008) plus two further neuronal generators found to be involved in Stroop processing, the LIFG/RIFG (cognitive control, inhibition; Peterson et al., 2002), in order to capture the largest number of neuronal sources (Table 2).
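For reference, the resulting source model can be collected in a simple mapping from source label to the functional role assigned in the text. The anatomical expansions in parentheses are inferred from the standard abbreviations, and the fitted locations (the paper's Table 2) are deliberately omitted.

```python
# The 10 regional sources of the model and their stated functional roles.
# Anatomical expansions are inferred; Talairach coordinates are omitted.
regional_sources = {
    "LOC":  "visual stimulus processing (left occipital cortex)",
    "ROC":  "visual stimulus processing (right occipital cortex)",
    "LMC":  "manual response (left motor cortex)",
    "RMC":  "manual response (right motor cortex)",
    "ACC":  "cognitive control (anterior cingulate cortex)",
    "LMTC": "color processing (left middle temporal cortex)",
    "LPFC": "cognitive control (left prefrontal cortex)",
    "RPFC": "cognitive control (right prefrontal cortex)",
    "LIFG": "cognitive control, inhibition (left inferior frontal gyrus)",
    "RIFG": "cognitive control, inhibition (right inferior frontal gyrus)",
}
assert len(regional_sources) == 10
```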
Due to heterogeneous findings of peak activation in the ACC for the Stroop task in the previous literature, and in order to improve the variance explained by the source model, the coordinates for the regional source in the ACC were chosen from a meta-analysis of the Stroop task (Laird et al., 2005). In the present study, discrete source analysis was done with the Brain Electrical Source Analysis program (BESA, version 5.3, Megis Software, Heidelberg, Germany). Regional sources were seeded in fixed locations while their orientations were a free parameter. This theoretical model of regional sources explained 75.3% of the variance. In order to trace the neuronal generators of scalp ERP effects, statistical analyses using bootstrap confidence intervals (99%) were conducted using BESA (version 5.3) and the Waveforms toolbox for Matlab. The bootstrapping procedure was applied to investigate the source activation underlying the Stroop effect (incongruent vs. congruent) and the Negative priming effect (negative priming vs. congruent) on each neuronal source in our theoretical model (ACC, LPFC, RPFC, LIFG, RIFG, LMC, RMC, LOC, ROC, LMTC). The source ERP amplitude between two conditions was considered to be significantly different (p < 0.01) for intervals in which the confidence interval (99%) of the difference wave did not include zero.

Correlation Analyses

As we consider that taking bilingualism as a categorical variable and therefore conducting ANOVAs is a necessary but not a sufficient approach to explore the impact of bilingualism on neuronal measures of cognitive control, we additionally conducted correlation analyses between linguistic background measures and behavioral and neurophysiological Stroop and Negative priming effect sizes in bilinguals, with the following factors: the frequency of L2 and of L3 use, L2 proficiency, duration of immersion in an L2 environment, and age of immersion.

Results

Stroop Effect

In the time window 200-300 ms, neither the four-way ANOVA on lateral electrodes nor the three-way ANOVA on midline electrodes revealed any main effect or interaction involving the factors Condition or Language group. Two-way repeated measures ANOVAs on each of the midline electrodes did not reveal any Condition by Language group interaction or main effect of Language group. In the time window 400-500 ms, the four-way ANOVA on lateral electrodes revealed a main effect of Condition [F(1,42) = 13.46, MSE = 0.15, p < 0.001, ηp² = 0.243], reflecting a more negative amplitude in the incongruent compared to the congruent condition (Stroop effect). A Condition by Language group interaction further indicated that this N400 effect was present for monolinguals (Figure 3B) but not for bilinguals (p > 0.10; Figure 3A; see also Figure 3C). Given that we had a strong hypothesis on the modulation of the N400 effect between the two groups based on previous studies, we then conducted further two-way ANOVAs on electrodes neighboring the Cz electrode to determine whether the N400 effect was significant over other electrodes. These analyses revealed a significant Condition (incongruent, congruent) by Language group (bilingual, monolingual) interaction also on the electrode C1. A small ROI including the electrodes Cz and C1 was created, and we conducted a three-way ANOVA [Condition (incongruent, congruent), Electrode (Cz, C1), Language group (bilingual, monolingual)], which revealed a significant Condition by Language group interaction [F(1,42) = 5.92, MSE = 0.804, p < 0.05, ηp² = 0.123].
Post hoc analyses revealed a tendency toward a significant effect of Condition over this ROI for monolinguals (p = 0.078), while it was not significant in bilinguals (p > 0.10). In the time window 540-700 ms, neither the four-way ANOVA on lateral electrodes nor the three-way ANOVA on midline electrodes revealed any main effect or interaction involving the factors Condition or Language group. Two-way repeated measures ANOVAs revealed a main effect of Condition on the Cz electrode, reflecting a more negative amplitude in the incongruent compared to the congruent condition [Stroop effect; F(1,42) = 5.69, MSE = 0.570, p < 0.05, ηp² = 0.119]. Moreover, a Condition by Language group interaction on the Cz electrode [F(1,42) = 4.7, MSE = 0.57, p < 0.05, ηp² = 0.101] indicated that the late sustained negative-going potential was only significant in monolinguals (p < 0.01; Figures 3A-C). To test whether the interaction effect between Condition and Language group was significant over electrodes neighboring Cz, additional two-way ANOVAs were run. These analyses revealed a Condition by Language group interaction also on electrodes C1 and FC1. Creating a small ROI with these three electrodes, we conducted a three-way ANOVA [Condition (incongruent, congruent), Electrode (Cz, C1, FC1), Language group (bilingual, monolingual)], which revealed a significant Condition by Language group interaction [F(1,42) = 6.77, MSE = 1.094, p < 0.05, ηp² = 0.139]. Post hoc analyses showed that the incongruent condition was significantly more negative compared to the congruent condition in monolinguals (p < 0.001), while there was no significant difference in bilinguals (F < 1).

Negative Priming Effect

In the time window 200-300 ms, the four-way ANOVA on lateral electrodes did not show any main effect or interaction involving the factors Condition or Language group. The three-way ANOVA on midline electrodes revealed a significant Condition by Electrode by Language group interaction [F(2,84) = 3.9, MSE = 0.93, p < 0.05, ηp² = 0.085]. Post hoc analyses revealed a marginally significant Condition by Language group interaction on the Fz electrode [F(1,42) = 3.65, MSE = 0.79, p = 0.063, ηp² = 0.08] that was due to an effect inversion between the language groups (the negative priming condition being more negative compared to the congruent condition in bilinguals, while this effect was reversed in monolinguals). The two-way repeated measures ANOVAs on each of the three midline electrodes only revealed a main effect of Condition on the Cz electrode [F(1,42) = 8.17, MSE = 0.231, p < 0.01, ηp² = 0.163], reflecting a larger negativity in the negative priming condition compared to the congruent one (Negative priming effect). In the time window 400-500 ms, the four-way ANOVA revealed a significant Condition by Region interaction [F(1,42) = 14.63, MSE = 0.64, p < 0.001, ηp² = 0.258], indicating that the negativity was larger in the negative priming condition compared to the congruent one over the posterior electrodes. In the time window 540-700 ms, the four-way ANOVA (Condition, Hemisphere, Region, Language group) revealed a significant Condition by Region interaction [F(1,42) = 5.32, MSE = 0.55, p < 0.05, ηp² = 0.112], indicating that over the anterior scalp the negative priming condition was more negative compared to the congruent condition, while over the posterior scalp the negative priming condition was more positive compared to the congruent condition.
The three-way ANOVA (Condition, Electrode, Language group) revealed a significant main effect of Condition [F(1,42) = 5.55, MSE = 0.74, p < 0.05, ηp² = 0.117], indicating that the amplitude in the negative priming condition was more negative compared to the congruent condition (Figures 3A-C). Two-way repeated measures ANOVAs on each of the three midline electrodes did not reveal any main effect or interaction involving the factor Language group.

Correlation Analyses

One linguistic factor turned out to modulate the neurophysiological effect size in bilinguals: the frequency of L2 use was negatively correlated with the N400 Negative priming effect over the Pz electrode [r(22) = 0.424, p < 0.05; Figure 5A]. That is, the more bilinguals used their second language on a daily basis, the smaller was the N400 Negative priming effect.

Discussion

The present study aimed to investigate the impact of bilingual experience on the neurochronometry of different control processes, i.e., conflict monitoring, interference suppression, overcoming of inhibition, and conflict resolution. For this purpose, a combined Stroop/Negative priming task was administered to 22 late, highly proficient but non-balanced French-German bilinguals and 22 French monolinguals while event-related brain potentials were recorded. At the neurophysiological level, a bilingualism benefit was found, as revealed by reduced ERP effects in bilinguals in comparison to monolinguals, but this benefit was only observed in the Stroop task and was limited to the N400 and the late sustained potential ERP components. Moreover, and critically, we were able to show a differential time course of the activation of ACC and PFC in executive control processes. While the ACC showed major activation in the early time windows (N200 and N400) but not in the latest time window (late sustained negative-going potential), the PFC became unilaterally active in the left hemisphere in the N400 and late sustained negative-going potential time windows.

Event-Related Potentials

On the neurophysiological level, three effects were expected: a central N200 effect (more negative amplitude in the negative priming and the incongruent conditions compared to the congruent condition in the 200-300 ms time window), a centro-parietal N400 effect (more negative amplitude in the negative priming and the incongruent conditions compared to the congruent condition in the 400-500 ms time window), and a fronto-centrally distributed late sustained negative-going potential (more negative amplitude in the negative priming and the incongruent conditions compared to the congruent condition in the 540-700 ms time window). We expected to find reduced Stroop and Negative priming interference effects, reflecting a reduced cost in conflict processing, in bilinguals compared to monolinguals. An N200 effect was only observed for the Negative priming task (negative priming minus congruent). The increased negativity reported in the incongruent condition could be explained by an inhibition account (Aron, 2007) postulating that responses in a negative priming condition are usually delayed due to the necessity of overcoming previously applied inhibition in order to access response-relevant information. However, note that we did not find a longer latency in the incongruent condition in comparison with the congruent one.
Hence, the N200 Negative priming effect may reflect overcoming of inhibition and/or a high demand on conflict monitoring, processes that plausibly take place in negative priming trials but not in incongruent trials. Furthermore, an N400 effect was found for the Negative priming task (negative priming more negative than congruent; N400 Negative priming effect) as well as in the Stroop task (incongruent more negative than congruent; N400 Stroop effect). This replicates previous observations of a sensitivity of the N400 time window to Stroop interference (Liotti et al., 2000; Markela-Lerenc et al., 2003; Hanslmayr et al., 2008). Similarly, in the present study, the more negative N400 amplitude in the incongruent Stroop condition may reflect underlying inhibitory processes. Furthermore, consistent with previous findings, the N400 effect was larger for the more costly task, i.e., Negative priming (N400 Negative priming effect; negative priming more negative than congruent). The critical question of the present study concerned group differences: we observed smaller effect sizes for bilinguals in comparison with monolinguals, but only for the N400 and the late sustained negative-going potential ERP effects in the Stroop task. No group difference was found in the early time window of the N200. It is plausible that a smaller Stroop N400 effect reflects reduced orthographic interference that might be due to a more efficient inhibition of interfering information. Similarly, Coderre and van Heuven (2014) also reported a smaller Stroop N400 effect in bilinguals compared to monolinguals. Note that some authors label this incongruency effect a P3 effect; for example, Kousaie and Phillips (2012) found that the Stroop P3 peaked earlier in bilinguals as compared to monolinguals. Finally, a larger Stroop N400-like effect has been reported for children with learning disabilities as compared to age-matched controls, which was interpreted as reflecting interference control deficits (Liu et al., 2014). Correlation analyses between behavioral and neurophysiological effect sizes corroborate the idea that a smaller Stroop effect reflects better inhibitory capacities, in that an increasing behavioral Stroop effect was found to be reflected by an increasing N400 effect [at the Pz electrode; r(44) = 0.393, p < 0.01; Figure 5B] in the present study. Concerning the reduced late sustained negative-going potential effect observed in the Stroop task for bilinguals as compared to monolinguals, it is not easy to find a good interpretation, as there is a lack of consensus on the functional significance of this effect. Some authors have proposed that the late sustained negative-going potential may reflect stages of conflict resolution. Thus, the group differences we reported for the N400 and the late sustained negative-going potential might suggest that the bilinguals tested in our study incur less cost in dealing with the conflict present in a Stroop task. Taken together, for the Stroop task, a bilingual advantage has been found in the stages of conflict processing that are thought to reflect control implementation, involving interference suppression (N400 effect) and conflict resolution (late sustained negative-going potential). However, surprisingly, and against our predictions on task complexity, we failed to show, at both the behavioral and neurophysiological levels, a bilingual advantage in the Negative priming task, even though it is considered a more complex task.
Hence, the similarity of behavioral and electrical responses in the two groups in the Negative priming task could be an indicator that control processes specifically involved in this task may not be more efficient due to bilingual experience. Nonetheless, correlation analyses revealed a modulation of the Negative priming effect size with frequency of L2 use (positive correlation), which indicates that bilingual experience does have a certain impact on processes taking place in a Negative priming task, such as overcoming of inhibition, but that considering bilingualism as a categorical variable might not be sufficiently sensitive to capture this effect. Moreover, the heterogeneity in the monolingual group should not be neglected, in that 'monolingual' individuals nonetheless do have some basic foreign language experience, even if its extent was controlled to be as little as possible. This heterogeneity should, however, influence Stroop effects and Negative priming effects equally. The differences between language groups observed for Stroop but not for Negative priming effect sizes, though unexpected, may actually corroborate the idea that the bilingual advantage in the Stroop task is mainly due to differences in control efficiency and not to the lower activation of the linguistic component in bilinguals. The weaker links hypothesis by Gollan et al. (2005) predicts similar effects for Stroop and Negative priming effect sizes. Thus, if the use of more than one language, and consequently the reduced frequency of use of each single language in bilinguals, were the main cause of their Stroop benefits, a comparable reduction of the effect size should have been observed for the Negative priming effect sizes in the present study, which was not the case. Consequently, the differences between the two language groups appear to be attributable to differences in the efficiency of specific control processes involved in the different tasks. To account for the absence of a group difference for the neurophysiological N400 effect in the Negative priming task despite the observation of (1) a bilingualism advantage in the N400 Stroop task, i.e., a less complex task, (2) a negative correlation between frequency of L2 use and magnitude of the Negative priming N400 effect in bilinguals, and (3) a stronger involvement of the ACC in bilinguals than in monolinguals, we propose the following interpretation: we suggest that the specificity of the experimental constraints imposed by the Negative priming design plays a major role here. Whereas in the Stroop task incongruent trials were equally preceded by congruent or by neutral trials, in the Negative priming paradigm a negative priming trial was always preceded by an incongruent trial, due to the rationale of the paradigm (overcoming of information that was inhibited in the preceding incongruent trial). Thus, we propose that the absence of a group effect in the negative priming condition at the neurophysiological level could be due to the fact that the monolingual individuals were already in a mode of inhibition when they encountered a negative priming trial. Consequently, they benefited from a local advantage, so that they were able to manage the complexity of the Negative priming task as well as the bilinguals. The bilinguals, on the other hand, may have benefited less from this local advantage, as their inhibitory capacities are already at ceiling.
This post hoc explanation of a finding that turned out to be inconsistent with our primary hypothesis of task complexity may shed new light on the functioning of control processes. Indeed, it suggests that when we put monolinguals in an inhibition mode, they become able to manage a complex control task as efficiently as bilinguals. This means that, at least in the short term, the executive control processes involved in performing the Negative priming task were sufficiently efficient in monolinguals to reach the same level of control as that observed in bilinguals, who are usually assumed to present an advantage in cognitive control. At least in the short term, an advantage may thus also be found in monolinguals when they work in an inhibition mode. This would argue for neurophysiological plasticity of the cognitive control processes under investigation in the present study. Further work should attempt to disentangle the respective roles of second language use and mode of information processing in the improvement of executive control functioning, and explore the long-term impact of these factors in mono- and bilingual individuals.

Source Localization: Proposal of a Cascading Neurophysiological Model of Executive Control Processes in Bilinguals

As already found in previous studies (Folstein and Van Petten, 2008; Hanslmayr et al., 2008; Bruchmann et al., 2010; among others), the ACC as well as the PFC were main neuronal generators of the N200 and N400 Stroop and Negative priming effects in the present study (Figures 4 and 6). Moreover, the present data allow us to specify the time course of these main generators. While in the N200 and the N400 time windows the ACC showed high activation and was a main neuronal generator of the scalp interference effects, its activation did not play a major role in the late sustained negative-going potential time window (for similar findings on transient ACC activation in conflict trials, see Carter et al., 2000). The PFC, on the contrary, was a main neuronal generator of the scalp interference effects in the N400 and late sustained negative-going potential time windows. This pattern of ACC and PFC activation was mainly driven by the bilingual group. Thus, our data suggest that the ACC may play a major role in initiating transient control when conflict has been detected, while the PFC would be more active in implementing control once the need has been detected (i.e., applying inhibition and conflict resolution), which is in line with previous findings and theoretical accounts (Dreher and Berman, 2002; Botvinick, 2007; Carter and Van Veen, 2007; Abutalebi and Green, 2008; Shenhav et al., 2013). However, it has been shown that there are functional subdivisions of the ACC that behave differently according to task demands and are affected differently by task practice (Leung et al., 2000; Milham and Banich, 2005). Concerning the group differences observed in source activation but not in the behavioral data in the present study, note that a similar pattern of results has been reported in a previous MEG study (Bialystok et al., 2005). In this MEG study using a Simon task, Bialystok et al. (2005) found that the underlying neuronal processes in the Simon task were different for bilinguals compared to monolinguals, even if the groups did not differ in response speed. Bialystok et al.
(2005) found that the language group differences consisted not only in the differential intensity of activation of the areas involved in performing the Simon task, but even more so in the pattern of areas that were involved. Beyond differences in other areas, in bilinguals as well as monolinguals the incongruency effect was reflected by activation in the left PFC and ACC (among others), but this activation was stronger in bi- than in monolinguals. It is particularly interesting that our data are compatible with these observations, since we were using a different task, the Stroop task, which is, however, comparable to the Simon task in that both involve conflict processing and are thought to necessitate interference suppression among the executive functions. Our results are in line with those of Bialystok et al. (2005) in two ways, both of which especially concern the differential involvement of the ACC in the two groups: (1) the differences in the pattern of control region involvement in bilinguals and monolinguals when performing the Stroop task, and (2) the more salient difference in source activation between the incongruent and the congruent condition in bilinguals as compared to monolinguals, a group difference that is, however, not reflected at the behavioral level. These findings indicate that multiple language use impacts the activation of the neuronal basis of domain-general control processes not only quantitatively, potentially leading to more efficient control, but also seems to qualitatively modulate the activation of the control network. The absence of a behavioral bilingual advantage in the present study may be due to the fact that behavioral measures constitute the end-product of a combination of sub-processes, which could mask effects that are difficult to trace because of the intrinsic heterogeneity of bilingual participants (for similar findings, see Gathercole et al., 2010; Coderre and van Heuven, 2014; Duñabeitia et al., 2014; however, other studies did find a bilingual advantage in a Stroop task, see Bialystok et al., 2008; Heidlmayr et al., 2014; Yow and Li, 2015; for a review, see Paap and Greenberg, 2013). Further investigation of the neuronal processes in bilingualism should include fMRI studies, to obtain higher spatial accuracy, as well as functional connectivity analyses. Indeed, Crinion et al. (2006) suggested that subcortical regions like the left caudate may play a crucial role in monitoring and controlling the language in use. Consequently, the description of the neuronal network supporting executive control processes in language control cannot escape a better understanding of how different cortical (ACC and PFC, among others) and subcortical (left caudate) brain areas communicate in monitoring and controlling the language in use.

[Figure 6: Schematized view of control processes and their neuronal underpinnings in a Stroop task. Scalp ERP effect sizes (Stroop effect, Negative priming effect) for each of the three ERP components (N200, N400, late sustained negative-going potential) and the main underlying neuronal sources for the scalp ERP effects are plotted; a schematized view of the time course of Stroop conflict processing is displayed below. Data are collapsed over Language group (n = 44). ACC, anterior cingulate cortex; LPFC, left prefrontal cortex. *p < 0.05; **p < 0.01; ***p < 0.001. Image credit for the schematized image of pressing a button: Download Clipart (2013).]
Summing up, bilinguals seem to benefit from higher efficiency in their neuronal and cognitive processing of control implementation, namely interference suppression and conflict resolution because of their experience in handling two languages on a daily basis. However, there appears to be less of an advantage in conflict monitoring, at least for the type of bilinguals selected and the paradigm used in the present study. Moreover, this advantage of bilingualism was not observed in the Negative priming task. Yet, future research using different neuroimaging techniques should help to give a more detailed account of the current findings in trying to characterize the relation between conflict monitoring and interference suppression and the impact of bilingualism in each of these processes. Identifying the neuronal sources of these processes as well as their connectivity with higher precision would be of greatest interest. Moreover, the requirement to deal with linguistic conflict or complexity is not limited to the case of bilingualism but control processing is also crucial in handling within-language interference, as it has been shown for ambiguity resolution in the domains of semantics (Rodd et al., 2010), and syntax (January et al., 2009), but also for phonology, and phonetics (e.g., tongue twisters, Acheson and Hagoort, 2014). Whether control processing involved in managing between-versus within-language interference is quantitatively and/or qualitatively different is still unclear. Further behavioral and neuroscientific research will be necessary to advance our understanding of the similarities and differences between bilingual and monolingual language control. Conclusion The present findings are partially in line with previous studies demonstrating a bilingual advantage on interference control, and more specifically interference suppression. We were able to show a bilingual advantage in the Stroop task but only in the N400 and the late sustained negative-going potential time windows. Unexpectedly, however, we failed to find a bilingual benefit in the Negative priming task, though considered a more complex task. We proposed that this lack of an effect may be due to the specific task demands of the Negative priming task. Nevertheless, the current results are compatible with the hypothesis that bilingualism enhances efficiency of domain-general cognitive control because the neuronal network of general control and the multiple language control network largely overlap (Abutalebi and Green, 2008). Interestingly, we were able to confirm an activation of ACC and PFC which Dreher and Berman (2002) have already established in an fMRI study using a task-switching paradigm, with the Stroop and the Negative priming paradigm, allowing to test conflict monitoring and interference suppression. One of the innovative contributions of our study is the demonstration that there are differential time courses of the involvement of ACC and PFC in conflict processing. While the ACC showed major activation in early time windows (N200 and N400) but not in the later one (late sustained negative-going potential), the PFC became active in the left hemisphere in the N400 time window and in the late sustained negative-going potential time windows. 
This chronometric finding adds an important piece to the puzzle for theories of the functional relationship between ACC and PFC, which postulate that the ACC would participate in conflict monitoring and communicate with the lateral PFC, which would in turn implement cognitive control (Shenhav et al., 2013; for a schematic overview, see Figure 6). Further research, combining fMRI and ERP measures, will be necessary to study, with both high temporal and spatial resolution, the neurochronometry of the cognitive control network, involving amongst others the ACC and the PFC. Moreover, our results are a valuable contribution to the bilingualism literature in that we were able to show that there are specific control processes that seem to be involved in and improved by multiple language use, while this may not be the case for other control processes. However, caution is in order before drawing firm conclusions regarding the relation between multiple language use and the efficiency of executive functioning. In the present study, the contradictory findings between tasks challenge the view of a systematic bilingualism advantage. Rather, the task differences can be explained by assuming that the efficiency of executive functions could also be improved in monolinguals when the experimental design leads them to expect a need for inhibition, thus encouraging an inhibition strategy. Thus, in future research, it will be relevant to apply a battery of tasks tapping into different cognitive and executive functions while taking into consideration the multidimensional characteristics of bilingualism. Such an approach will improve our understanding of the impact of multiple language use on the plasticity of cognitive control efficiency. To sum up, the main contribution of the present study is threefold: our findings indicate that (1) studying the neurodynamics of conflict processing with high temporal resolution can help us disentangle different sub-processes of conflict processing, (2) cascading models appear to capture essential aspects of the time course of neuronal source activation in conflict processing, and (3) bilinguals seem to perform better on specific control processes while performing equally to monolinguals on others. Hence, our findings are a valuable contribution to the executive function literature in general and to new theoretical accounts of the neurodynamics of executive control in bilingualism in particular.
A Note on Particles and Scalar Fields in Higher Dimensional Nutty Spacetimes In this note, we study the integrability of geodesic flow in the background of a very general class of spacetimes with NUT-charge(s) in higher dimensions. This broad set encompasses multiply NUT-charged solutions, electrically and magnetically charged solutions, solutions with a cosmological constant, and time dependant bubble-like solutions. We also derive first-order equations of motion for particles in these backgrounds. Separability turns out to be possible due to the existence of non-trivial irreducible Killing tensors. Finally, we also examine the Klein-Gordon equation for a scalar field in these spacetimes and demonstrate complete separability. Introduction Taub-NUT solutions arise in a very wide variety of situations in both string theory and general relativity. NUT-charged spacetimes, in general, are studied for their unusual properties which typically provide rather unique counterexamples to many notions in Einstein gravity. They are also widely studied in the context of issues of chronology protection in the AdS/CFT correspondence. Understanding the nature of geodesics in these backgrounds, as well as scalar field propagation, could prove to be very interesting in further exploration of these spacetimes. There is a strong need to understand explicitly the structure of geodesics in the background of black holes in Anti-de Sitter space in the context of string theory and the AdS/CFT correspondence. This is due to the recent work in exploring black hole singularity structure using geodesics and correlators in the dual CFT on the boundary [1][2][3][4][5][6]. Black holes with charge are particularly interesting for this type of analysis since the charges are reinterpreted as the R-charges of the dual theory. The class of solutions dealt with in this paper also include black holes that carry both NUT and electric charges in various dimensions, and could prove very interesting in this sort of analysis. In this paper we explore a very general metric describing a wide variety of spacetimes with NUT charge(s). In addition further metrics can also be obtained from these through various analytic continuations (which does not affect separability as demonstrated for these class of metrics). As such, the study of separability in this set of spacetimes encompasses the cases of both singly and multiply NUT-charged solutions, electrically and magnetically charged solutions with NUT parameter(s), solutions with a cosmological constant and NUT parameters(s), and time dependant bubble-like NUT-charged solutions. Many of these describe very interesting gravitational instantons. Some of these solutions include static backgrounds, while others are time-dependant and provide very interesting backgrounds for studying both string theory and general relativity. Some of these solutions, especially the bubble-like ones, are particularly interesting in the context of string theory as they arise in the context of topology changing processes. e.g. they show up as possible end states for Hawking evaporation., and they show up in transitions of black strings in closed string tachyon condensation. We study the separability of the Hamilton-Jacobi equation in these spacetimes, which can be used to describe the motion of classical massive and massless particles (including photons). We use this explicit separation to obtain first-order equations of motion for both massive and massless particles in these backgrounds. 
The equations are obtained in a form that could be used for numerical study, and also in the study of black hole singularity structure using geodesic probes and the AdS/CFT correspondence. We also study the Klein-Gordon equation describing the propagation of a massive scalar field in these spacetimes. Separation again turns out to be possible with the usual multiplicative ansatz. Separation is possible for both equations in these metrics due to the existence of nontrivial second-order Killing tensors. The Killing tensors, in each case, provide an additional integral of motion necessary for complete integrability. There has been a lot of work recently dealing with geodesics and integrability in black hole backgrounds in higher dimensions, both with and without the presence of a cosmological constant [7][8][9][10][11][12][13][14][15][16]. Of particular note in the context of this paper are [12,14], which deal with black holes with NUT parameters in some special cases. This work extends, and generalizes, some of the results obtained in these papers. Overview of the Metrics The class of metrics dealt with in this paper, and their generalizations obtained via analytic continuations, have been constructed and analyzed in [17][18][19][20][21][22], as well as some references contained therein. We will very briefly describe the metrics, and some of the various types of spacetimes that can be obtained from them. As mentioned earlier, separability for all the metrics is addressed by dealing with the class we do here, since analytic continuations do not affect separability of either the Hamilton-Jacobi or Klein-Gordon equation (though they do affect the physical interpretations of the various variables and their associated conserved quantities). The general spacetimes we study are described by the metric (2.1), a very general class of metrics in even dimensions in which the (φ_i, θ_j) sector takes a specific form built from the two-dimensional spaces M_i; the explicit functions entering the metric, together with an expression for F(r), can be found in [21] along with a detailed description. Generalizations to include electric charge are obtained by suitably modifying F(r), and can be found in [20,22]. Metrics describing "bubbles of nothing" also fall under this class and can be found in [19]. Examples of NUT-charged spacetimes in cosmological backgrounds also fall in this framework and can be found in [19]. For the purposes of analyzing separability, some odd dimensional NUT-charged spacetimes also fall under this category. For instance, in five dimensions (i.e., p = 2) a NUT-charged spacetime is obtained by taking g_2(θ_2) = 0 and N_2 = 0; the resulting metric describes a spacetime in an AdS background, and similar dS and flat background spacetimes can be obtained by following the prescriptions in (2.2) while maintaining g_2(θ_2) = 0 and N_2 = 0. Generalizations to higher odd dimensional spacetimes are obvious. Various twists of these spacetimes can also be obtained through analytic continuations. For instance, using the prescriptions t → iθ, θ → it, we can obtain time-dependent bubbles. In five dimensions in an AdS background, a number of examples can be obtained via this prescription together with a few other suitable, obvious variable redefinitions. For future use, we also record the determinant of the metric (2.1) and the components of the inverse metric, given in (2.5) and (2.6) respectively. These formulae are somewhat tedious to derive, but can be proved using a few Maple calculations, and then using mathematical induction [23].
The Hamilton-Jacobi Equation and Separability The Hamilton-Jacobi equation in a curved background is written in terms of the inverse metric, the action S associated with the particle, and an affine parameter λ along the worldline of the particle. Note that this treatment also accommodates the case of massless particles, where the trajectory cannot be parametrized by proper time. Separability We can attempt a separation of coordinates by writing S as a sum of terms, each depending on a single coordinate. After some manipulation, we can recursively separate out the equation into the relations (3.4). For future reference we will use the notation K = Σ_{i=1}^{p} K_i. Also note that for the metrics obtained through the analytic continuations discussed earlier, the issue of separability is clearly not affected. However, for an analytic continuation of the form t → iθ, θ → it, we need to replace E → −iL_θ, and the energy is no longer conserved as we have a time-dependent background. However, now the angular momentum L_θ associated with θ is conserved. Similar substitutions need to be made for any other analytic continuations or variable redefinitions used to define the new metrics. The Equations of Motion To derive the equations of motion, we write the separated action S from the Hamilton-Jacobi equation in the form (3.6). To obtain the equations of motion, we differentiate S with respect to the parameters m², K_i, E, L_{φ_i} and set these derivatives equal to other constants of motion. However, we can set all these new constants of motion to zero (following from the freedom in the choice of origin for the corresponding coordinates, or alternatively by changing the constants of integration). Following this procedure, we obtain the equations of motion (3.7). It is often more convenient to rewrite these in the form of first-order differential equations, (3.8), obtained from (3.7) by direct differentiation with respect to the affine parameter. The general class of metrics discussed here is stationary and "axisymmetric"; i.e., ∂/∂t and the ∂/∂φ_i are Killing vectors and have associated conserved quantities, −E and L_{φ_i}. In general, if ξ is a Killing vector, then ξ^μ p_μ is a conserved quantity, where p is the momentum of the particle. Note that this quantity is first order in the momenta. The additional constants of motion K_i, which allow for complete integrability of the equations of motion, are not related to Killing vectors arising from cyclic coordinates. These are instead associated with second-order Killing tensors, satisfying ∇_{(λ}K_{μν)} = 0, where K is any second-order Killing tensor and the parentheses indicate complete symmetrization of all indices. The Killing tensors can be obtained from the expressions for the separation constants K_i in each case. If the particle has momentum p, then the Killing tensor K^{μν} is related to the constant K via K = K^{μν} p_μ p_ν. We can use the expression for the K_i in terms of the θ_i equations. For the Taub-NUT metrics analyzed above, the expression for K_i follows from (3.4); thus, from (4.2) we can read off the components of the corresponding Killing tensors. We can easily check, using Maple [23], that the Killing tensors do satisfy the Killing equation. Note that if any of the NUT parameters N_k were zero, then the corresponding Killing tensor K_k would simply be the usual Killing tensor of the underlying two-dimensional space M_k (which is a reducible one in the case of a homogeneous constant curvature space M_k, as is the case in many situations here). In general, however, a non-zero NUT parameter N_k provides a nontrivial coupling between the (r, φ_i, θ_i) sectors, and the existence of the Killing vectors ∂_{φ_i} and ∂_t alone is not enough to ensure complete separability.
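The displayed equations of this and the following section did not survive text extraction; for orientation, the generic textbook forms they follow are sketched below in LaTeX (up to signature and normalization conventions). These are the standard curved-background Hamilton-Jacobi equation with an additive separation ansatz, the Killing-tensor relations that supply the extra quadratic constant of motion, and the curved-space Klein-Gordon equation with the multiplicative ansatz used for the scalar field; the paper's specific radial and angular functions are not reproduced here.

```latex
\[
  \frac{\partial S}{\partial \lambda}
   + \frac{1}{2}\,g^{\mu\nu}\,\partial_\mu S\,\partial_\nu S = 0 ,
  \qquad
  S = \frac{1}{2} m^{2}\lambda - E\,t + \sum_{i=1}^{p} L_{\phi_i}\phi_i
      + S_r(r) + \sum_{i=1}^{p} S_{\theta_i}(\theta_i) ,
\]
\[
  \nabla_{(\lambda} K_{\mu\nu)} = 0 , \qquad
  K = K^{\mu\nu} p_\mu p_\nu ,
  \qquad
  \frac{1}{\sqrt{-g}}\,\partial_\mu\!\left(\sqrt{-g}\,g^{\mu\nu}\,\partial_\nu\Psi\right) = m^{2}\Psi ,
  \qquad
  \Psi = e^{-iEt}\,R(r)\prod_{i=1}^{p} e^{\,i L_{\phi_i}\phi_i}\,\Theta_i(\theta_i) .
\]
```

Here −E and the L_{φ_i} are the momenta conjugate to the cyclic coordinates t and φ_i, the Killing-tensor condition is what makes K conserved along geodesics, and substituting the product ansatz into the wave equation reproduces the same separation constants K_i that appear in the Hamilton-Jacobi analysis.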
It is the existence of these nontrivial irreducible Killing tensors K_i that provides the additional separation constants K_i necessary for the complete separation of each space M_i from any other space M_j, as well as the complete separation of the angular sectors from the radial sector. These tensors are irreducible since they are not simply linear combinations of tensor products of Killing vectors of the spacetime. The Scalar Field Equation Consider a scalar field Ψ in a gravitational background, with an action in which we have also included a curvature-dependent coupling. However, in these (Anti-)de Sitter and flat backgrounds with charges, R is constant (proportional to the cosmological constant Λ). As a result we can trade off the curvature coupling for a different mass term. So it is sufficient to study the massive Klein-Gordon equation in this background. We will simply set α = 0 in the following. Variation of the action leads to the Klein-Gordon equation. Using the explicit expressions for the components of the inverse metric (2.6) and the determinant (2.5), the Klein-Gordon equation for a massive scalar field in this spacetime can be written out explicitly. We assume the usual multiplicative ansatz for the separation of the Klein-Gordon equation, and can then completely separate it, with the K_i again appearing as separation constants. At this point we have completely separated out the Klein-Gordon equation for a massive scalar field in these spacetimes. We note the role of the Killing tensors in the separation terms of the Klein-Gordon equation in these spacetimes. In fact, the complete integrability of geodesic flow in these metrics via the Hamilton-Jacobi equation can be viewed as the classical limit of the statement that the Klein-Gordon equation in these metrics also completely separates. Conclusions We studied the complete integrability properties of the Hamilton-Jacobi and the Klein-Gordon equations in this very general class of NUT-charged spacetimes and found that both separate completely. This is due to the enlarged dynamical symmetry of the spacetime. We construct the Killing tensors in these spacetimes which explicitly permit complete separation. We also derive first-order equations of motion for classical particles in these backgrounds. It should be emphasized that these complete integrability properties are a fairly non-trivial consequence of the specific form of the metrics, and generalize several such remarkable properties for other previously known metrics. Further work in this direction could include the study of higher-spin field equations in these backgrounds, which is of great interest, particularly in the context of string theory. Explicit numerical study of the equations of motion for specific values of the black hole parameters could lead to interesting results. The geodesic equations presented can also readily be used in the study of black hole singularity structure in an AdS background using the AdS/CFT correspondence.
Machine Vision Systems for Industrial Quality Control Inspections. In this paper we introduce Machine Vision Systems (MVS) for industrial quality control inspections and present new perspectives opened by recent developments in Artificial Intelligence (AI). A brief literature review is provided, which indicates substantial growth in new machine vision studies, and an improved workflow is proposed to incorporate these findings. Beyond already existing machine vision solutions, there is room to increase detection in quality control inspection and to reduce current implementation constraints and technical limitations. The paper presents new MVS developments and shows that a deeper understanding of AI and MVS limitations is needed to provide a clearer path for future studies. Introduction Visual inspection is an important industrial process used to recognize defective parts, assure the quality conformity of a product, and fulfill customer demands [1][2]. In assembly and manufacturing activities, product and process inspection are usually performed by human inspectors, but due to fatigue, small parts, small details, hazardous inspection conditions, and process complexity, this task may not achieve the desired quality, and some types of product non-conformities may be almost impossible to detect. In these cases a machine vision solution is recommended [1][3][4]. A Machine Vision System (MVS) consists in applying computer vision to industrial solutions [5]. MVS can be used to perform visual inspection and support industrial and factory performance, consequently improving product quality outcomes. To meet industrial expectations, MVS have been used to reduce product quality problems through improved inspections. The inspection system must be adapted to a scenario with a wide variety of product features and high-speed production and assembly lines, whose environment variables are complex from the MVS perspective [5][6]. The purpose of this article is to provide an overview of MVS concepts and their current status for industrial inspection, identifying the main Artificial Intelligence (AI) concepts and their application to quality in the automotive industry. 2 Theoretical foundation Machine Vision System concepts and applied technologies MVS have become imperative in many modern manufacturing facilities as a form of automatic quality inspection. These systems are integrated into the manufacturing process, through which all products must pass. Most systems consist of a camera (or cameras), a PC, and usually a controlled lighting environment within an enclosure [7]. Machine vision with artificial intelligence techniques Machine learning and deep learning are data-driven artificial intelligence techniques which may be applied to MVS. Both techniques use neural network architecture concepts, which transform raw data into representative information for decision making. Machine learning can be applied to feature extraction and classification, where each step is constructed separately and may be highly dependent on expert knowledge. On the other hand, deep learning is applied to both the feature extraction and classification steps as a unified neural network solution which requires minimal human interference. Figure 2 shows a comparison between machine learning and deep learning [8].
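To make the contrast between the two routes concrete, the sketch below puts an expert-designed feature pipeline (a few intensity/edge statistics feeding an SVM) next to a model that learns directly from raw pixels. It is only an illustration of the two workflows described above, written against scikit-learn; the images, labels, and feature choices are placeholders rather than anything used in the reviewed studies.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

def handcrafted_features(img: np.ndarray) -> np.ndarray:
    """Expert-designed features for a grayscale part image (machine-learning route)."""
    gy, gx = np.gradient(img.astype(float))
    edges = np.hypot(gx, gy)                      # crude edge-intensity map
    return np.array([img.mean(), img.std(),       # global brightness statistics
                     edges.mean(), edges.max()])  # rough measure of surface texture/defects

# Placeholder data: 200 synthetic 64x64 "part images", label 1 = defective.
rng = np.random.default_rng(0)
images = rng.random((200, 64, 64))
labels = rng.integers(0, 2, size=200)

# Route 1 (machine learning): features engineered separately, then a classifier.
X_feat = np.stack([handcrafted_features(im) for im in images])
feature_based = SVC(kernel="rbf").fit(X_feat, labels)

# Route 2 (deep-learning style): one model learns its own representation from raw pixels.
X_raw = images.reshape(len(images), -1)
end_to_end = MLPClassifier(hidden_layer_sizes=(128, 32), max_iter=300).fit(X_raw, labels)

print(feature_based.predict(X_feat[:3]), end_to_end.predict(X_raw[:3]))
```

In a deployed MVS the second route would typically be a convolutional network trained on labelled defect images, but the division of labour, hand-crafted features plus a classifier versus a single learned model, mirrors the comparison in Figure 2.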
One major drawback of an MVS without artificial intelligence techniques is that it cannot learn from the images it processes. New image details or fault types that appear after the initial MVS setup may not be detected, which can lead to incorrect information output, whereas an MVS with learning features has the potential to learn from newly incoming images. Golbani and Asadpour [4] proposed a block diagram for typical vision system operation at a time when artificial intelligence techniques were still being developed for MVS, as shown in Figure 3. A new diagram was built containing the original blocks together with the new AI techniques identified in the recent literature. In this new framework it is important to emphasize the need for an image knowledge database, containing object features and quality criteria definitions, which may be used to assist and improve the current learning methods [1][8]. 1-D and 2-D MVS industrial applications One-dimensional (1-D) and two-dimensional (2-D) MVS have a wide range of applications such as measurement, surface and depth inspection, thermal inspection, and robot vision. Each kind of application has its own characteristic equipment with a different image-gathering source, such as photoelectric sensors, lasers, cameras, and so on. Some MVS source types and applications are shown in Figure 4. Camera-based MVS for industrial applications are commonly used to verify the presence or absence of components, verify whether components are in their correct position and orientation, verify whether components have the desired colors, analyze and recognize image content such as bar codes, and inspect the size and measurements of parts and assembly components [4]. Photoelectric and laser-based MVS inspections can also be used to verify the presence or absence of components, check component positioning, and verify desired colors, but are mainly used to measure parts. 3-D MVS industrial applications Optical non-contact 3-D measurement techniques have been used to measure an image of an object and extract its geometrical information. They can be divided into passive and active 3-D sensing systems: passive systems work with the natural lighting of the scene, without controlling the light that reaches the inspected object, while active sensing systems use an external light source, such as a laser or a known projected light, measuring the speed of light or laser coherence, or applying triangulation techniques [10]. Structured light is an active 3-D sensing system which illuminates the object with predefined patterns and analyses how these patterns are deformed by the object when observed from an angle different from that of the projection. Some systems adopt non-visible structured light to avoid interfering with other computer vision systems [11]. Stereo vision profilometry techniques simulate human vision through a setup of two cameras angled with respect to each other, which aims to identify and match common features of an object in images from multiple viewpoints, allowing the object to be reconstructed through triangulation techniques [11]. Stereo vision is normally a passive 3-D sensing system, but there are newer camera setups which also use a structured light projected onto the object, turning them into active stereo vision [7]. Another active vision technique is based on time-of-flight light measurement. This technique uses light pulses with a known camera range, so that the time for the emitted light to travel from the camera, hit the object, and be reflected back to the camera is measured; based on the fixed and known speed of light, the distance can be calculated [7].
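As a concrete illustration of the triangulation and time-of-flight principles just described, the snippet below applies the idealized textbook relations for a rectified stereo pair (depth = focal length × baseline / disparity) and for a pulsed time-of-flight sensor (distance = speed of light × round-trip time / 2). The numbers are made up for the example; real systems add calibration, lens-distortion correction, and noise handling.

```python
# Idealized relations for two of the 3-D sensing approaches discussed above.
C = 299_792_458.0  # speed of light in m/s

def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth from a rectified stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_px * baseline_m / disparity_px

def tof_distance(round_trip_s: float) -> float:
    """Pulsed time-of-flight: the pulse travels to the object and back, so d = c * t / 2."""
    return C * round_trip_s / 2.0

# Example: 1200 px focal length, 10 cm baseline, 24 px disparity -> 5 m depth;
# a 20 ns round trip -> about 3 m range.
print(stereo_depth(1200.0, 0.10, 24.0), tof_distance(20e-9))
```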
Light coding imaging is also an active 3-D sensing system, but instead of using light pulses it keeps the light source constantly turned on. It also uses an infrared-spectrum emitter and receiver, and analyses the lens distortion, the emitted light pattern, the distance between object, emitter, and receiver, and the deformation of the light over the inspected object [7]. MVS industrial evaluation Pérez et al. [7] compared several 3-D machine vision techniques applied to industrial environments, emphasizing which factors need to be considered in order to select the most adequate vision system: system accuracy, working distance, image output, and system advantages and limitations. An overview of this evaluation is shown in Figure 5, along with the current status of machine vision systems in the automotive industry. Environmental light influence is one of the major problems for MVS; only a few systems are not subject to external light conditions. Another constraint that may affect MVS selection is the need for both the camera and the object to remain static, which, in some cases, is the exception in industry. Accuracy and working distance are variables that must be taken into account because, depending on the precision needed, the range of MVS solution options narrows down. 2.6 MVS applications to Industry 4.0 The fourth industrial revolution, or Industry 4.0, aims to develop intelligent factories with upgraded manufacturing technologies through new features such as cyber-physical systems (CPSs), the Internet of Things (IoT), Big Data, and cloud computing. New manufacturing systems propose the simultaneous monitoring of physical processes controlled by digital technologies, able to make smart decisions through real-time communication and interaction between humans, machines, or any smart device [12]. Figure 6 contains a simplified Industry 4.0 diagram, adapted from more complex diagrams available in the literature. Machine vision is located in the IoT layer. Its function is to provide image data through a connected network to a big data cloud server. These data are subjected to mining and cleaning procedures, removing unnecessary information. The resulting information can be used as an input for machine learning techniques, allowing an integrated system to detect and describe what happened to the product, determine why it happened, predict what may happen, and prescribe which actions must be taken. MVS future solutions, such as knowledge-driven decision-making, real-time control, online advanced analytics, and artificial intelligence in CPS, are considered challenging to implement. It may take from 3 to 10 years for industries to achieve a concrete degree of maturity and obtain a fully operational system with these functionalities [14]. 3 Methodological approach In order to obtain a detailed comprehension of the theme and to direct future studies, a systematic review is proposed. The keywords identified in the theoretical foundation were grouped into three fields of study: Industrial Quality Control (IQC), AI, and Machine Vision (MV). Papers available through the Scopus database were chosen, covering the period from 2007 to 2017.
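As a concrete sketch of the screening logic used in this methodological approach (deduplication plus the requirement that an article match at least one keyword from each of the three groups, detailed further below), the following pandas fragment shows one possible implementation. The column names, file name, and keyword lists are placeholders, not the actual search strings used in the review.

```python
import pandas as pd

# Placeholder keyword groups; the real lists come from the theoretical foundation.
GROUPS = {
    "IQC": ["quality control", "inspection", "defect"],
    "AI":  ["machine learning", "deep learning", "neural network"],
    "MV":  ["machine vision", "computer vision", "image processing"],
}

def screen(records: pd.DataFrame) -> pd.DataFrame:
    """Drop duplicates, then keep only articles whose title/abstract hits
    at least one keyword from every group (the 'AI+MV+IQC' intersection)."""
    df = records.drop_duplicates(subset="title").copy()
    text = (df["title"].fillna("") + " " + df["abstract"].fillna("")).str.lower()
    for name, words in GROUPS.items():
        df[name] = text.apply(lambda t: any(w in t for w in words))
    return df[df[list(GROUPS)].all(axis=1)]

# Usage with an illustrative export (columns 'title' and 'abstract' are assumed):
# results = screen(pd.read_csv("scopus_export.csv"))
```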
Figure 1 shows each keyword used for each group. Keyword combinations from the same group were not used in this paper, in order to increase the likelihood of significant search results. The primary search results totaled 60,176 articles, but they contained duplicate values. Before removing duplicate values, results which belonged to more than one search group were labelled as 'AI+MV+IQC'. After that, the duplicates were removed and only the unique articles remained. Figure 2 shows these non-duplicated publications distributed over the analysed years. The graph indicates an increasing number of publications for each group. The artificial intelligence and machine vision keyword combination presented the most relevant increase, mainly after 2012. It can be observed that vision systems applied to industrial quality through AI technologies tend to remain relevant in the following years. We then filtered the search results to articles labelled by Scopus as journal publications, reducing the set to 16,760 articles. Using the ISSN information of each journal and the database extracted from SCImago, a link between the databases could be established for the remaining search results. With the SCImago information another filter was applied, selecting only journals that have been evaluated by SCImago, for a total of 8,549 articles. A content analysis with detailed keyword identification was then performed on the remaining articles. Titles and abstracts with keywords related to medicine, human feature identification, and non-industrial applications were used to create an exclusion filter, because they were not related to this paper, resulting in 3,350 articles. New relevant keywords were also identified and added to the initial keywords used to create the first database. With the updated groups, a filter requiring at least one keyword from each group was applied, and 289 results were found. Table 1 contains the keywords used in the last filter and Table 2 summarizes the database results of all filtering steps. Table 1. Search results for machine vision applications and trends. Table 2. Search results for machine vision applications and AI and IQC trends. Initial Results During the abstract reading step, solutions integrating AI and IQC applied to MVS were found. One proposed solution used genetic programming with machine learning to develop and modify preprocessing programs. It has the ability to adapt to new production parameters and changes in lighting conditions through an automated preprocessing framework. This solution has the potential to solve common MVS problems [15]. One common aspect of most of the articles is that they only provide solutions for a specific type of industry, such as welding systems or PCB fault detection, and these sometimes do not have the potential to be applied to other scenarios. Another important factor detected in the articles is the absence of common key performance indicators (KPIs) with which to evaluate how accurate each MVS solution is. A further point identified is the absence of a method for selecting the best AI solution framework for each kind of application. Figure 3 shows that, of the 289 results, only a few articles provide a framework or method as an important part of the abstract. Conclusion MVS solutions applied to quality inspection in industry can be improved through existing and new technologies such as AI, increasing inspection detection rates, processing speed, and the capacity to detect new types of defects.
There are many project restrictions and technical constraints in MVS implementation, which must be considered during the selection, implementation, and validation steps of industrial solutions; otherwise the system may not perform as expected. Although MVS solutions with AI applied to IQC exist, they tend to be exclusive to each scenario, and they lack a common KPI to evaluate their performance or the ability to be replicated in other industrial scenarios. A method for selecting the most appropriate type of AI technique, and for integrating it with existing or new MVS, is still lacking. 5 Future work An initial detailed review of the current research is necessary in order to identify the main AI frameworks and techniques adopted for MVS and IQC, the KPIs used to evaluate MVS performance, how much MVS with AI are improving industrial KPIs, and what the main lines of research are. Fig. 2. Publications in each field of application over the years. Fig. 3. Publications in each field of application over the years.
Bronchoscopic lung volume reduction using an endobronchial valve to treat a huge emphysematous bullae: a case report Background In patients with chronic obstructive pulmonary disease (COPD), bronchoscopic lung volume reduction (BLVR) techniques using unidirectional endobronchial valves improve lung function and increase exercise tolerance. BLVR treatment is included in the Global Initiative for Chronic Obstructive Lung Disease (GOLD) treatment guidelines for COPD patients without interlobar collateral ventilation. However, BLVR using an endobronchial valve has not been attempted in patients with giant bullae. Case presentation We report successful and safe BLVR using an endobronchial valve in a patient with a huge bullous emphysema in the right middle lobe. A 65-year-old male was diagnosed with COPD 5 years prior and had a large bullae in the right middle lobe at that time. During regular follow-up, the symptoms of respiratory distress gradually worsened, and the size of the bullae gradually increased on computed tomography (CT). Therefore, we decided to treat the patient via BLVR using an unidirectional endobronchial valve. The Chartis system (Pulmonx, Inc., Palo Alto, CA) confirmed the absence of collateral ventilation of the right middle lobe. We successfully inserted an endobronchial valve into the right middle bronchus. After insertion, the bullae decreased dramatically in size, and the patient’s symptoms and quality of life improved markedly. Conclusion This case supports recent suggestions that BLVR can serve as a good alternative treatment for appropriately selected patients. Background Chronic obstructive pulmonary disease (COPD) is characterized by persistent respiratory symptoms and airflow limitations caused by a combination of small airway disease (chronic bronchiolitis) and parenchymal destruction (emphysema) [1]; the latter involves permanent enlargement of the airspace in the distal parts of the terminal bronchioles because of destruction of the alveolar sacs. Emphysema triggers the loss of elastic tissue, airway collapse, and difficulties in gas exchange [2]. Standard treatments for COPD include smoking cessation, inhaled drugs such as long-acting beta-agonists and anti-muscarinic agents, pulmonary rehabilitation, and long term oxygen therapy. Lung volume reduction surgery (LVRS) improves the survival of patients with upper lobe-predominant emphysema and a low exercise capacity [1]. In such patients, LVRS affords a greater survival benefit than other medical treatments. However, non-upper lobe predominant emphysema is the sole predictor of operative mortality [3]. For selected patients with decreased lung function, advanced emphysema refractory to medical therapy or an inability to undergo LVRS, bronchoscopic lung volume reduction (BLVR) using an unidirectional endobronchial valve improves exercise tolerance and lung function [4,5]. Valve implantation prevents hyperinflation by blocking inspiratory airflow to the targeted lung, and allows air to escape during exhalation. By reducing the size of a hyperexpanded emphysematous lung, the remaining lung re-expands and overall lung function is improved [6]. BLVR treatment is included in the Global Initiative for Chronic Obstructive Lung Disease (GOLD) treatment guidelines for COPD patients without interlobar collateral ventilation. Park et al. studied the relatively long-term outcomes of BLVR with endobronchial valve placement in Koreans with severe emphysema; the procedure proved safe and effective [7]. 
However, patients with giant bullae (greater than 5 cm in diameter) were excluded; no attempt to use BLVR/endobronchial valve placement to treat patients with giant bullae has yet been reported. Here, we report successful and safe BLVR using an endobronchial valve in a patient with a huge bullous emphysema in the right middle lobe. Case presentation A 65-year-old male diagnosed with COPD 5 years prior was admitted to our hospital in November 2017. He had stopped smoking 2 years prior, but had a smoking history of 80 pack years. He had been taking indacaterol/ glycopyrronium once daily and had been on 3.5 L/min home oxygen therapy for 2 years. In the past year, he had experienced two acute exacerbations that required hospitalization. A pulmonary function test (PFT) conducted in October 2017 revealed severe obstructive lung disease: the ratio of forced expiratory volume in 1 s (FEV1) to forced vital capacity (FVC) was 29%, the FEV1 was 0.41 L (percentage of predicted FEV1, 13%), the residual volume (RV) was 6.43 L (percentage of predicted RV, 275%); the total lung capacity (TLC) was 8.23 L (percentage of predicted TLC, 135%), and the percentage of predicted diffusing capacity of carbon monoxide (DLCO) was 23%. Arterial blood gas analysis revealed a pH of 7.413, PaCO2 of 53.8 mmHg and PaO2 of 65.4 mmHg. Chest computed tomography (CT) performed in May 2017 indicated severe centrilobular emphysema in both lungs with huge bullae in the right middle lobe (Figs. 1c and e). The maximum area of the huge bullae in the axial view was 15.0 × 10.1 cm. On CT, the bullae became larger over time and the right lower lobe parenchyma became increasingly compressed. The fissure around the right middle lobe (the target lobe) was intact on chest CT. We decided to perform BLVR using an unidirectional endobronchial valve. Atropine 0.5 mg was administered 30 min before bronchoscopy to minimize bronchial secretions. To locally anesthetize the oropharynx, we delivered 2 mL of lidocaine using a nebulizer. Commencing with bronchoscopy, we administered midazolam 3 mg for sedation and instilled lidocaine 10 mL for local anesthesia of the vocal cords and large airway. Using the Chartis system (Pulmonx, Inc., Palo Alto, CA), a catheter with a balloon at the distal tip was placed in the entrance of the right middle bronchus. The balloon was inflated to block the airway and the distal airflow, resistance, and pressure were measured. A gradual decrease in flow and increase in resistance and pressure were observed. Thus, we confirmed the absence of collateral ventilation of the right middle lobe [8]. The right middle bronchus was sufficiently small to be blocked by a single endobronchial valve; for this, we selected a Zephyr 5.5 endobronchial valve (Pulmonx). After endobronchial valve insertion, chest X-rays were taken for 3 days and 1 week later, and confirmed that the size of the huge bullae had decreased dramatically ( Fig. 1a and b). Two months after valve insertion, we performed chest CT, which showed that the endobronchial valve was located in the right middle bronchus, and that the huge bullae in the right middle lobe had disappeared. The volume of the right middle lobe had thus decreased but that of the compressed right lower lobe had re-expanded ( Fig. 1d and f ). We performed a PFT at 2 months after valve insertion. The ratio of the FEV1 to FVC increased from 29 to 32%, and the FEV1 increased by 170 mL from 0.41 L (percentage of predicted FEV1, 13%) to 0.58 L (19%). 
The RV decreased from 6.43 L (percentage of predicted RV, 275%) to 4.74 L (201%), and the TLC decreased from 8.23 L (percentage of predicted TLC, 135%) to 6.61 L (108%). The symptoms and quality of life improved markedly, and no valve migration or obstruction, pneumonia, or pneumothorax has been noted to date. Discussion Conventionally, the pathophysiology of bullous emphysema is considered to involve a valvular obstruction that allows air to enter, but not exit, the cystic space. However, a new hypothesis suggests that bullous emphysema is associated with free airway communication, caused by a compliance higher than that of the surrounding lung [9]. For bullae that are not fed by an airway, endobronchial valve treatment will probably not be effective. However, in the present case, the size of the bulla increased over time, indicating airway communication. Thus, when bullae are large, contained within a lobe, and likely fed by an airway, lobar volume reduction and reduction of the volume of the bullae are essentially identical (i.e., the underlying mechanism/intention is the same). Although differences in survival rates or long-term outcomes between patients undergoing BLVR and LVRS have not been documented, BLVR appears to effectively reduce mortality and improve the quality of life of patients who do not respond to medical treatment and are at high risk of surgical complications [5]. Klooster et al. found that endobronchial valve placement in appropriately selected patients significantly improved lung function and exercise capacity [5]. To optimize clinical outcomes, strict selection criteria are required when identifying patients who might benefit from BLVR [10]. Flandes et al. proposed the following criteria: symptomatic COPD; modified Medical Research Council (mMRC) dyspnea score > 1; emphysema-predominant phenotype; no more than two COPD exacerbations annually; severe airflow obstruction; hyperinflation and air trapping evident on the PFT; no smoking for at least 6 months; absence of prior lung surgery on the side of the target lobe; and no collateral ventilation as confirmed by the Chartis system [11]. Our patient fulfilled all of these criteria, so we performed lung volume reduction using an endobronchial valve after confirming the absence of collateral ventilation. Two case reports using BLVR to treat giant bullae in the right middle lobe have appeared worldwide. Tian et al. and Hou et al. successfully performed lung volume reductions using endobronchial valves in patients with giant bullae [12,13]. Our results are superior, in that our patient had lower basal pulmonary function and a larger bulla, and exhibited greater improvement in pulmonary function, compared to the patients in the cited case reports. Fig. 1 (a, c and e) Chest X-ray taken before the procedure (November 2017) and computed tomography (CT) scans taken in May 2017 (transverse and sagittal views, respectively) indicated severe emphysema and a huge bulla in the right middle lobe compressing the right lower lobe parenchyma. (b, d and f) A chest X-ray taken 1 week after the procedure (November 2017) and a CT scan taken 2 months after the procedure (January 2018; transverse and sagittal views, respectively) showed that the huge bulla in the right middle lobe had disappeared. This caused the volume of the right middle lobe to decrease, while the compressed right lower lobe re-expanded.
Conclusion This case study supports recent suggestions that BLVR can serve as a good alternative treatment for appropriately selected patients.
Reciprocal Regulation between lncRNA ANRIL and p15 in Steroid-Induced Glaucoma Steroid-induced glaucoma (SIG) is the most common adverse steroid-related effect on the eyes. SIG patients can suffer from trabecular meshwork (TM) dysfunction, intraocular pressure (IOP) elevation, and irreversible vision loss. Previous studies have mainly focused on the role of extracellular matrix turnover in TM dysfunction; however, whether the cellular effects of TM cells are involved in the pathogenesis of SIG remains unclear. Here, we found that the induction of cellular senescence was associated with TM dysfunction, causing SIG in cultured cells and mouse models. Especially, we established the transcriptome landscape in the TM tissue of SIG mice via microarray screening and identified ANRIL as the most differentially expressed long non-coding RNA, with a 5.4-fold change. The expression level of ANRIL was closely related to ocular manifestations (IOP elevation, cup/disc ratio, and retinal nerve fiber layer thickness). Furthermore, p15, the molecular target of ANRIL, was significantly upregulated in SIG and was correlated with ocular manifestations in an opposite direction to ANRIL. The reciprocal regulation between ANRIL and p15 was validated using luciferase reporter assay. Through depletion in cultured cells and a mouse model, ANRIL/p15 signaling was confirmed in cellular senescence via cyclin-dependent kinase activity and, subsequently, by phosphorylation of the retinoblastoma protein. ANRIL depletion imitated the SIG phenotype, most importantly IOP elevation. ANRIL depletion-induced IOP elevation in mice can be effectively suppressed by p15 depletion. Analyses of the single-cell atlas and transcriptome dynamics of human TM tissue showed that ANRIL/p15 expression is spatially enriched in human TM cells and is correlated with TM dysfunction. Moreover, ANRIL is colocalized with a GWAS risk variant (rs944800) of glaucoma, suggesting its potential role underlying genetic susceptibility of glaucoma. Together, our findings suggested that steroid treatment promoted cellular senescence, which caused TM dysfunction, IOP elevation, and irreversible vision loss. Molecular therapy targeting the ANRIL/p15 signal exerted a protective effect against steroid treatment and shed new light on glaucoma management. Introduction Steroids, one of the most prescribed drugs, are mainly used in the treatment of various autoimmune and inflammatory conditions. It has been reported by National Institute on Drug Abuse (NIDA) that 1.6% of the population has been treated with steroids and this number keeps growing [1]. Despite its numerous benefits, steroid usage can cause many adverse effects on the eyes, the most important being steroid-induced glaucoma (SIG) [2][3][4]. SIG is defined as a type of open-angle glaucoma. Topical treatment with glucocorticoids introduces an intraocular pressure (IOP) increase in 30-40% of the normal population to different extents [5][6][7]. Among these steroid responders, about 3% of cases that develop will continue to have high IOP and irreversible SIG manifestations [8], i.e., optic disk cupping, attenuation of retinal nerve fiber layer (RNFL), visual field defects, and vision loss after stopping steroid treatment [9]. Considering such catastrophic visual damage and the related social burden [10], it is of great importance to investigate the molecular mechanisms underlying SIG. 
Mechanistically, elevation of IOP in SIG is caused by aqueous outflow resistance at the level of the trabecular meshwork (TM). TM is a complex three-dimensional structure that consists of TM cells and an extracellular matrix (ECM) [11]. Evidence suggests that a steroid-induced imbalance between the deposition and destruction of the ECM contributes to the increased outflow resistance of SIG. The deposited ECM components include collagen, elastin, fibronectin, and glycosaminoglycans [12,13]. In addition, degradation of the ECM is blunted due to the upregulation of tissue inhibitors of metalloproteinases [14]. Notably, TM cells, the other important component of the TM, are involved in glaucoma pathology as well. Autophagic dysregulation [15], oxidative DNA damage [16], and fibrosis of TM cells [17] have been implicated in primary open angle glaucoma (POAG); however, whether cellular changes in TM cells are involved in SIG, and the underlying mechanisms, have not been investigated. High-throughput screening technology has been widely applied to identify genetic regulators in various pathological processes, including micro-RNAs and long non-coding RNAs (lncRNAs), in the context of glaucoma [18][19][20]. Inspired by this, we used a microarray to profile the dysregulated lncRNAs and mRNAs in TM tissues from SIG mice. The lncRNA ANRIL and its target gene p15 were prioritized as top candidates, and cellular senescence showed the top enrichment scores in pathway analyses. ANRIL is a long non-coding RNA encoded in the chromosome 9p21 region. Evidence from various studies has highlighted the role of ANRIL in regulating cellular senescence under oncogenic and inflammatory scenarios [21,22]. The senescence-regulatory effect of ANRIL results from inhibiting its neighbor gene, p15 [23]. It has been reported that p15 can induce cell-cycle arrest, i.e., block the progression of the cell cycle from G1 phase to S or G2/M phase. Such cross-talk between ANRIL and p15 has been consistently associated with cellular senescence in epithelial ovarian cancer [24], non-small cell lung cancer [25], and type 2 diabetes [26]. Given these clues, we hypothesized that the ANRIL/p15 signal in TM cells may contribute to SIG by regulating cellular senescence. To test this hypothesis, we investigated the regulation and interaction between lncRNA ANRIL and the p15 gene using a luciferase reporter assay and real-time PCR. The essential roles of ANRIL and p15 in regulating cell cycle progression were tested in cultured TM cells, and their roles in cellular senescence were tested in a SIG mouse model. The spatial patterns of ANRIL/p15 expression were analyzed based on a single-cell atlas and the transcriptome dynamics of human TM tissue. The relationship between ANRIL and genetic glaucoma susceptibility was investigated by colocalization between GWAS and GTEx expression quantitative trait loci (eQTL) signals. Our results suggest that cellular senescence of the TM is a key factor in steroid-induced IOP elevation, and that the reciprocal regulation between ANRIL and p15 plays a major role in cellular senescence and SIG. Transcriptome Landscape in TM from SIG Models C57BL/6J mice were given topical 0.1% dexamethasone phosphate (DEX) eye drops, 3 times daily for 12 weeks, to induce SIG. Mice receiving sterile PBS eyedrops and without any ocular or systemic steroid exposure (details in Methods) served as controls. IOP was measured every three days in both eyes, and the IOP of the control group remained stable throughout the experiment with a mean of 12.5 mmHg.
When exposed to topical DEX, IOP increased from 12.5 mmHg to 15.3 mmHg at Day 28. By the end of the 12 week treatment, the IOP of DEX-treated eyes reached 23.4 mmHg. Only the DEX-treated eyes with increased IOP (>21 mmHg) were recruited in the SIG group in the following analysis. IOP information of all the mice in this study, including the excluded mice, is presented in the Supplementary Data. TM samples from three SIG mice and paired controls were collected for microarray screening ( Figure 1A). A total of 527 lncRNA and 894 mRNAs were found to be aberrantly expressed in SIG. The top 20 dysregulated lncRNAs and mRNAs with a fold change >4 and p < 0.05 are listed in the heatmap ( Figure 1B,C). C57BL/6J mice were given topical 0.1% DEX or sterile PBS eye drops, 3 times daily for 12 weeks. IOP was measured in both eyes of control mice and SIG mice every three days (A). Only eyes with increased IOP (>21 mmHg) were recruited in the SIG group for the following microarray screening. TM tissues were isolated from SIG mice and normal controls as stated in the Methods. The dysregulated lncRNAs (B) and mRNAs (C) with a fold change >4 and p < 0.05 are listed in the heatmap. Notes: n = 3 per group; CON = control; SIG = steroid-induced glaucoma; IOP = intraocular pressure. Correlation between ANRIL/p15 and SIG Clinical Manifestations As presented in Figure 1B, ANRIL (antisense non-coding RNA in the INK4 locus) is the most differentially expressed lncRNA with a 5.4-fold change in SIG TM tissue. Decrease in ANRIL was confirmed by real-time PCR assay in SIG TM tissues (Figure 2A). ANRIL is located within the p15-p16/CDKN2A-p14/ARF gene cluster, in the antisense direction. Therefore, ANRIL could function as an antisense RNA to complement p15 mRNA, and thereby block its translation into a protein [27]. (C) The correlation between the expression level of ANRIL/p15 and ocular manifestations (IOP, CDR, and RNFL) was presented (r 2 values were calculated from Pearson's method, all p < 0.05). Notes: n = 5 per group; CON = control; SIG = steroid-induced glaucoma; IOP = intraocular pressure; CDR = cup disc ratio; RNFL = retinal nerve fiber layer; ** p < 0.01 compared with control group. This hypothesis was confirmed by mRNA profiling of SIG TM tissues. As shown in Figure 1C, p15 was significantly increased in SIG TM compared to controls. The expression level of p15 was further examined by real-time PCR and Western Blot assay in TM tissues and was found to be increased ( Figure 2B). Thus, we suspected that ANRIL/p15 interact with each other to participate in SIG development. To define the role of ANRIL/p15 in SIG, we analyzed their correlation with steroidinduced ocular manifestations in mice. In this study, increased IOP and cup disc ratio (CDR), along with decreased thickness of RNFL was used to indicate glaucoma lesions. As shown in Figure 2C, the expression level of ANRIL is negatively correlated to IOP and CDR but is positively correlated to the thickness of RNFL. In contrast, the expression of p15 is positively correlated to IOP and CDR but is negatively correlated to RNFL thickness. With this evidence, we proposed that ANRIL and p15 are essential in SIG-induced glaucomatous manifestations with opposite effects. ANRIL Is Poorly Expressed in SIG Whilst p15 Is Highly Expressed Since ANRIL and p15 show opposite roles in the ocular manifestations (IOP, CDR, and RNFL thickness), we suspected that there might be a reciprocal regulation between ANRIL and p15 in SIG. 
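The screening rule used above (|fold change| > 4, p < 0.05) and the Pearson-based correlation with the ocular manifestations can be expressed in a few lines; the sketch below uses pandas and scipy with clearly made-up placeholder values, only to show the shape of the computation, not the study's actual data.

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

# Illustrative microarray table; genes, fold changes, and p-values are placeholders.
expr = pd.DataFrame({
    "gene":        ["ANRIL", "p15", "geneX"],
    "fold_change": [-5.4,     4.8,   1.2],   # SIG vs. control (sign indicates direction)
    "p_value":     [0.001,    0.003, 0.40],
})

# Heatmap screening rule: |fold change| > 4 and p < 0.05.
hits = expr[(expr["fold_change"].abs() > 4) & (expr["p_value"] < 0.05)]
print(hits["gene"].tolist())

# Correlation of expression with an ocular manifestation (Pearson's r, r^2 reported).
anril_level = np.array([1.0, 0.8, 0.6, 0.5, 0.3])   # placeholder per-animal values
iop_mmHg    = np.array([13.0, 15.0, 18.0, 20.0, 23.0])
r, p = pearsonr(anril_level, iop_mmHg)
print(f"r^2 = {r**2:.2f}, p = {p:.3f}")   # a negative r here mirrors the reported direction
```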
To illustrate the interaction between ANRIL and p15 in TM cells, a luciferase reporter assay was used. Mouse p15 cDNA was cloned upstream of the luciferase reporter. Mouse ANRIL plasmid or small interfering RNA were co-transfected with luciferase reporter into TM cells. As indicated by the relative luminescence, luciferase activity was increased in ANRIL knock-down (siANRIL) cells compared to scramble. In ANRIL overexpressed cells (oANRIL), luciferase activity was suppressed, indicating that ANRIL blocked the transcription of the p15 gene ( Figure 3A). Real-time PCR assay was then used to test the interaction between ANRIL and p15. Our results suggested that ANRIL depletion increased the mRNA level of p15, and ANRIL overexpression inhibited the expression of p15 in DEX-treated cells ( Figure 3B). We also found that knocking down ANRIL can increase the protein level of p15 in TM cells, which further confirmed reciprocal regulation between ANRIL and p15 ( Figure 3C). Role of ANRIL/p15 in Regulating Cellular Senescence in Response to Steroid To illustrate the biological function of a dysregulated transcriptome in SIG, enrichment analyses were conducted to map the canonical pathways using Panther [28], with 6378 genes (aberrantly expressed in SIG with a fold change >2). In addition to the important role of ECM deposit, the major cellular disturbance lay in cell senescence, the permanent arrest of the cell cycle ( Figure 4A). Consistently, p15 protein was reported to function as a cell growth regulator via control cell cycle G1 progression [29,30]. This evidence raised concerns about cell cycle regulation in response to steroids. In cultured mouse TM cells, the expression of p15 protein and Cyclin D3 protein was measured in nuclear protein fractions, while Histone H3 served as internal controls. The phosphorylated Rb protein was checked in cytoplasm fraction with total Rb protein as control. n = 6 per group. (E) Quantification of (D). Notes: CON = control; SIG = steroid-induced glaucoma; ECM = extracellular matrix; Veh = vehicle; siANRIL = ANRIL knock down; DEX = dexamethasone; sip15 = p15 knock down; ** p < 0.01 compared with vehicle; # p < 0.05 compared with scramble RNA; ## p < 0.01 compared with scramble RNA; NS = no significant difference. In TM samples of SIG mice, we witnessed a very large increase in cells blocked in G1 stage, and a decrease in other stages as compared to the control group ( Figure 4B). Then we analyzed the cell cycle of DEX-treated mouse TM cells. Consistently, 81.5% of DEX-treated TM cells were blocked in the G1 phase compared to that of vehicle-treated cells (63.3% in G1 phase, p < 0.05). Our results showed that DEX treatment introduced significant G1 arrest in TM cells. In ANRIL-depleted cells, the rate of cells blocked in the G1 stage (79.3%) was comparable to DEX treatment (81.5%, p > 0.05). When p15 was knocked down, DEX-induced G1 arrest could be largely rescued ( Figure 4C). The percentage of G1 stage cells decreased to 66.1%. These results indicated that a loss of ANRIL in TM promoted G1 arrest by increasing p15 expression. As reported, p15 forms a complex with Cyclin D3 to prevent the activation of Cyclin Dependent Kinase 4 (CDK4) [31]. Suppressed CDK4 kinases activity blocked proliferative cells in G1 phase. Hence, CDK4 kinase activity was recruited to illuminate the underlying mechanism of p15 in G1 arrest. Since phosphorylation of retinoblastoma protein (Rb) directly represented CDK4 kinase activity, phospho-Rb level was measured in this study [32]. 
As presented in immunoblot, the expression level of Cyclin D3 remained similar among the different groups ( Figure 4D,E). However, the activity of Cyclin D3/CDK4 complex decreased in response to DEX treatment with compromised phosphorylation of the Rb protein ( Figure 4D,E). Meanwhile, ANRIL depletion showed a similar effect as DEX treatment by downregulating the level of phosphorylated Rb protein ( Figure 4D,E). When p15 was depleted in the TM cells, the activity of Cyclin D3/CDK4 complex resumed in response to DEX, supported by the increased level of phosphorylated Rb ( Figure 4D,E). The immunoblot assay results matched the cell cycle analysis in TM cells ( Figure 4B,C). In conclusion, DEX decreased ANRIL expression, and increased p15 level in TM cells. Overexpressed p15 protein suppressed Cyclin D3/CDK4 activation and Rb phosphorylation, and eventually blocked cell cycle progression. ANRIL/p15 Signaling Promotes IOP Elevation in Response to DEX Based on these findings, we planned to verify the function of ANRIL/p15 signaling in mice TM in response to DEX. To this end, we introduced a knockdown of ANRIL or p15 in mice eyes. Repeated delivery of intraocular siRNAs ensured the stable depletion of ANRIL or p15 in mice eyes, as measured by immunofluorescence (IF) labeling in mouse anterior segment slices ( Figure 5A). The upregulation of p15 in the DEX group was shown in the IF images. The increased p15 expression was mainly localized in the TM region (outlined with white dashes in Figure 5A). Meanwhile, we witnessed a significant increase of IOP in DEX group, which is the most direct phenotype to infer the TM dysfunction ( Figure 5B). As shown in Figure 1A, and again in Figure 5B for comparison, there was a significant increase in IOP in the DEX group, which was elevated from 12.5 to 15.3 mmHg after 4 weeks of treatment, and continually increased to 23.4 ± 1.04 mmHg by the end of Week 12. Intraocular ANRIL depletion mice without DEX treatment presented a similar pattern of IOP increase (IOP = 22.1 ± 0.89 mmHg in siANRIL group at Week 12, compared with control group p < 0.05). Although the extent of IOP increase did not show a statistical significance of 0.5 at 12 weeks between DEX and siANRIL groups, the IOP elevation in siANRIL mice acquired a flatter growth rate compared with DEX group. When p15 was knocked-down, IOP elevated from 11.6 to 18.1 mmHg after 6 weeks of DEX exposure. However, starting at Week 6, the IOP in DEX + sip15 group gradually recovered to normal levels. At the end of Week 12, IOP DEX + sip15 group (13.1 ± 0.66 mmHg) showed no statistical significance compared with control group (p = 0.139). This result suggests the protective effects of p15 depletion against continuous DEX exposure ( Figure 5A,B). The expression of p15 protein accumulated in mice anterior segment samples after exposure to DEX. p15 knockdown decreased p15 intraocular protein amount. The p15 level in the DEX + sip15 group was comparable to the control mice, suggested by IF. TM region is outlined with white dashes. n = 5 per group. (B) The IOP significantly increased in DEX-treated mice, starting from 15.3 mmHg at 4 weeks and continually increasing to 23.4 mmHg at 12 weeks. Intraocular ANRIL depletion presented a consistent increased IOP pattern (IOP = 22.1 ± 0.89 mmHg at 12 weeks, compared with control mice, p < 0.05). When p15 was knocked-down, IOP elevated in the very beginning period with DEX exposure (0-6 weeks) and then continually recovered to a normal level. 
Arrow indicates the starting of intracameral siRNA delivery on Day 21. Notes: CON = control; siANRIL = ANRIL knock-down; DEX = dexamethasone; sip15 = p15 knock-down. Scale bar = 10 µm. ANRIL/p15 Signaling Promotes SIG via TM Cell Senescence As shown above, ANRIL/p15 signaling promotes IOP elevation in response to DEX in vivo, and its essential role in regulating senescence has been established in cultured TM cells. Based on this evidence, we aimed to confirm the role ANRIL/p15 in mice eyes exposed to DEX. Therefore, anterior segment tissues and TM tissues from different mice groups were collected (details in Methods). Senescence-associated heterochromatin foci (SAHF) staining was used to label senescent cells in anterior segment slices. Cellular senescence was visualized with H3K9Me2 (H3) and HP-1α (HP1) antibody, as previously explained [33]. As shown in Figure 6A, an increased senescence rate was exhibited with evident expression of H3 (green) and HP1 (red) in response to DEX in anterior segment slices. ANRIL knock-down had a similar effect as DEX with accumulated H3 and HP1 signals in anterior segment slices compared with control mice; the increased senescent cells mostly resided in the TM region. When p15 was depleted, cell senescence rate was decreased even exposed to DEX with less H3 and HP1 ( Figure 6A) in TM region, which further validated the potential role of ANRIL/p15 signaling in regulating TM cellular senescence. The cellular senescence rate in TM tissue increased with evident expression of H3 (green) and HP1 (red) in DEX and siANRIL groups. When p15 was depleted, cellular senescence rate was decreased in response to DEX exposure. n = 5 per group. (B) In isolated mouse TM tissue, the expression of p15 protein and Cyclin D3 protein was measured in nuclear protein fractions, while Histone H3 served as internal control. The phosphorylated Rb protein was checked in cytoplasm fraction with total Rb protein as control. n = 6 per group. (C) Quantification of (B). Notes: CON = control; siANRIL = ANRIL knock-down; DEX = dexamethasone; sip15 = p15 knock down. Scale bar = 10 µm; ** p < 0.01 compared with vehicle; ## p < 0.01 compared with scramble RNA; NS = no significant difference. TM samples were carefully isolated for protein analyses. As indicated by immunoblot, the protein level of p15 was largely enriched in the TM tissue of DEX and siANRIL mice. The downstream effector, phosphorylated Rb protein, was reduced in DEX and siANRIL group. Meanwhile, p15 knock-down demonstrated protective effect against DEX exposure by regain the phosphorylation of Rb protein in mouse TM ( Figure 6B,C). These results were consistent with what we saw in cultured TM cells. Together, our results suggested that ANRIL/p15 regulated SIG progression through promoting TM cellular senescence in response to steroid both in cultured cells and in mice models. ANRIL/p15 Signaling Regulated TM Stiffness in HUMAN Samples To validate the role of ANRIL/p15 signaling in human TM, we analyzed the cell atlas of aqueous humor outflow pathways in normal human eyes from Zyl's study [34] (details in Methods). The results showed that p15 is widely expressed in most of the cell types in aqueous humor outflow pathways, including 4 cell types in TM functional unit (beam cells type A and B, juxtacanalicular tissue cells, and ciliary muscle cells) ( Figure 7A). However, the expression of ANRIL is cell-type-specific and enriched in the TM functional unit ( Figure 7B). 
Additionally, the co-expression relationship between p15 and ANRIL are supported by the heatmap ( Figure 7C). This evidence supported that the ANRIL/p15 expression is spatially enriched in cells of TM functional unit. Although detailed mechanism of TM dysfunction resulting in increased resistance remains nascent, a recent study showed the TM is~20 fold stiffer in glaucoma, suggesting a prominent role of TM mechanobiology [35]. We then analyzed the mRNA expression profiling of human TM cells [36] with definite degrees of stiffness, which is a direct indicator to infer TM dysfunction. As presented in Figure 7D, p15 expression retained a low level in a normal stiffness range (1.1 to 11.9 kPa). With the stiffness increasing into an abnormal range (>11.9 kPa), the p15 expression present a typical upward trend as a proliferative response. It also presented a typical decompensation stage after 34.1 kPa stiffness. This evidence showed a clear association between p15 expression and TM stiffness. Evidence of ANRIL Underlying Glaucoma Genetic Susceptibility We explore the possibility that ANRIL can contribute to the glaucoma genetic susceptibility. We first collected 17 SNPs that associated with glaucoma with genome-wide significance, p < 5 × 10 −8 , from the NHGRI GWAS Catalog [37] (www.ebi.ac.uk/gwas, accessed 16 April 2021) in the genomic region of ANRIL/p15 (Table 1). When overlaid with cis-eQTL of ANRIL from GTEx v8 with empirical genome-wide significance, we observed 2 SNPs (rs944800 and rs523096) that contribute to the expression level of ANRIL (Table 1). We prioritized rs944800 as the candidate variant given that TM cells manifested fibroblast-like phenotype in glaucoma etiology [38,39]. We then performed formal colocalization using GTEx eQTL and GWAS summary statistics of rs944800 (Methods) and our results supported that the expression of ANRIL is colocalized with rs944800 (PPH 4 = 68.8%) ( Figure 8A,B), with risk allele G associated with the decreased expression of ANRIL ( Figure 8C). This evidence supported that ANRIL is a candidate gene underlying the genetic susceptibility of glaucoma. (B) Regional association plots of eQTL (blue shadow) and GWAS (green shadow) within +/− 100 kb of rs944800 are presented. The horizontal line indicates genome-wide significant p-value for GWAS (5 × 10 −8 ) and genome-wide empirical p-value threshold for eQTL of ANRIL (2.3 × 10 −4 ). UCSC tracks of gene and annotation are displayed as the full mode in this region. (C) eQTL violin plots were directly adopted from GTEx v8 data through GTEx portal (gtexportal.org). p-value and slope (relative to G allele in rs944800, which is the glaucoma protective allele) were derived from linear regression with no multiple-testing correction. Discussion Our results supported that reduced ANRIL and increased p15 expression was responsible for TM cell senescence in SIG. When ANRIL was depleted by siRNA, we witnessed similar senescence phenotypes both in cultured TM cells and mice models. Suppression of p15 helps TM cells combat steroid-induced lesions. Steroids achieved effective regulation of cellular senescence by adjusting the expression level of ANRIL/p15 in TM cells. It should be noted that the promoting role of p15 in cellular senescence has been reported in pancreatic and hepatocellular cancers [40,41]. The expression of p15 was highly upregulated in these tumor tissues to facilitate G1 arrest of tumor cells and cellular senescence. 
ANRIL has also been implicated in various human diseases by regulating cellular senescence. As reported, the silencing of ANRIL reduces proliferation in fibroblasts and vascular smooth muscle cells [42], which might be the result of premature senescence. This evidence is consistent with our findings on reciprocal role of ANRIL and p15 in cellular senescence of TM cells in SIG. Although depleting ANRIL level in TM cells introduced similar phenotypic change and functional lesion as steroid, the less percentage of cells blocked in G1 suggested that the detrimental effect in siANRIL group was milder than steroid treatment ( Figure 4B-D). In mice, ANRIL knock-down introduced a slower increase of IOP ( Figure 5A) and less cellular senescence ( Figure 6A-C). This evidence did not challenge the essential role of ANRIL in the pathogenesis of SIG, but warrants further investigation of the ANRIL/p15 independent pathway in SIG. In addition to SIG, cellular senescence was reported to be essential for POAG and other types of glaucoma. For instance, a risk variant in SIX6 (His141Asn) was found to increase POAG susceptibility by increasing p16 transcription and retinal ganglion cell senescence [43]. An independent study also reported that serine/threonine kinase TANKbinding protein 1 is upregulated upon IOP increase, which induces cellular senescence in glaucoma patients [44]. This above evidence suggests the potential role of cellular senescence in different types of glaucoma. Notably, it remains unclear whether SIG damage can be mainly attributed to the production of TGF-β. TGF-β has been shown to induce or accelerate senescence and senescence-associated features in various cell types, including fibroblasts, bronchial epithelial cells, and cancer cells [45,46] by inducing cyclin-dependent kinase inhibitors p15, p21, and p27 [47,48]. This evidence introduces the possibility that ANRIL/p15 pathway may be a secondary response regulated by production of TGF-β when exposing to steroids. It would be interesting to disentangle these complexities in SIG in the future. It should be noted that the genetic component of glaucoma (rs944800, risk allele G) was colocalized with decreased expression of ANRIL in fibroblast cells. Given that TM cells manifested fibroblast-like phenotype in glaucoma etiology [38,39], we hypothesized that individuals with allele G in rs944800 may be susceptible to steroid, which results in decreased expression of ANRIL in TM cells and subsequently functional changes in cell senescence and TM dysfunction. Considering that GWAS susceptibility is not specific to SIG but to the general population, it will be interesting to test our hypothesis in other glaucoma-relevant exposures (e.g., stress) in future studies. As indicated by clinical guidelines [49], steroids discontinuation is the first step to manage SIG. Nevertheless, discontinuing steroids will significantly increase the risk of recrudescence among patients with autoimmune or inflammatory diseases. Hence, antiglaucoma medications and surgery are usually required for SIG patients but confers poor visual prognosis due to the limited efficiency [50]. Ophthalmologists and researchers are in urgent need to find solution to eliminate the ocular side effect of steroid treatment. As a potential alternative, the protective role of regulating ANRIL/p15 pathway can be implicated in clinical management of SIG to eliminate the ocular side effect of steroid. 
Notably, the efficiency of a novel ocular delivery nanosystem has been proven in managing retinal diseases, which holds great promise for translating our findings into clinical practice [51]. Overall, it is essential to characterize the mechanisms and functions of cellular senescence to achieve optimal therapies. Despite the aforementioned questions, this study has the potential to improve clinical management for SIG patients from the aspect of TM cell senescence. Our results already exhibited the protective effect of anti-senescence treatment with p15 depletion in both cultured cells and mouse models with SIG. The broad implication of the current research could extend to other cellular senescence-related diseases. Establishment of the Steroid Induced Glaucoma (SIG) Model Six-week-old C57BL/6J mice were purchased from the Guangdong Medical Laboratory Animal Center. The present study complied with relevant legislation from the Institutional Animal Care and Use Committee of the Zhongshan Ophthalmic Center at Sun Yat-Sen University (No. 2018-014). We included 100 mice in this study, which were divided into 4 groups: 20 control mice received vehicle (PBS) eye drops, 20 mice received intraocular siANRIL delivery, 30 mice received topical DEX treatment, and 30 mice received both topical DEX and intraocular sip15. C57BL/6J mice were given topical 0.1% DEX (Sigma-Aldrich, St. Louis, MO, USA) or sterile PBS eye drops (vehicle) 3 times daily, starting at the age of 8 weeks [52]. Every day, the first dose was administered between 9:00 and 10:00 a.m., the second dose was administered between 1:00 and 2:00 p.m., and the third dose was administered between 6:00 and 7:00 p.m. A small eye drop (∼20 µL) was applied to both eyes. Mice were gently held for 30-40 s after drop administration to ensure effective penetration of the eye drops into the anterior chamber, and then released back into their cages. The DEX topical treatments were continued for a total of 12 weeks. Mice with topical PBS treatment served as controls. Intraocular Pressure (IOP) Measurement The IOP of mice was measured by a skilled technician every three days [53]. In brief, mice were anesthetized with 2.5% isoflurane plus 100% oxygen for no longer than 2 min. In both eyes, IOP was measured using an ICare Rebound Tonometer (Tiolat Oy, Helsinki, Finland). IOP was recorded between 10:00 a.m. and 2:00 p.m. In the topical DEX-treated group, only mice that developed a significant IOP increase (>21 mmHg) were considered SIG mice. Mice without a response to topical DEX, mice whose IOP recovered to baseline during DEX treatment, and mice that presented severe ocular (such as edema) or global lesions were excluded from the dataset. Intracameral Delivery of siRNA At week 3, the 60 DEX-treated mice were divided into 2 groups: one received topical DEX (DEX group, n = 30) and the other received both intracameral siRNA and topical DEX treatment (DEX + sip15 group, n = 30) for another 9 weeks. Meanwhile, the ANRIL knock-down group received intraocular delivery of ANRIL siRNA without DEX treatment (siANRIL group). The in vivo siRNA reagent (Ribobio, Guangzhou, CN) was chemically modified and complexed before administration to ensure efficiency. A total of 20 µM siRNA (in 1 µL) was used in the intracameral injection. The mice were anesthetized with 2.5% isoflurane plus 100% oxygen. Before the operation, mouse corneas were topically anesthetized with 0.5% alcaine eye drops.
A microinjection syringe (80308, 30 gauge, Hamilton, Reno, NV, USA) was loaded with 1 µL of siRNA complex or vehicle solution without any air bubbles. Under a stereoscopic microscope (M844; Carl Zeiss Meditec, Dublin, CA, USA), the needle was inserted into the anterior chamber of the eye through the limbus (between the cornea and iris), without collapsing the anterior chamber or damaging Descemet's membrane. The solution was slowly injected into the anterior chamber. The needle was gently withdrawn, and the absence of aqueous humor leakage or bleeding was verified. Tobramycin ointment was applied to prevent bacterial infection. The in vivo siRNA reagent was delivered into the anterior chamber weekly by a well-trained and qualified technician to ensure intraocular knock-down efficiency. To eliminate the chance of damage or inflammation, we carefully divided the limbus into 12 equal parts and changed the injection site in a clockwise direction. Five eyes presented mild to moderate corneal scarring or edema (n = 2 in siANRIL group, n = 1 in DEX group, n = 2 in DEX + sip15 group) and were excluded from microarray, morphology, histology, or IOP analysis, and only included in PCR or WB assays. The eyes that developed severe corneal scarring, neovascularization, edema, or other abnormalities were excluded from the experiment. Mouse Anterior Segment Isolation Mice were sacrificed at the end of 12 weeks of treatment, and intracardiac perfusion was conducted with 4% paraformaldehyde (Sigma, USA; detailed procedures in ref. [54]). Mouse eyes were enucleated after perfusion. Anterior segments were dissected out carefully under a surgical microscope by an experienced ophthalmologist. The eyes were excised at 0.5 mm posterior to the equator after removing extraocular structures. Retina, choroid, vitreous, and lens were gently removed [55]. The remaining anterior segment was rinsed in PBS and then placed in 4% paraformaldehyde saline on a gentle rotator for 4 h at room temperature. The fixed anterior segment was embedded in optimal cutting temperature compound and kept at −80 °C overnight before sectioning. Samples were sectioned along coronal planes at 10 µm thickness for staining. Mouse anterior segments were used in IF staining and SAHF labeling. Mouse TM Tissue Isolation To obtain TM tissues, the anterior portion (as previously isolated) was sliced into four radial wedges. With the TM side facing up, vertical cuts were made with a scalpel down from the TM toward the sclera. Two cuts were made along the anterior (0.5 mm posterior to Schwalbe's line) and posterior margins (0.5 mm anterior to the scleral spur) of the TM [56]. The sclera was removed as much as possible under a microscope without disturbing the TM tissue strip (semi-transparent with some pigmentation). The full TM samples from separate eyes were used in the microarray profiling (n = 3 per group), flow cytometry analysis (n = 5 per group), and paraformaldehyde fixation (n = 5 per group). The rest of the TM samples were bisected: one half was pooled for the real-time PCR assay, and the other half was pooled for Western blot (n = 6 per group). Mouse TM Samples Microarray Profiling TRIzol Reagent (Invitrogen, USA) was used to extract total RNA from the TM samples. The Agilent Mouse lncRNA+ mRNA Array V1.0 was used for microarray profiling. A |fold change| ≥ 2 and a Benjamini-Hochberg corrected p-value < 0.05 were used to designate differentially expressed RNAs.
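For readers who wish to reproduce the filtering criterion just described, the following is a minimal sketch, not the authors' actual pipeline: it assumes hypothetical log2-scale expression matrices (probes as rows, the three replicates per group as columns) and uses a per-probe Welch t-test as a stand-in statistic, since the exact per-probe test used by the array software is not specified here. The |fold change| ≥ 2 cutoff is applied as |log2 fold change| ≥ 1, and p-values are corrected with the Benjamini-Hochberg procedure.

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.multitest import multipletests

def call_differential(expr_sig: pd.DataFrame, expr_con: pd.DataFrame) -> pd.DataFrame:
    """Flag differentially expressed RNAs with the |fold change| >= 2 and
    Benjamini-Hochberg corrected p < 0.05 criterion described in the Methods.

    Rows are probes (lncRNA/mRNA); columns are replicate samples (n = 3 per group).
    Expression values are assumed to be log2-scale intensities (hypothetical input).
    """
    # Per-probe Welch t-test between SIG and control replicates (stand-in statistic)
    _, p_raw = stats.ttest_ind(expr_sig, expr_con, axis=1, equal_var=False)

    # log2 fold change = mean(SIG) - mean(control) on the log2 scale
    log2_fc = expr_sig.mean(axis=1) - expr_con.mean(axis=1)

    # Benjamini-Hochberg correction across all probes
    _, p_adj, _, _ = multipletests(p_raw, alpha=0.05, method="fdr_bh")

    result = pd.DataFrame(
        {"log2_fold_change": log2_fc, "p_value": p_raw, "p_adj_BH": p_adj},
        index=expr_sig.index,
    )
    # |fold change| >= 2 corresponds to |log2 fold change| >= 1
    result["differential"] = (result["log2_fold_change"].abs() >= 1) & (result["p_adj_BH"] < 0.05)
    return result
```

Under these assumptions, the flagged probes would correspond to the "differentially expressed RNAs" referred to above.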
Biological meaning was assigned to the dysregulated RNAs in response to DEX exposure using Panther biological system analysis. Fundus and Optical Coherence Tomography (OCT) Examination Mice were anesthetized with 1% pentobarbital sodium (i.p., 50 mg/kg, Sigma) and had their pupils dilated with topical tropicamide (Alcon, Belgium). Mice were placed on a rodent alignment stage, and hypromellose (Methocel, OmniVision AG, Neuhausen, Switzerland) was applied as an optical coupling medium for better imaging resolution. To avoid corneal dryness, 0.9% sterile saline was administered topically to the cornea throughout the examination procedure. Fundus examination was conducted using a retinal imaging system (Micron IV, Phoenix, AZ, USA) [57]. CDR was measured three times by an experienced researcher. RNFL thickness was measured using a Heidelberg Spectralis HRA + OCT system (Heidelberg Engineering, Heidelberg, Germany) according to established protocols [58]. Mouse TM Cell Maintenance and Treatment The mouse TM cells were obtained from Procell Life Science & Technology Co., Ltd. (Wuhan, China). In brief, TM tissue was micro-dissected from C57BL/6 mice aged 2-3 weeks, using sterile precautions. Isolated TM tissue was then placed on a gelatin-coated 6-well plate to facilitate migration of TM cells. Cell cultures were maintained in complete mouse TM cell medium (Procell, Wuhan, China), which contained 5% (v/v) heat-inactivated fetal bovine serum, 1% TM cell growth supplement, and 1% penicillin/streptomycin. TM cell cultures were kept under 5% CO2 and 95% air without media changes for 2 weeks. Once the tissue adhered to the plate, the media was replaced with fresh media every 2 days. Once the primary culture reached confluency in the 6-well plate, the cells were gently lifted and transferred to a gelatin-coated T-25 flask to allow the primary culture to expand [56]. Once cell confluency reached 70%, the cells were ready for use in research and were recognized as passage 1. All experiments were performed on cells when they reached about 70-80% confluency within 7 passages. Two independent lots of commercial mouse TM cells were obtained and used in this study, which ensured the representativeness of the results generated using mouse TM cells. To confirm the identity of this commercial cell line, expression of neuron-specific enolase (NSE) was validated with immunohistochemistry ( Figure S1A) [56,59]. In addition, mouse TM cells were treated with 0.1 µM DEX for 3-4 days [60]. An increased expression of myocilin (a marker for TM cell identity) was observed after DEX treatment ( Figure S1B). DEX Treatment in TM Cells Mouse TM cells were plated at 40-50% confluency for DEX treatment. TM cells were treated with 0.1 µM DEX (Sigma-Aldrich, St. Louis, MO, USA) for 3 days. Cell cultures remained subconfluent during the 3-day treatment. Two independent lots of mouse TM cells were used as biological replicates. Gene Knockdown in TM Cells TM cells were plated at around 70% confluency for gene knock-down. The siRNA and the scramble were designed and synthesized by the Ribobio Company (Guangzhou, China). The siRNAs (10 µg/mL) were complexed in transfection buffer and reagent (Ribobio). After incubation for 30 min, the siRNA mixtures were added dropwise to the culture medium. Transfection was conducted for 48 h to maximize the knockdown efficiency. Luciferase Reporter Assay TM cells were grown in 96-well plates and attained a confluency of roughly 70% before the experiment.
To examine the interaction between ANRIL and p15, the mouse p15 gene (NC_000070.6) was cloned upstream of the luciferase reporter. Applied Biological Materials Inc. (Richmond, Canada) provided the mouse ANRIL cDNA plasmid. The ANRIL siRNA and the scramble were designed and synthesized by the Ribobio Company (Guangzhou, China). TM cells were co-transfected with the luciferase reporter plasmid and the ANRIL plasmid or ANRIL siRNA using Lipofectamine 3000 (Invitrogen, Thermo Fisher, Waltham, MA, USA) according to the manufacturer's instructions. Negative controls included pcDNA vectors or scramble siRNA. Cells were lysed 24 h after transfection by adding 100 µL of passive lysis buffer (Dual Luciferase Reporter Assay kit, Promega) to each well. For the luciferase reporter test, 20 µL of cell lysate was used. Luminescence was quantified within 10 min. A quality control was performed using stable production of the reporter construct, pp15-Luc, or the pGL-3 basic luciferase vector (Promega, Madison, WI, USA). To standardize transfection efficiency, the Renilla reporter plasmid (RLuc, Promega) was employed (a sketch of this normalization step is given after the Immunohistochemistry Labeling subsection below). Cell Cycle Analysis Propidium iodide (PI) staining was used to examine the cell cycle distribution of TM cells. Briefly, TM samples were minced into 3 to 4 mm pieces with scissors. TM pieces were added to a 4 mg/mL collagenase I (Thermo Fisher Scientific, Waltham, MA, USA) solution (in PBS) and incubated at 37 °C for 2-4 h with gentle agitation until no visible chunks remained. After digestion, the cell suspension was spun down at 1000 rpm at 4 °C. The supernatant was removed, and the cell pellet was resuspended in cold PBS and filtered through a 70-µm strainer to prepare a single-cell suspension. A total of 1 × 10⁶ TM cells were collected and permeabilized with 70% ethanol, then treated with 200 mg/mL RNase A (Sigma) and 50 mg/mL PI (Sigma) for 10 min before analysis [61]. For cultured TM cells, the same protocol was followed upon obtaining the cell suspension. Single-cell populations were gated, and the percentage of cells in different stages of the cell cycle was determined using FlowJo (TreeStar, Inc., Ashland, OR, USA) implementing the Watson (pragmatic) model. The samples were incubated with primary antibodies at 4 °C overnight, then for 1 h at room temperature with a species-compatible secondary antibody. The sources and dilutions of antibodies are listed in Table 2. Cell nuclei were stained with 50 ng/mL 4′,6-diamidino-2-phenylindole (DAPI, CST) for 5 min. Images were captured by a Zeiss LSM 510 confocal laser scanning microscope and processed by Adobe Photoshop CS8. In mouse anterior segment slices, the TM region was outlined with white dashes (Figures 5A and 6A) according to the morphology [56]. Immunohistochemistry Labeling TM cells were seeded on chamber slides (ibidi, Fitchburg, WI, USA) at roughly 70% confluency. Slides were fixed with 4% paraformaldehyde, permeabilized with 0.2% Triton X-100, and blocked with 1% BSA. Then, the samples were incubated with anti-NSE antibody (Abcam, ab79757) at 4 °C overnight, followed by anti-rabbit IgG, AP-linked antibody (CST, #7054) for 1 h at room temperature. After removing nonspecific binding by washing twice with wash buffer, the slides were incubated with alkaline phosphatase substrate working solution (VECTOR, Vector® Blue) for 20-30 min in the dark. Images were collected with a Zeiss Episcope and processed by Adobe Photoshop CS8.
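As referenced in the Luciferase Reporter Assay subsection above, the dual-luciferase readout is normalized well-by-well to the Renilla signal and then expressed relative to the control wells. The short Python sketch below illustrates only this standard normalization under stated assumptions; the well readings and replicate counts are hypothetical, and the function is not part of the authors' analysis.

```python
import numpy as np

def relative_luciferase_activity(firefly: np.ndarray,
                                 renilla: np.ndarray,
                                 firefly_ctrl: np.ndarray,
                                 renilla_ctrl: np.ndarray) -> float:
    """Dual-luciferase normalization: the firefly signal is divided by the Renilla
    signal well-by-well to correct for transfection efficiency, then expressed
    relative to the mean normalized signal of the control (scramble/pcDNA) wells."""
    norm = firefly / renilla                  # per-well normalized activity
    norm_ctrl = firefly_ctrl / renilla_ctrl   # control wells
    return float(norm.mean() / norm_ctrl.mean())

# Hypothetical readings from three replicate wells per condition
siANRIL = relative_luciferase_activity(
    firefly=np.array([5200., 4900., 5400.]), renilla=np.array([800., 760., 830.]),
    firefly_ctrl=np.array([3100., 2950., 3200.]), renilla_ctrl=np.array([790., 810., 800.]))
print(f"Relative luminescence (siANRIL vs scramble): {siANRIL:.2f}")
```

In practice, the same ratio would presumably be computed for each condition compared in Figure 3A (scramble, siANRIL, and oANRIL).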
Nuclear Extracts Cultured TM cells or isolated TM tissues were gently resuspended in 500 µL of cytoplasmic extract buffer (10 mM HEPES, 60 mM KCl, 1 mM EDTA, 0.075% (v/v) NP40, 1 mM DTT and 1 mM PMSF, adjusted to pH 7.6) and incubated on ice for 30 min. The mixed solution was then centrifuged at 14,000 rpm for 1 min at 4 °C to obtain the nuclear pellets. Nuclear pellets were resuspended in 100-200 µL of nuclear extract buffer (20 mM Tris-Cl, 420 mM NaCl, 1.5 mM MgCl2, 0.2 mM EDTA, 1 mM PMSF and 25% (v/v) glycerol, adjusted to pH 8.0) for 30 min on ice with vortexing. The extracts were centrifuged at 14,000 rpm for 15 min at 4 °C, and the supernatant was collected for further analyses. Real-Time PCR TRIzol Reagent (Invitrogen) was used to extract total RNA from both TM samples and cultured cells. The cDNA was synthesized using the PrimeScript RT Master Mix (TaKaRa, Kusatsu, Japan). The primers are listed in Table 3. Quantitative analysis was performed by real-time PCR using the SYBR Advantage qPCR Premix Master Mix (TaKaRa) according to a standard protocol. The cycle threshold (Ct) value of ANRIL and p15 was measured and normalized to β-actin. The comparative Ct (ΔΔCt) method was used to evaluate the expression levels. Western Blot Total protein was isolated from TM cells and tissues with RIPA lysis buffer (Beyotime, Shanghai, China). Nuclear protein was isolated from the nuclear extracts. Protein samples were electrophoresed on 10% polyacrylamide gels according to a standard procedure. The expression of target proteins was normalized to histone H3 using ImageJ software. The primary antibodies and dilutions are presented in Table 2. Bioinformatic Analyses of Human TM Expression Profile A total of two datasets under accession numbers GSE146188 [34] and GSE123100 [36] were downloaded from the Gene Expression Omnibus database (http://www.ncbi.nlm.nih.gov/geo/, accessed on 1 July 2020). GSE146188 was analyzed in the Broad Institute's Single Cell Portal (https://singlecell.broadinstitute.org/single_cell/study/SCP780, accessed on 1 July 2020) to compute the expression level of mRNA and lncRNA (Illumina HiSeq 2500). The p15 expression (RPKM) of TM cells at different stiffness levels was analyzed from GSE123100 (GSE123100_cell_expressed_gene_RPKM.txt.gz); ANRIL expression is not available in this dataset. Colocalization between GWAS and eQTL Signals Given the linkage disequilibrium (LD) discrepancy between GTEx v8 (~85% European) [54] and the Asian GWAS summary statistics of rs944800 [62], we performed the colocalization analysis using an LD-independent Bayesian framework, Coloc [63]. We defined the flanking regions as 200 kilobases (kb) on either side of the lead GWAS variant rs944800. The PPH4 (posterior probability of both traits being associated with the same causal variant) was calculated, and loci with PPH4 > 0.6 were defined as GWAS-QTL colocalized events [64]. Statistics and Reproducibility Unless otherwise stated, the experimental results in the figures are representative of at least three independent repeats. All data are given as means ± standard deviation (SD). One-way analysis of variance (ANOVA) followed by Bonferroni multiple comparison tests, as well as two-way ANOVA, was performed using GraphPad Prism data analysis software (version 7.0; GraphPad Software). Statistical significance was defined as a p-value of less than 0.05. The association between RNA expression and clinical parameters was tested using Pearson's method. Author Contributions: P.W. and Y.Z.
designed the research; P.W., C.D., Y.L. and S.H. collected the data and conducted the study; P.W. and E.L. analyzed the data; P.W. and E.L. co-wrote the manuscript; P.W., Y.Z., Y.L., C.D., J.Z., S.H. and E.L. critically revised the manuscript; All authors have read and agreed to the published version of the manuscript. Informed Consent Statement: Not applicable. Data Availability Statement: The data that support the findings of this study are available from the corresponding author, Y.Z., upon reasonable request.
KENNETH BURKE’S COUNTER-SPECTACLE AND THE PROBLEM OF UNITY IN POLITICAL CULTURE The spectacle was prominent in public displays and mass meetings in midtwentieth-century Russia and Germany as a quest for unity in political culture. In Russia, it was countered by Mikhail M. Bakhtin’s novelistic dialogue, polyphony, heteroglossia, and carnival. In Germany, it appeared in its most grotesque form in Adolf Hitler’s Mein Kampf, which proclaims Hitler’s quest for German national unity and celebrates his National Socialist mass meetings, which created the appearance of a false unity imposed by force of arms. In the United States, Hitler’s spectacle was critiqued in Kenneth Burke’s review of Mein Kampf and continually challenged throughout his life’s work. Burke’s review critiques Hitler’s strategy of attempting to unite Germany by dividing it from those who opposed him, in particular non-German ethnic groups. Burke was engaged in sociopolitical issues throughout his lifetime, and his work offers theories and principles aimed at diversity in unity in political culture and offered as a counterforce to Hitler’s spectacle of a false unity—a counter-spectacle in the form of identification, dramatism, dialectical and aesthetic transcendence, and a satiric mock portrait of a false unity. Guy Debord's concept of the spectacle has recently experienced a resurgence of interest [Bunyard 2018;Penner 2019], concurrent with the rise of political cultures that celebrate spectacles of apparent unity even as they marginalize populations that seem to threaten their hegemony [Blow 2019;In the Hall 2018;Karni Haberman 2019;Kellner 2017;Reevell 2018;Zaretsky 2017]. But the spectacle has deep roots and an ugly place in twentieth -century history as a political strategy for promoting a false unity. As such, it has been challenged in both Russia and the United States, most notably in the works of Mikhail M. Bakhtin and Kenneth Burke, whose lessons are worth recalling in our own time. 1 Writing under the shadow of Stalinist Russia, the spectacle of the Show Trials, and the univocality of the Soviet propaganda machine, Bakhtin was circumspect and politically disengaged, but his portraits of multivocality as dialogue, polyphony, heteroglossia, and carnival are well known as counterpoints to the Soviet monologue, and his portrait of the carnivalesque spectacle, in particular, 1 Burke scholars have noted similarities in Bakhtin's and Burke's ideas but note as well the differences in their discursive styles [Adams 2017;Henderson 2017;Lucke 2017]. Henderson observes that both writers offer similar perspectives on history and society, as captured in Bakhtin's various metaphors for the novel's "multiple voices" and Burke's metaphor of the parlor's "unending conversation" [Henderson 2017]. But, according to Henderson, their writing styles differ, Bakhtin's being more "propositional," Burke's more "performative." Bakhtin is a "traditional intellectual," whose writing is "professional, scholarly, and conservative." Burke is an "organic intellectual," whose writing responds to "the exigencies of his historical moment." These stylistic differences are grounded in part in Bakhtin's and Burke's very different life histories. 
Bakhtin lived in Stalinist Russia; had an academic education through the doctorate but due to political tensions was denied the degree [Clark Holquist 1984: 27 -30, 322 -25]; had a limited circle of intellectual friends and acquaintances ; suffered imprisonment and exile in his own country ; eventually taught in various universities but under close political scrutiny ; and suffered from lifelong illnesses, which required amputation of his right leg 261,. Referencing his works from the 1930s and early 1940s, Clark and Holquist write: "The major contemporary implicitly addressed in these writings was not one of his peers among the intelligentsia but Stalinist culture itself. Bakhtin used his ostensible subject matter as a medium to convey his critique of Stalinist ideology" [Ibid.: 266,268]. Burke lived alternately in New York City and Andover, New Jersey; had a non -traditional, largely self -education [Selzer 1996: 20 -21, 60 -63]; had a large circle of intellectual friends and acquaintances, including many of the major literary figures of his time ; and throughout his life was thoroughly engaged in sociopolitical issues [George Selzer 2007;Weiser 2008: 58 -145]. Referencing "The Rhetoric of Hitler's 'Battle'," in particular, Weiser observes: "Burke seized the opportunity to demonstrate more strongly than ever before that his exposition/exhortation 'type of criticism' could have ramifications beyond the aesthetic and into the sociopolitical world. It could both expose the truths of language and provide the attitude necessary to take action" [Weiser 2008: 64]. stands in direct opposition to the Show Trials and the relentless Stalinist monologue. Burke, in contrast, was thoroughly engaged in sociopolitical issues throughout his life. Writing under the shadow of Nazi Germany but from a greater physical distance, he was direct and explicit in his opposition to the Nazis' strategies for promoting a false unity and the spectacle of their mass meetings, and the entire body of his work not only repudiates their false unity but offers alternatives that embrace a true unity of multiple and diverse perspectives. Burke expresses his opposition to the Nazi spectacle in his review of the 1939 English translation of Adolf Hitler's Mein Kampf, "The Rhetoric of Hitler's 'Battle'," but he also observes the persuasive force of the work and offers his assessment as a cautionary note against other works of this kind [Hitler 1939;Burke 1973: 191 -220]. Hitler scorns the shameful spectacle of the bourgeois open meetings, with multiple and conflicting voices; admires the grand spectacle of the Marxist mass meetings, which illustrate their unity of purpose; and boasts about his own National Socialist mass meetings, with a false unity imposed by force of arms [Hitler 1939: 715 -31, 739 -49]. In "Hitler's 'Battle'," Burke assesses Hitler's unification strategies, not least his ability to identify with the German people by dividing them from the others that he so despises [Burke 1973: 202 -7]. But Burke's opposition to the spectacle extends far beyond his rejection of Hitler's false strategies of unification and encompasses his fundamental concept of a unity that embraces diversity and respects the multiplicity of different and potentially competing perspectives. 
Indeed the entire body of his work may be read as a counterpoint to the spectacle of a false and enforced unity-a counter -spectacle in the form of identification, dramatism, symbolic bridging, dialectical and aesthetic transcendence, and perhaps even satire as a mock image of a false unity [Burke 1969a;Burke 1969b;Burke 1971;Burke 1984;Clark 2014;Crable 2014;Weiser 2008;Wolin 2001;]. The Society of the Spectacle Debord's commentaries on the spectacle emphasize the inescapable presence of the mass communication of his time, especially television, and its negative consequences for individuals [Bunyard 2018;Debord 1998;Debord 1995;Penner 2019]. Bunyard observes that Debord's The Society of the Spectacle is not so much about the spectacle itself as it is about history: "it is a book that describes a society that has become detached from its capacity to consciously shape and determine its own future" [Bunyard 2018: 4]. Penner notes in particular the consequences of the spectacle for the social separation and passivity of individuals and its increasing threat as an all -controlling and totalizing force [Penner 2019: 22 -33]. Penner therefore proposes a re -envisioning of the spectacle as a radically democratic practice in digital spaces and cites Bakhtin's work as a theoretical basis for such a revision [45][46]. Writing from a Marxist perspective, Debord asserts that modern industrial society has generated "an ever -growing mass of image -objects," with the spectacle as its chief product [Debord 1995: 15]. As a consequence, individuals have become isolated and separated since "all contact between people now depends on the intervention of . . . 'instant' communication," which is "essentially one -way" [Ibid.: 24]. As they have become separated from each other, they have also become mere passive recipients of the spectacle as "the ruling order discourses endlessly upon itself in an uninterrupted monologue of self -praise" and spectators assume an attitude of "passive acceptance" of the spectacle's "monopolization of the realm of appearances" [Ibid.: 12,24]. Debord's later Comments on the Society of the Spectacle observes the increasingly controlling and seemingly inescapable reach of "the integrated spectacle," which is "simultaneously concentrated and diffuse," so that individuals can no longer "lastingly free themselves from the crushing presence of media discourse and of the various forces organized to relay it," mostly notably industry and government [Debord 1998: 4, 7]. The consequence of these various forces is a seeming "unification" of society via a spectacle that "unites what is separate, but it unites it only in its separateness" [Debord 1995: 3, 29]. Penner cites Debord's insistence on "real communication" and "real dialogue" as a corrective to the spectacle and finds in Bakhtin's works a theoretical grounding for his re -envisioning of the spectacle as a radically democratic practice [Debord 1998: 187;Penner 2019: 71]. Bakhtin's theories thus should be read as dialogic practices that not only oppose the authoritarian and monological discourses that convey a false unity but also promote a true unity that embraces multiple perspectives and multiple voices. Novelistic Multivocality The relentless print and visual propaganda of the Stalinist political regime has been extensively documented [Bonnell 1997;Brooks 2000;Groys 2003;Haskins Zappen 2010]. 
Groys explains this propaganda machine as a whole when he asserts that the goal of Stalinist painting and architecture was "the creation of societal homogeneity and the exclusion of the other" on the principle of the "law of unity and the battle of opposites'," which proclaims a single official ideology and situates political enemies in a struggle against each other [Groys 2003: 96]. Bakhtin describes this relentless propaganda as "monologism" and envisions various forms of opposition in his portrayals of novelistic "multi -voicedness" as dialogic, polyphonic, heteroglossic, and carnivalesque [Bakhtin 1984a: 8, 16, 20, 285 -86, 292 -93;Morson Emerson 1990: 49 -54, 130 -33, 139 -49, 231 -68, 309 -17, 433 -70]. Bakhtin's multivocality is not only a mode of discourse, however, but also a concept of the world as a unity that preserves and respects the multiple perspectives and multiple voices within it. In Problems of Dostoevsky's Poetics, Bakhtin captures this unity of multiple perspectives in his distinction between monologue as the unified truth of a "single and unified consciousness" and dialogue as a unified truth "that requires a plurality of consciousnesses" [Bakhtin 1984a: 81]. He provides an illustration of this dialogic unity in his characterization of Dostoevsky's polyphonic novel as "a plurality of independent and unmerged voices and consciousnesses, a genuine polyphony of fully valid voices . . . , with equal rights and each with its own world," which "combine but are not merged in the unity of the event" [Ibid.: 6]. Referencing Albert Einstein's theory of relativity, Bakhtin compares this polyphony to "the complex unity of an Einsteinian universe" but adds parenthetically that "the juxtaposition of Dostoevsky's world with Einstein's world is, of course, only an artistic comparison and not a scientific analogy" [Ibid.: 16]. Bakhtin's concepts of heteroglossia and carnival provide similar portraits of such a multiplicity of perspectives and discourses within a unified world. In "Discourse in the Novel," Bakhtin explains heteroglossia as the co -existence within any language of "socio -ideological contradictions between the present and the past, between differing epochs of the past, between different socio -ideological groups in the present, between tendencies, schools, circles and so forth" [Bakhtin 1981: 291]. This heteroglot language is dialogized when any one language is viewed from the perspective of another and thereby enters into "a critical interanimation of languages" [Bakhtin 1981: 296;Morson Emerson 1990: 143]. As in the polyphonic novel, heteroglot language is captured in literary language (and especially novelistic rather than poetic language) as a complex unity, which "is not a unity of a single, closed language system, but is rather a highly specific unity of several 'languages' that have established contact and mutual recognition with each other" [Bakhtin 1981: 295]. In Rabelais and His World, Bakhtin develops a portrait of the carnival as a reversal of the traditional medieval spectacle and as such not a spectacle of observers but of participants: "Footlights would destroy a carnival, as the absence of footlights would destroy a theatrical performance. Carnival is not a spectacle seen by the people; they live in it, and everyone participates because its very idea embraces all the people" [Bakhtin 1984b: 7]. 
The carnival too is a complex unity, a community of diverse participants formed in opposition to traditional social structures: "In the framework of class and feudal political structure this specific character could be realized without distortion only in the carnival and in similar marketplace festivals. They were the second life of the people, who for a time entered the utopian realm of community, freedom, equality, and abundance" [Ibid.: 9]. The Quest for Political Unity Burke's work provides a similar counterpoint to authoritarian and hegemonic discourses-a counter -spectacle to oppose the official discourses that promulgate and perpetuate a sense of false unity that marginalizes and silences diverse perspectives and potentially conflicting voices. Burke was well aware of both the threat and the persuasive force of these official discourses, most horrifically and graphically illustrated in Hitler's Mein Kampf, which asserts a false unity defined as German racial superiority and uniformity; marked by contempt and even hatred for other ethic and political groups; sustained by the State as guardian of the community so defined; and most prominently displayed in the spectacle of mass meetings, with unity imposed by force of arms [Hitler 1939]. Hitler deplores the lack of unity of the German people and imagines what a unified Germany might have become: "If, in its historical development, the German people had possessed this group unity as it was enjoyed by other peoples, then the German Reich would today probably be the mistress of this globe" [Ibid.: 598]. He believes that such a unity must be pure, however, and uncontaminated by other racial groups: "Every race -crossing leads necessarily sooner or later to the decline of the mixed product, as long as the higher part of this crossing still exists in some racially pure unity" . This drive toward German unity is motivated by his contempt for other ethnic and racial groups, especially Jews, and it motivates also his hostility toward his political opponents, including the bourgeoisie, the Social Democratic Party, the Austrian Parliament, and Marxist sympathizers. Hitler directs his most bitter contempt toward other ethnic and racial groups, especially as represented by Austria and its Jewish population: "I detested the conglomerate of races that the realm's capital manifested; all this racial mixture of Czechs, Poles, Hungarians, Ruthenians, Serbs, and Croats, etc., and among them all, like the eternal fission -fungus . . . of mankind-Jews and more Jews" [Ibid.: 160]. And the Jews, he asserts, are a race, not a nation: "The Jews were always a people with definite racial qualities and never a religion, only their progress made them probably look very early for a means which could divert disagreeable attention from their person" [Ibid.: 421]. But Hitler is also contemptuous of his political opponents for their opposition to German national unity. The bourgeoisie, including Hitler's own father, arose from the working classes, but the political parties that represented them resisted attempts to improve their working conditions and remedy their social abuses [Ibid.: 30 -32, 59 -60]. As a consequence, the bourgeois parties drove workers toward the Social Democratic Party, which Hitler despises for its "very fear of the actual raising of the workers from the depths of their present cultural and social misery" and for "its hostile attitude towards the fight for the preservation of the German nationality" [Ibid.: 51, 64]. 
The Austrian Parliament was both contentious and also hostile toward German nationalism, as represented by the Pan -German movement. It was "a gesticulating mass, shrieking in all keys, wildly stirred, presided over by a good -natured old uncle who, by the sweat of his brow, tried to re -establish the dignity of the House by violently ringing a bell and by alternately kind and earnest remonstrances" [Ibid.: 98]. It opposed the Pan -German movement, which did not have the support of either the masses or the Catholic Church, had to rely on Parliament for support, and thus lost its future: "As soon as the Pan -German movement, because of its parliamentary position, began to place the weight of its activity upon parliament instead of upon the people, it lost its future and won cheap successes of the moment" [Ibid.: 137]. Marxism had a broad international agenda and so, too, was aligned against German nationalism: "Therefore Marxism itself is nothing but the transmission, carried out by the Jew Karl Marx, of a long existing attitude and conception, conditioned by a view of life, to the form of a definite political creed: international Marxism" [Ibid.: 578 -79]. Its Jewish sympathizers were proponents of the "Marxist doctrine of irrationality," practitioners of a twisted form of the Marxist dialectic, and masters of "international capital" [Ibid.: 81, 331]. As such, they were not only racially inferior but also politically abhorrent. To promote his agenda of German national unity in the face of the forces that he perceived to be aligned against him, Hitler enlisted the State as his greatest ally and mass meetings as his most powerful weapon. The State he considered "not an assembly of commercial parties in a certain prescribed space for the fulfillment of economic tasks, but the organization of a community of physically and mentally equal human beings for the better possibility of the furtherance of their species as well as for the fulfillment of the goal of their existence assigned to them by Providence" [Ibid.: 195]. And the mass meeting he considered more powerful than the contentiousness and incessant wrangling of the bourgeoisie-both spectacles but of a very different sort. The mass meeting, he claims, "is necessary if only for the reason that in it the individual, who in becoming an adherent of a new movement feels lonely and is easily seized with the fear of being alone, receives for the first time the pictures of a greater community, something that has a strengthening and encouraging effect on most people" [Ibid. : 715]. The bourgeois meetings maintained a pretense of "mutual discussion" as a bridge to "mutual understanding," lest the world be offended by "the shameful spectacle of the internal German fratricidal quarrel . . . ugh!" [Ibid. : 727]. The Marxist mass meetings, in contrast, projected "a powerful appearance at least outwardly" and thus illustrated "how easily a man of the people succumbs to the suggestive charm of such a grand and impressive spectacle" [Ibid.: 731]. Inspired by the Marxist mass meetings, Hitler boasts about his own National Socialist meetings, with a unity of purpose imposed by force. 
The conduct of the meetings was authoritarian: "We did not ask anyone graciously to tolerate our lecture, and, from the beginning, no one was guaranteed an endless discussion, but it was simply stated that we were the masters of the meeting, that consequently we had the authority, and that everyone who would dare to make only so much as one interrupting shout, would mercilessly be thrown out" [Ibid.: 728]. It was also ruthless and brutal, as Hitler openly boasts: "The dance had not yet started when my Storm Troopers, that was their name from that day on, attacked. Like wolves, in groups of eight or ten, again and again they pounced upon their opponents and actually began to beat them out of the hall. Hardly five minutes had passed that I did not see one of them that was not covered with blood" [Ibid.: 748]. Such, as Hitler proudly proclaims, is the ugly spectacle of an intolerant, enforced, and false unity. The Problem of a False Unity Burke recognizes Hitler's quest for unity via German nationalism as his life's ambition and the central theme of Mein Kampf. Scholars have noted Burke's early formulations of concepts such as identification and scapegoating in his review of Mein Kampf and his opposition to the false unity asserted in times of war in The Philosophy of Literary Form [Burke 1973: 202 -7, 448 -50;George Selzer 2007: 201 -2;Weiser 2008: 60 -67;Wolin 2001: 126]. But the review may serve as well as a preface to the entire body of Burke's work-an exploration of the strategies for the imposition of a false unity for which his own quest for a unity that respects and encompasses diversity serves as a response-a counter -spectacle and antidote to Hitler's toxic life and work. 2 Burke begins his review of Mein Kampf with a cautionary note and then states the central theme of the book and maps the strategies that, as he cautions, can be so effective, even if driven by malice and prejudice. The cautionary note is an advisory to guard against the medicine man's poison: "Here is the testament of a man who swung a great people into his wake. Let us watch it carefully . . . to discover what kind of 'medicine' this medicine -man has concocted, that we may know, with greater accuracy, exactly what to guard against" [Burke 1973: 191]. The central theme is the sinister pursuit of a false unity: "Hitler found a panacea, a 'cure for what ails you,' a 'snakeoil,' that made such sinister unifying possible within his own nation" [Ibid.: 192]. And such a unifying panacea is so darkly and deeply sinister because it is not just a unity of like -minded people but also and especially a division from those who are different: "Men who can unite on nothing else can unite on the basis of a foe shared by all" [Ibid.: 193]. But for maximum effectiveness such an enemy must be one, not many, and Hitler selects "an 'international' devil, the 'international Jew'" as his unified enemy and the primary target of his vilification: "So, we have, as unifying step No. 1, the international devil materialized, in the visible, point -to -able form of people with a certain kind of 'blood,' a burlesque of contemporary neo -positivism's ideal of meaning, which insists upon a material reference" [Ibid.: 194]. And the Jews are vilified, especially, for their mastery of "Jew finance" and their twisted version of the Marxist "dialectics" [Ibid.: 197,204]. The unifying enemy thus identified, the unifying strategies inevitably follow as both divisive and unifying. 
Hitler's divisive strategies Burke characterizes as "inborn dignity," "projection," "symbolic rebirth," and "commercial use" . Inborn dignity is the natural superiority of the "Aryan" race and, consistent with Hitler's strategy of divisiveness, a presumption of the natural inferiority of other races, especially Jews [Ibid.: 202]. Projection is, in essence, scapegoating, the "purification by disassociation" from others and the loading of evils on the backs of those others [Ibid.: 202]. Symbolic rebirth is an aspect of the "projective device of the scapegoat" and the "doctrine of inborn racial superiority" [Ibid.: 203]. It is a "rebirth" in the sense that it provides "a 'positive' view of life" and a feeling of "moving forward, towards a goal" [Ibid.: 203]. Commercial use is "a noneconomic interpretation of economic ills" with the Jews, as the masters of international finance, as the chief villain and scapegoat [Ibid.: 204]. These divisive strategies intersect in Hitler's own twisted version of the Marxist Jews' alleged dialectics: "A people in collapse, suffering under economic frustration and the defeat of nationalistic aspirations, . . . have little other than some 'spiritual' basis to which they could refer their nationalistic dignity . . . of superior race" [Ibid.: 205]. Hitler's primary unifying strategy is the spectacle of his speeches at mass meetings with deliberate provocations and harsh retributions. Again, Hitler's strategy of divisiveness helps to explain and (at least in his own mind) justify his strategy of unification. The problem is the "'babel' of voices," best exemplified by the Viennese parliamentary "wrangle" [Ibid.: 200]. It is "the many conflicting voices of the spokesmen of the many political blocs" that had arisen "from the fact that various separatist movements of a nationalistic sort had arisen within a Catholic imperial structure formed prior to the nationalistic emphasis and slowly breaking apart" [Burke 1973: 200]. In contrast to this parliamentary wrangle, Hitler celebrates his own speeches of unification at his mass meetings. Against the wrangle, "we get a contrary purifying set; the wrangle of the parliamentary is to be stilled by the giving of one voice to the whole people" [Ibid.: 207]. That one voice is the key to the identification of the leader and the people: "Hitler's inner voice, equals leader -people identification, equals unity" [Ibid.: 207]. And that one voice produces the spectacle of the mass meetings, as Hitler boasts that he "would . . . fill his speech with provocative remarks, whereat his bouncers would promptly swoop down in flying formation, with swinging fists, upon anyone whom these provocative remarks provoked to answer" [Ibid.: 212 -13]. Such was "the power of spectacle" of the mass meetings as "the fundamental way of giving the individual the sense of being protectively surrounded by a movement, the sense of 'community'" [Ibid.: 217]. Against such a false spectacle, Burke develops strategies for a unity that respects and embraces difference and diversity. The Counter -Spectacle of Diversity in Unity Burke's strategies for diversity in unity offer a counterpoint and a counterforce against the spectacle of a false unity. Burke dismisses Aristotelian spectacle as mere costuming and Machiavellian spectacle as a strategy of a manipulative "'administrative' rhetoric" [Burke 1969a: 231;Burke 1969b: 158]. 
Instead, recognizing Hitler's strategy of identification with his people via a division from other peoples, he offers his counter -spectacle-a complex array of theories and principles that accept the reality of divisions and seek not to exclude but to embrace them in an overarching unity. Like Bakhtin's concepts of multivocality, these theories have various names and develop over time throughout the body of this work. They are broadly conceptual but also readily applicable to both political culture and everyday life. Burke scholarship has long recognized identification as a key concept in his work [Crusius 1999: 120 -21 As explained in A Rhetoric of Motives, identification can unite people by promoting their common interests, but it can also divide them by promoting an individual's or a group's own interests-though the distinction between one's own and others' interests and the degree of conscious deliberation are not always clear. Identification assumes division and an ambiguous line between the one and the other. It is thus an invitation to rhetoric-the art of persuasion-and as such it is also an invitation to manipulation, which can be either partially or wholly conscious and deceitful. Identification is a joining of one's interests with those of another: "A is not identical with his colleague, B. But insofar as their interests are joined, A is identified with B" [Burke 1969b: 20]. The identification of one with another does not, however, negate the other: "In being identified with B, A is 'substantially one' with a person other than himself. Yet at the same time he remains unique, an individual locus of motives. Thus he is both joined and separate, at once a distinct substance and consubstantial with another" [Ibid.: 21]. Such a conjoining of interests thus assumes a division: "Identification is affirmed with earnestness precisely because there is division. Identification is compensatory to division" [Ibid.: 22]. As such, it is an invitation to rhetoric: "If men were not apart from one another, there would be no need for the rhetorician to proclaim their unity" [Ibid.: 22]. But identification and division have a complex and ambiguous relationship and thus invite rhetoric but so also manipulation, whether or not deliberate and deceitful. The ambiguous relationship invites rhetoric: "Put identification and division ambiguously together, so that you cannot know for certain just where one ends and the other begins, and you have the characteristic invitation to rhetoric" [Ibid.: 25]. Rhetoric invites cooperation: it is "the use of language as a symbolic means of inducing cooperation in beings that by nature respond to symbols" [Ibid.: 43]. But it also invites manipulation, which might or might not be deliberate. It might be a manipulation that "we impose upon ourselves, in varying degrees of deliberateness and unawareness, through motives indeterminately self -protective and/or suicidal" [Ibid.: 35]. Or it can hover at "the edge of cunning": "A misanthropic politician who dealt in mankind -loving imagery could still think of himself as rhetorically honest, if he meant to do well by his constituents yet thought that he could get their votes only by such display" [Ibid.: 36]. Or it can be deliberately deceitful: "For if an identification favorable to the speaker or his cause is made to seem favorable to the audience, there enters the possibility of such 'heightened consciousness' as goes with deliberate cunning" [Ibid.: 45]. 
Thus the "wavering line between identification and division" is "forever bringing rhetoric against the possibility of malice and the lie" [Ibid.: 45]. Given the tension inherent in the relationship between identification and division, Burke increasingly turns to transcendence to address the problem of diversity in unity. But identification nonetheless serves as an initial step on the path to transcendence. As Burke observes retrospectively, "if identification includes the realm of transcendence, it has, by the same token, brought us into the realm of transformation, or dialectic" [Burke 1951: 203]. Like identification, transcendence is compensatory to division, or difference, but, unlike identification, transcendence seeks to escape manipulation and deceit by fully engaging, and respecting, multiple and potentially conflicting perspectives. Bryan Crable captures this difference in the phrase "transcendence by perspective" [Crable 2014: 4], which suggests that the perspectives are as much a counterpart to transcendence as division is to identification. In A Grammar of Motives, Burke offers an elaborate framework for the analysis of these multiple perspectives, which he calls "dramatism" [Burke 1969a: xxii]. Like Bakhtin, Burke was aware of the multiple per-spectives of relativity theory [Burke 1984: 310], and, like Bakhtin also, he seeks to bring these multiple perspectives and their interrelationships together in a complex unity that he calls transcendence [Burke 1969a: xv -xxiii, 3 -20, 125 -320, 420 -30, 503 -5;Burke 1969b: 53 -54, 181 -333]. Dramatism tracks human motives "in a perspective that, being developed from the analysis of drama, treats language and thought primarily as modes of action" [Burke 1969a: xxii]. Its key terms are act, scene, agent, agency, and purpose: In a rounded statement about motives, you must have some word that names the act (names what took place, in thought or deed), and another that names the scene (the background of the act, the situation in which it occurred); also, you must indicate what person or kind of person (agent) performed the act, what means or instruments he used (agency), and the purpose. [Ibid.: xv] A single, simple act can illustrate this complex of motives: "The hero (agent) with the help of a friend (co -agent) outwits the villain (counter -agent) by using a file (agency) that enables him to break his bonds (act) in order to escape (purpose) from the room where he has been confined (scene)" [Ibid.: xx]. But the individual motives that constitute the act are also complexly interrelated as, for example, scene -act, scene -agent, etc. . This complex mix of motives is evident in Burke's analysis of "the philosophic schools" and most notably in his appraisal of contemporary sociopolitical issues . Like Bakhtin's explanation of dialogized heteroglossia, Burke's analysis shows how one person's perspective on these issues can be viewed from another's: "And to consider A from the point of view of B is, of course, to use B as a perspective upon A" [Ibid.: 504]. The Marxists' "dialectical materialism," for example, is a mix of scene -act motives. On the one hand, it is primarily historical and material, hence scenic, since Karl Marx derived "the character of human consciousness in different historical periods from the character of the material conditions prevailing at the time" [Ibid.: 200]. 
On the other hand, The Communist Manifesto's insistence on revolution is clearly a motive to act: The Communists "openly declare that their ends can be attained only by the forcible overthrow of all existing social conditions," and the Manifesto's "entire logic is centered about an act, a social or political act, the act of revolution, an act so critical and momentous as to produce a 'rupture' of cultural traditions" [Ibid.: 207,209]. Hitler's promotion of the State as guardian of the community shows a similar mix of agency -purpose motives. On the one hand, its agency (pragmatism) can be read as a devious strategy of inducement to join the cause of German nationalism; on the other hand, its purpose (a crude form of mysticism) can be read as the pursuit of the ultimate goal of a quality social life: "Was it crass pragmatism (in using the philosophy of the State purely as a rhetoric for inducing the people to acquiesce in the designs of an elite) or crude mysticism (in genuinely looking upon the power and domination of the State as the ultimate end of social life)?" [Ibid.: 290]. Analyses of these "mutually related or interacting perspectives" will produce a "perspective of perspectives" and will pose a problem since any one person can see another only "from his particular position, or point of view, or in his particular perspective (necessarily a restricted perspective, since it represents but one voice in the dialogue, and not the perspective -of -perspectives that arises from the coöperative competition of all the voices as they modify one another's assertions, so that the whole transcends the partiality of its parts)" [Ibid.: 89,503]. Burke addresses this problem via "the Socratic transcendence" of the early Platonic dialogues [Ibid.: 420 -30], which provide a model for his own version of dialectical transcendence. As Crable observes, however, Burke's concept of transcendence also has other meanings [Crable 2014: 6 -10, 18 -25]. In his early formulation in Attitudes Toward History, transcendence is an individual's resolution of differences of perspective within him/herself, such as differences in values, differences between the self and the community, or differences between the sleeping and the waking self [Ibid.: 8 -9]. 3 Such a resolution is both natural and curative, as a way of "creating unity from the divisive materials of human experience" [Ibid.: 9]. It is effected via a conceptual process of symbolic bridging: "When objects are not in a line, and you would have them in a line without moving them, you may put them into a line by shifting your angle of vision" [Burke 1984: 224]. This conceptual shifting permits an individual to reconcile differences in perspective via transcendence, the adoption of a new perspective that resolves the differences: "When approached from a certain point of view, A and B are 'opposites.' We mean by 'transcendence' the adoption of another point of view from which they cease to be opposite" [Ibid.: 336]. In Burke's late formulation in an essay on Ralph Waldo Emerson in Language as Symbolic Action, transcendence is fundamental to the human condition and is "an ever -present feature of human symbol -use" [Crable 2014: 21]. 
It is a dialectical process whereby symbol use is always reaching for something beyond itself: "Whether there is or is not an ultimate shore towards which we, the unburied, would cross, transcendence involves dialectical processes whereby something HERE is interpreted in terms of something THERE, something beyond itself" [Burke 1966: 200]. As such, transcendence as symbol -use is always open -ended and, as Bakhtin might say, "unfinalized" [Bakhtin 1984a: 12]. Given these limitations, Burke's solution to the problem of diversity in unity lies in neither of these early or late formulations but in his concept of dialectical transcendence as previewed in A Grammar of Motives and developed more fully in A Rhetoric of Motives. [Burke 1969a: 420 -30, 503 -5;Burke 1969b: 53 -54, 181 -333;Crable 2014: 10 -18;Crusius 199: 179 -82;Weiser 2008: 106 -8, 130 -34;]. Dialectical transcendence is "dialectical" because it derives a complex unity from the interplay of multiple, potentially competing perspectives. As Crable explains, "the twin movements of the 'Upward Way' and the 'Downward Way' . . . trace the movement from a plurality of competing voices, through increasing levels of abstraction, to the arrival at a new, unifying principle" [Crable 2014: 14]. In A Rhetoric of Motives, as in the earlier Grammar, these competing voices are not only individual but also broadly sociopolitical and ideological. They are the voices of "the Scramble, the Wrangle of the Market Place, the flurries and flare -ups of the Human Barnyard, the Give and Take, the wavering line of pressure and counterpressure, the Logomachy, the onus of ownership, the Wars of Nerves, the War" [Burke 1969b: 23]. And they are resolved via a dialectical process that leads, or can lead, to transcendence, as envisioned originally by Plato and proceeding thus: First, the setting up of several voices, each representing a different 'ideology,' and each aiming rhetorically to unmask the opponents; next, Socrates' dialectical attempt to build a set of generalizations that transcended the bias of the competing rhetorical partisans; next, his vision of the ideal end in such a project; and finally, his rounding out the purely intellectual abstractions by a myth, in this case the chiliastic vision. The myth would be a reduction of the 'pure idea' to terms of image and fable. By the nature of the case, it would be very limited in its range and above all, if judged literally, it would be 'scientifically' questionable. [Ibid.: 200] This scientifically questionable myth would not necessarily represent a point of closure, however, since it "might then be said to represent a forward -looking partisanship, in contrast with the backward -looking partisanship of the 'ideologies'" and thus might serve as a point of departure that seeks "a new dialectic by a method that transcended the partiality of both the ideologies and the myth" [Ibid.: 200]. This forward/backward -looking partisanship is significant because it illustrates the transformative quality of the dialectical process of transcendence, a process that is both upward and downward. On the one hand, the upward journey produces, or can produce, a momentary unity of the competing ideologies; on the other hand, the downward journey shows how those competing ideologies have been transformed by the upward journey: Here are the resources of the Upward Way, by the via negativa, with the possible reversal of direction, a returning to the flatlands in a Downward Way. 
(On the return the system will contain a principle of transcendent unity which was reached at the culmination of the way up, and henceforth pervades all the world's disparate particulars, causing them to partake of a common universal substance.) [Ibid.: 311] The competing ideologies themselves may thus be transformed by the dialectical process of transcendence, and the individual perspectives may be revisited from the broader point of view of the "perspective -of -perspectives" [Burke 1969a: 89]. Thus is achieved an "ultimate identification" that avoids the potential for manipulation and deceit in identification conceived as merely a joining of interests via persuasion [Burke 1969b: 328, 333]. But dialectical transcendence as an ultimate identification is an ideal. The reality is that dialectic is a recursive and iterative process that may but does not necessarily always lead to a transcendent unity of diverse perspectives, a diversity in unity. In the Rhetoric, Burke explains how dialectic may build an attitude of openness toward a transcendent unity even if not always an immediate and positive outcome. In the context of the parliamentary wrangle, a "'dialectical' order" may lead to mere concession and compromise and so may "leave the competing voices in a jangling relation with one another" . But an "'ultimate' order" might place these competing voices "in a hierarchy, or sequence, or evaluative series" so that the voices might be motivated by a "guiding idea" or "unitary principle," and thus the "somewhat formless parliamentary wrangle" might be "creatively endowed with design" . The voices might not accept this design, but it might nonetheless have the "contemplative effect" of reorienting the voices toward the struggles of politics and more open to the possibility of compromise [Ibid.: 188]. In a similar vein, in the later "Linguistic Approach to Problems of Education," Burke proposes "an educational ladder" whereby students might be led from indoctrination to exposure to others' views for the purpose of combatting them to genuine appreciation for those views to engagement with those views as voices in a dialogue [Burke 1955: 283]. At this fourth stage in the process, all voices deemed to be relevant to discussion of an issue would need to be represented as ably as possible, not merely for the purpose of "fair play" but so that "the various voices, in mutually correcting one another, will lead toward a position better than any one singly" . At the least, at this stage, the voices in the dialogue would be affected by the other voices, would learn from them, and might thereby correct or enrich their own beliefs. Beyond these ongoing and thus incomplete dialectical processes, transcendence as an aesthetic experience may also be possible but may be only momentary and fleeting. Gregory Clark provides an instance of "aesthetic transcendence" that succeeds, if only momentarily, where dialogue fails [Clark 2014: 180]. This instance is a conflict that he observed in workshop with a jazz sextet wherein a saxophonist and a trumpeter were at odds in their approach to jazz music. The saxophonist was a jazz historian, the trumpeter a proponent of innovation and experimentation. The two were at odds throughout the workshop until a public performance, the two successively improvising and then accompanying each other, with a momentarily satisfying result that nonetheless left them as much as ever at odds with each other. 
Such a process of aesthetic transcendence, Clark argues, might not be as permanent as a transcendence achieved dialogically, but it might succeed, at least momentarily, where dialogue fails. Transcendence as diversity in unity is also merely illusory when it is the product of a forced and false unity, as Burke illustrates in "Towards Helhaven: Three Stages of a Vision," a mock, satiric portrait of an Earth colony on the moon [Burke 1971]. Burke remained committed and engaged in sociopolitical issues throughout his life and, not least, in his later years, in problems of technology and the natural environment. 4 Burke perceived technology and nature as almost entirely, though not inescapably, at odds with each other since technology is both in nature and at odds with it: "The various toxic waste dumps are in nature; all Counter-Nature (much of it advantageous) is in nature. It is 'unnatural' only in the sense that, thanks to the symbol-guided 'labors' of Technology, we have altered the nature of our environment as no other animal's mere 'presence' in the world has been remotely able to do" [Burke 1984: 426-27]. But technology per se is not the villain. The problem is rather "Technologism" and the mutual entanglements between technology and social, political, and economic structures [Burke 1972: 53]. Technologism is the use of technology to solve the problems created by technology: "As distinct from mere technology, 'Technologism' would be built upon the assumption that the remedy for the problems arising from technology is to be sought in the development of ever more and more technology" [Ibid.: 53]. And technology, moreover, is situated within a complex sociopolitical environment, "as all of us in the United States today share, however variously, the situation characterized by the present conditions of technology, finance, and sociopolitical unrest" [Burke 2016: 263]. Burke responds to this technology-nature split with satire, by engaging "the entelechial principle" but doing so "perversely, by tracking down the possibilities or implications to the point where the result is a kind of Utopia-in-reverse" [Burke 1974: 315]. In "Towards Helhaven," he envisions this Utopia-in-reverse in the form of an imaginary community that brings technology and nature together in a false unity-false because it is merely asserted and not developed via the dialectical process that leads both upward and downward to a true transcendence and a diversity in unity. "Towards Helhaven" portrays an Earth that is consumed with "Hypertechnologism," industrial progress, massive energy consumption, depletion of natural resources, and global pollution to the point that a radical revolution would be required before "the adventurous ideals of exploitation that are associated with modern industrial, financial, and political ambitions could be transformed into modes of restraint, piety, gratitude, and fear proper to man's awareness of his necessary place in the entire scheme of nature" [Burke 1971: 19].

4 "... time grappling with the meanings and ramifications of technological behavior. He recognized the waste and destruction that technologies entailed, and these problems constituted one of the central themes that Burke confronted throughout his writings: 'Big Technology'" [Hill 2009]. According to Hill, Burke's self-assigned role "in response to Big Technology was that of world-transforming critic aiming to counter the problems derived from 'Counter-Nature.'"
As an escape from this ravaged Earth, Helhaven offers a visionary lunar community that unites technology and nature in an idyllic paradise: HELHAVEN, the Mighty Paradisal Culture -Bubble on the Moon. Safer than any Sea Meadows venture (even under the Arctic ice). More nearly attainable than a Martian project, HELHAVEN, the Ultimate Colony, merging in one enterprise, both Edenic Garden and Babylonic, Technologic Tower. And paradox of paradoxes: This Final Flight will have been made possible by the very conditions which made it necessary. [Ibid.: 21]. But this technology -nature unity is the product of Burke's satiric imagination and mere assertion and thus provides only a mock image of a true transcendence. Moreover, this imagined unity is evidently also a satiric mock imitation of the dialectic's Upward and Downward Ways-a mockery because it leads ever outward and upward and thus forecloses the downward way that revisits diverse perspectives from the perspective -of -perspectives of a transcendent unity: But in any case, let there be no turning back of the clock. Or no turning inward. Our Vice -President has rightly cautioned: No negativism. We want AFFIRMATION-TOWARDS HELHAVEN. ONWARD,OUTWARD,and UP! [Ibid.: 25] This satiric portrait of a Utopia -in -reverse nonetheless suggests a path to a true transcendence: a dialectical process that engages and respects multiple perspectives with hope that the competition among conflicting perspectives might lead to a broader perspective -of -perspectives and thereby a true diversity in unity. Dialectical transcendence is not a simple solution to the problem of unity in political culture. It is a process that sometimes yields only limited positive outcomes, is sometimes momentary and fleeting, and may also be a false image of a true unity. These limitations notwithstanding, dialectical and perhaps also aesthetic transcendence offer promise of a true diversity in unity-a counter -spectacle to challenge the spectacle of false unities imposed by force or effected by partially or wholly deliberate deceit and manipulation. Political unity born of nationalism and partisanship is a false unity that promotes marginalization, stokes the flames of hatred of ethnic and gendered minorities, and produces spectacles as appearances of unity that veil an underlying refusal to respect difference and diversity and engage others in meaningful political discourse and mutual accommodation. Bakhtin's portraits of multivocality are fundamental dialogic principles disguised as literary criticism, and Burke's counter -spectacle is an aggregation of principles and procedures that may seem remote and abstract but nonetheless provides some basic and essential lessons for a political discourse that embraces differences in the interest of a more genuine and lasting unity that preserves and respects but transcends individual and partisan interests.
2021-05-22T00:05:11.125Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "913596a9bcae9045f69511a3634bdc1123b4b0eb", "oa_license": null, "oa_url": "https://doi.org/10.22455/2541-7894-2020-9-151-173", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "f3c98af71480d4018613f3707e8a02df19154a99", "s2fieldsofstudy": [ "Art" ], "extfieldsofstudy": [ "Art" ] }
248339112
pes2o/s2orc
v3-fos-license
Assessment of the feed additive consisting of Lactococcus lactis DSM 11037 for all animal species for the renewal of its authorisation (Chr. Hansen A/S)

Abstract
Following a request from the European Commission, EFSA was asked to deliver a scientific opinion on the assessment of the application for renewal of Lactococcus lactis DSM 11037, a technological additive to improve ensiling of forage for all animal species. The applicant has provided evidence that the additive currently on the market complies with the existing conditions of authorisation. There is no new evidence that would lead the FEEDAP Panel to reconsider its previous conclusions. Thus, the Panel concludes that the additive remains safe for all animal species, consumers and the environment under the authorised conditions of use. Regarding user safety, the additive is not a skin or eye irritant but should be considered a respiratory sensitiser. In the absence of data, the Panel cannot conclude on the skin sensitisation potential of the additive. There is no need for assessing the efficacy of the additive in the context of the renewal of the authorisation.

Additional information
The additive Lactococcus lactis DSM 11037 is currently authorised for use in feed for all animal species in the European Union (1k2081). 3 EFSA issued one opinion on the safety and efficacy of this product when used in forages for all animal species (EFSA FEEDAP Panel, 2011).

2. Data and methodologies
Data
The present assessment is based on data submitted by the applicant in the form of a technical dossier 4 in support of the authorisation request for the use of Lactococcus lactis DSM 11037 as a feed additive. The European Union Reference Laboratory (EURL) considered that the conclusions and recommendations reached in the previous assessment are valid and applicable for the current application. 5

Methodologies
The approach followed by the FEEDAP Panel to assess the safety and the efficacy of the active substance is in line with the principles laid down in the relevant Regulation.

Assessment
The product consisting of viable cells of L. lactis DSM 11037 is authorised as a technological additive (functional group: silage additives) for use in forages for all animal species. This assessment regards the renewal of the authorisation of L. lactis DSM 11037 for the above-mentioned species.

Characterisation of the additive
The product consists of ~30% bulk (range 22-40%) containing the active agent and a cryoprotectant, 8% silica as an anticaking agent and 52-70% maltodextrin as a carrier. The information submitted on the manufacturing process indicates that the content of the anticaking agent has increased from 2% to 8% compared with the information submitted when the first authorisation was granted. The additive is currently authorised with a minimum content of the active agent Lactococcus lactis DSM 11037 of 5 × 10^10 colony forming units (CFU) per gram of additive. Analysis of five recent batches showed compliance with the specifications (mean: 4.28 × 10^11 CFU/g additive, range 3.8-4.7 × 10^11 CFU/g additive). 7 Specifications are set for coliforms, yeasts and filamentous fungi (< 1,000 CFU/g), Salmonella spp. (no detection in 25 g) and Escherichia coli (< 10 CFU/g). 8 Analysis of the above-mentioned batches of the additive showed compliance with these limits, 7 with the exception of Salmonella spp. detection, for which the Panel notes that, despite the specification, only data from analyses of 5 g instead of 25 g were provided for three batches. 9
Three recent batches were tested for aflatoxin B1, mercury (Hg), lead (Pb), cadmium (Cd) and arsenic (As) concentrations. Results showed the following mean values: 0.0058 mg Hg/kg (range: 0.0040-0.0077 mg/kg), 0.024 mg Pb/kg (range: 0.022-0.026 mg/kg), 0.039 mg Cd/kg (range: 0.035-0.045 mg/kg) and 0.033 mg As/kg (range: 0.028-0.036 mg/kg). 10 Aflatoxin B1 in all batches was below the limit of quantification (0.46 µg aflatoxin B1/kg). 10,11 The detected amounts of the above-described impurities do not raise safety concerns. In order to establish the impact of the change in the amount of anticaking agent on the physico-chemical properties of the additive, the applicant has provided new data on dusting potential and particle size distribution. The dusting potential was measured in three recent batches (mean 1,178 mg/m³; range 1,010-1,300 mg/m³). 12 Results on the particle size distribution by laser diffraction showed that ~47% of the additive consists of particles with diameters below 100 µm, 32% below 50 µm and 8% below 10 µm. 13 No other new data were provided regarding the physico-chemical properties or stability of the additive. Since the changes introduced in the additive are not expected to have a significant effect on these characteristics, the data described in the previous opinion (EFSA FEEDAP Panel, 2011) still apply.

Characterisation of the active agent
The active agent is deposited in the Deutsche Sammlung von Mikroorganismen und Zellkulturen (DSMZ) with accession number DSM 11037. 14 The taxonomical identification of the strain was confirmed by whole genome sequence (WGS) analysis. The bacterial strain was subjected to antimicrobial susceptibility testing using the broth microdilution method and including the set of antimicrobials recommended by EFSA (EFSA FEEDAP Panel, 2018). 17 All the minimum inhibitory concentration values were equal to or below the corresponding EFSA cut-off values for L. lactis. Therefore, the strain is considered to be susceptible to all the relevant antimicrobials. The WGS of the strain was interrogated for the presence of antimicrobial resistance genes. No hits of concern were identified.

Conditions of use
The additive is currently authorised for use in silage for all animal species. Under other provisions of the authorisation, it is specified that:
• in the directions for use of the additive and premixture, indicate the storage temperature and storage life.
• minimum dose of the additive when used without combination with other microorganisms as silage additives: 1 × 10^8 CFU/kg fresh material.
• for safety: it is recommended to use breathing protection and gloves during handling.
The applicant has requested to maintain the same conditions of use.

Safety for the target species, consumers and environment
In the previous opinion, the Panel concluded that, following the Qualified Presumption of Safety (QPS) approach to safety assessment, the use of this strain in the production of silage was considered safe for target species, consumers and the environment (EFSA FEEDAP Panel, 2011). In the context of the current application, the identity of the strain as Lactococcus lactis was confirmed and evidence was provided that the strain does not show acquired antimicrobial resistance determinants for antibiotics of human and veterinary importance. Consequently, the conclusions already reached are still valid and Lactococcus lactis DSM 11037 is considered safe for the target species, consumers and the environment.
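As a rough orientation only (an illustrative back-calculation based on the figures quoted above, not a value stated in the dossier, and assuming application at exactly the minimum dose and the minimum guaranteed content): at 5 × 10^10 CFU per gram of additive, reaching the minimum dose of 1 × 10^8 CFU/kg fresh material corresponds to 1 × 10^8 ÷ 5 × 10^10 = 2 × 10^-3 g of additive per kg of forage, i.e. about 2 g of additive per tonne of ensiled fresh material.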
Safety for the user In the previous assessment (EFSA FEEDAP Panel, 2011), the Panel concluded regarding user safety: 'Given the proteinaceous nature of the active agent and in the absence of evidence to the contrary, the additive should be considered to have the potential to be a skin and respiratory sensitiser'. The applicant submitted a total of four studies to address the safety for the user: three on skin irritation (two in vitro and one in vivo) and one on eye irritation. The skin irritation potential of the additive was tested in vitro under GLP principles and according to OECD TG 439. 20,21 A first study did not allow to assign a definitive UN GHS category (Category 1 or Category 2). 20 A second study showed no skin irritation potential under the test conditions chosen (UN GHS Classification 'No Category'). 21 The in vivo skin irritation potential of the additive was tested in a valid study performed according to OECD TG 404, which showed that it is not a skin irritant. 22 The eye irritation potential of the additive was tested in vitro in a study under GLP principles performed according to OECD TG 492, which showed that is not an eye irritant (UN GHS Classification 'No Category'). 23 No information on skin sensitisation was provided; therefore, the FEEDAP Panel cannot draw conclusions on the skin sensitisation potential of the additive. The applicant declares that no adverse effects on the health of manufacturing workers or users of the additive have been reported since the approval of the additive. 24 Extensive Literature Search The applicant performed a literature search to provide evidence that the additive remains safe under the approved conditions for target species, consumers, users and the environment. 25 The literature search was conducted in PubMed and EBSCO EDS (included the databases Academic Onefile, Food Science Source and AGRIS) and covered the period 2010-2020. 26,27 Search terms included the active agent and aspects related to the safety for animals, humans and the environment. The literature search retrieved a total of 26 results that were full text screened, from which three were further included in the review process but none considered relevant for the herein application because they referred either to other strain (two hits) or to biogenic amine production (one hit). Conclusions on safety The FEEDAP Panel concludes that there is no new evidence that would lead it to reconsider the previous conclusions that Lactococcus lactis DMS 11037 is safe for the target species, consumers and the environment under the authorised conditions of use. The additive is not a skin and eye irritant, but it is considered a respiratory sensitiser. No conclusions can be drawn on the potential of the additive to cause skin sensitisation. Efficacy The present application for renewal of the authorisation does not include a proposal for amending or supplementing the conditions of the original authorisation that would have an impact on the efficacy of the additive. Therefore, there is no need for assessing the efficacy of the additive in the context of the renewal of the authorisation. Conclusions Based on the QPS approach to safety assessment, Lactococcus lactis DSM 11037 is presumed safe for the target species, consumers and the environment. There is no new evidence that would lead the FEEDAP Panel to reconsider its previous conclusions. Thus, the Panel concludes that the additive remains safe for all animal species, consumer and the environment under the authorised conditions of use. 
The additive is not an eye or skin irritant but should be considered a respiratory sensitiser. In the absence of data, the FEEDAP Panel cannot draw conclusions on the skin sensitisation potential of the additive. There is no need for assessing the efficacy of the additive in the context of the renewal of the authorisation.

5. Documentation provided to EFSA/Chronology
2022-04-23T15:15:55.729Z
2022-04-01T00:00:00.000
{ "year": 2022, "sha1": "ca1f3e2c77c22e9a68acd84866c6f72cb4940c72", "oa_license": "CCBYND", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "5e3514cffb28ec1f605e9fe7149b5a96690e83d0", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
54476647
pes2o/s2orc
v3-fos-license
Circadian Expression of Migratory Factors Establishes Lineage-Specific Signatures that Guide the Homing of Leukocyte Subsets to Tissues Summary The number of leukocytes present in circulation varies throughout the day, reflecting bone marrow output and emigration from blood into tissues. Using an organism-wide circadian screening approach, we detected oscillations in pro-migratory factors that were distinct for specific vascular beds and individual leukocyte subsets. This rhythmic molecular signature governed time-of-day-dependent homing behavior of leukocyte subsets to specific organs. Ablation of BMAL1, a transcription factor central to circadian clock function, in endothelial cells or leukocyte subsets demonstrated that rhythmic recruitment is dependent on both microenvironmental and cell-autonomous oscillations. These oscillatory patterns defined leukocyte trafficking in both homeostasis and inflammation and determined detectable tumor burden in blood cancer models. Rhythms in the expression of pro-migratory factors and migration capacities were preserved in human primary leukocytes. The definition of spatial and temporal expression profiles of pro-migratory factors guiding leukocyte migration patterns to organs provides a resource for the further study of the impact of circadian rhythms in immunity. In Brief Leukocytes continuously circulate throughout the body. He et al. demonstrate that trafficking patterns of major leukocyte subsets occur in a rhythmic manner dependent on the timeof-day-dependent expression of lineageand tissue-specific factors. This influences the inflammatory response and leukemic tumor burden and translates to the migration behavior of human primary lymphocytes. INTRODUCTION Leukocytes exit the blood by undergoing extensive interactions with endothelial cells. This sequence of events is known as the leukocyte adhesion cascade (Butcher, 1991;Ley et al., 2007;Muller, 2016;Springer, 1994;Vestweber, 2015;Wagner and Frenette, 2008). Circulating leukocytes first tether along endothelial cells by engaging P-selectin glycoprotein ligand-1 (PSGL-1) with E-and P-selectin presented on the endothelium. This process brings the cells in closer proximity to the vessel wall and slows them down to roll along endothelial cells, using PSGL-1 as well as L-selectin to interact with endothelial selectins. During this step, leukocytes come in contact with chemokines presented on the endothelial cell surface. Chemokines engage chemokine receptors on leukocytes, leading to Gai-mediated inside-out signaling of integrins. Integrins extend into a high-affinity conformation and mediate the firm adhesion of leukocytes. Lymphocyte function-associated antigen-1 (LFA-1) (CD11a/CD18 or aLb2 integrin), macrophage-1 antigen (Mac-1) (CD11b/CD18 or aMb2 integrin), and very late antigen-4 (VLA-4) (CD49d/CD29 or a4b1 integrin) play a major role in this step and interact with members of the immunoglobulin superfamily on endothelial cells, primarily intercellular adhesion molecule-1 (ICAM-1), ICAM-2, and vascular cell adhesion molecule-1 (VCAM-1). In the final step, adherent leukocytes crawl along the vessel wall, probe for adequate sites for crossing the endothelium, and emigrate from the vascular lumen into the parenchyma in a process termed transmigration. The requirement of specific molecules in the leukocyte emigration process is highly dependent on the tissue context and the leukocyte subset involved (Schnoor et al., 2015). 
Interactions between receptor-ligand pairs of pro-migratory molecules governing the subset-specific migration process of leukocytes to specific organs have been referred to as a homing code (Marelli-Berg et al., 2008; Rot and von Andrian, 2004; Springer, 1994). Although some of these molecular binding partners are known and have been discussed above, the trafficking requirements of many leukocyte subsets are unclear. This is particularly true for the steady state because leukocyte infiltration into organs has mostly been studied in inflammatory scenarios. Recent data point to the influence of time of day on the number of leukocytes present in the circulation (Casanova-Acebes et al., 2013; Druzd et al., 2017; Nguyen et al., 2013; Scheiermann et al., 2012; Shimba et al., 2018; Suzuki et al., 2016). These circadian rhythms, occurring within a period of approximately 24 h, are critical in aligning the body to the usual recurring cycles of the environment (Arjona et al., 2012; Curtis et al., 2014; Dibner et al., 2010; Labrecque and Cermakian, 2015; Man et al., 2016; Scheiermann et al., 2018; Scheiermann et al., 2013). The number of leukocytes circulating in blood is largely dependent on two factors: mobilization into blood from organs such as bone marrow, which increases cellularity in blood (input); and emigration from blood into organs, decreasing cellularity in blood (output). Here, we investigated the hypothesis that leukocyte subsets migrate to organs at specific times of the day. By employing this diurnal rhythmicity as a functional screening tool in combination with a systematic approach of adoptive transfer and homing assays, we detected a time-resolved code of pro-migratory factors for the specific migration behavior of leukocyte subsets to organs. Lineage-specific genetic ablation of the circadian clock demonstrated that endothelial-cell- and leukocyte-autonomous oscillations are critical in these processes. These rhythms are relevant in inflammation and determine leukemic tumor burden at specific times. Human primary leukocytes exhibit a similar time-resolved code of pro-migratory factors. The circadian patterns of expression of pro-migratory factors defined here present a resource for the further exploration of how the immune system has adapted to the recurring cycle of the environment and to the relevance of this adaptation in health and disease.

Rhythmic Emigration of Leukocyte Subsets from Blood
Circulating white blood cell (WBC) counts oscillate in murine blood such that they exhibit a peak 5 hr after the onset of light (also known as Zeitgeber time [ZT] 5, i.e., 5 hr after lights on [12 p.m.] in a 12 hr/12 hr light/dark environment) and a trough in the evening (ZT13 or 8 p.m., 1 hr after lights off) (Figure 1A). Numbers of neutrophils, B cells, CD4 and CD8 T cells, natural killer (NK) cells, NK T cells, eosinophils, and inflammatory and non-inflammatory monocytes showed similar peaks and troughs with a 2- to 7-fold change in numbers between the peak and trough depending on the subset (Figure 1A, Figure S1A, and data not shown). We investigated whether a rhythmic leukocyte emigration process could explain the observed oscillations in blood. As an initial screen, we performed "negative" homing assays, where 1 hr after adoptively transferring leukocytes intravenously (i.v.), we quantified the number of transferred cells remaining in the blood to assess emigration of cells across the whole organism.
To additionally investigate the influence of a rhythmic microenvironment in this process, we harvested donor cells at one time and transferred them simultaneously into recipients that were kept in shifted light cycles. We saw a clear diurnal rhythm given that the number of labeled donor cells remaining in the blood after transfer was lowest in an evening environment (ZT13) and highest in the morning (ZT1) for all investigated subsets (Figure 1B and Figure S1B). This demonstrated that in the evening more cells had left the blood, for example, by migrating into tissues or by firm contact with the vasculature, both of which effectively removed them from the circulation. Furthermore, it demonstrated a strong influence of rhythmicity in the microenvironment on leukocyte recruitment and numbers in blood in general. We next assessed the role of rhythmicity in leukocytes in this process. This time, donor cells were harvested from mice kept in shifted light cycles and simultaneously injected into recipient mice at one time of the day. In this scenario as well, "evening" cells showed the highest emigration behavior, and "morning" cells generally showed the lowest (Figure 1C). We confirmed these observations by performing reciprocal emigration assays where "morning" or "evening" cells were co-injected into "morning" or "evening" recipients, respectively, with differential color labeling (Figure S1C). These data thus demonstrated that both microenvironment and leukocytes co-contribute to rhythmic leukocyte exit from the circulation, a broad phenomenon that peaks in the evening for all subsets investigated.

Figure 1 (legend): (A) Total leukocyte and leukocyte subset counts over 24 hr. Zeitgeber time (ZT, time after light onset) 1 is double plotted to facilitate viewing (n = 9-62 mice; one-way ANOVA). WBC, white blood cell; NK, natural killer; IM, inflammatory monocyte; NIM, non-inflammatory monocyte. (B) Diagram of adoptive-transfer assay with rhythmic recipients. Shown are numbers of adoptively transferred donor cells present in the blood of recipient mice 1 hr after transfer over 24 hr. Data are normalized to ZT5 levels (n = 3-25 mice; one-way ANOVA). (C) Diagram of adoptive-transfer assay with rhythmic donors. Shown are numbers of adoptively transferred donor cells present in blood of recipient mice 1 hr after transfer over 24 hr. Data are normalized to ZT5 levels (n = 3-17 mice; one-way ANOVA). *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001. All data are represented as mean ± SEM. See also Figure S1.

Tissue-Specific Oscillations in Endothelial Cell Adhesion Molecules
Because the microenvironment is a strong driver of rhythmic leukocyte emigration from blood (Figure 1B), we performed a screen of multiple organs for oscillatory expression of adhesion molecules on endothelial cells, the initial points of contact for leukocytes in the emigration process. To achieve this, we harvested multiple organs (thymus, spleen, lymph node, liver, skin, gut, lung, and Peyer's patches) from mice over six time points of the day. We then performed quantitative fluorescence microscopy imaging assays on sections from each organ, which allowed us to minimize variability and compare expression patterns across tissues within the same mice at the same time. This approach yielded a highly tissue-specific temporal expression map for endothelial cell adhesion molecules (Figure 2A and Table S1). Integrating the profiles from all expressed molecules across all organs over time revealed a peak in expression in the evening (Figure 2B). This indicated that endothelial cells within the body (or at least within the eight organs assessed as proxy) had a distinctly higher leukocyte recruitment capacity at this time. This was in line with the negative homing data, which showed the highest leukocyte emigration from blood in the evening across the whole organism (Figure 1B). Specifically, ICAM-1 was expressed in every vascular bed analyzed, VCAM-1 was expressed in all but the skin, and both exhibited peaks in expression in the evening (Figure 2C). ICAM-2 was expressed in all organs except spleen and skin, whereas P-selectin was expressed in all investigated organs but the liver. Expression for both molecules peaked in the evening; however, this was not statistically significant (Figure S2A). In functional assays, we next focused on the molecules that showed oscillations and robust expression levels across organs because these molecules were likely to be critical in mediating rhythmic homing for many of the investigated subsets. Indeed, chronopharmacological blockade with an antibody directed against VCAM-1 in the morning or at night resulted in increased numbers of adoptively transferred cells in the circulation, signifying reduced tissue homing. This additionally ablated their day-night oscillation (Figures 2D and 2E). Blockade with an anti-ICAM-1 antibody increased numbers of neutrophils, T cells, eosinophils, and non-inflammatory monocytes and ablated their rhythmicity but had no effect on inflammatory monocytes (Figures 2D and 2E). This was confirmed genetically with Icam1-deficient recipients (Figures 2F and 2G). Blocking of ICAM-2, E-selectin, or P-selectin, on the other hand, had little or no effect on leukocyte cellularity or oscillations in blood (Figure 2D). Importantly, we generally observed a much more pronounced blocking effect when antibodies were administered in the evening than when they were administered in the morning, for both adoptively transferred and endogenous leukocyte populations (Figure 2E and Figure S2B). This established the functional importance of time of day and an oscillatory expression of endothelial cell adhesion molecules for the emigration of leukocyte subsets from blood.

Leukocyte Subset-Specific Oscillations in Pro-migratory Factors
Given that we had additionally identified rhythmicity in leukocytes to govern the emigration process (Figure 1C), we next screened blood leukocyte subsets for an oscillatory expression of adhesion molecules and chemokine receptors. Using flow-cytometry analyses across four to six time points of the day, we observed oscillations in adhesion molecules and chemokine receptors, which varied between subsets. Together, they provided a unique rhythmic molecular signature for each lineage (Figure 3A, Figures S3A-S3C, and Table S2). Focusing on the molecules that exhibited the broadest expression and most robust oscillation patterns, we performed functional blocking experiments by using antibodies or functional inhibitors. Numbers of adoptively transferred leukocytes increased in blood most prominently after injection of antibodies directed against CD49d (α4-integrin) or L-selectin (Figure 3B). In contrast, no effect was observed when PSGL-1 or the single β1- or β2-integrin subunits were blocked (Figure 3B and data not shown). Analogous to targeting endothelial cell adhesion molecules, we again observed a stronger effect when antibodies were administered in the evening than when they were administered in the morning (Figure 3B).
We next assessed the functional relevance of oscillatory expression of chemokine receptors on the surface of leukocyte subtypes. Pre-treatment of morning or evening cells with pertussis toxin before adoptive transfer blocked leukocyte homing and ablated its rhythmicity, indicating leukocyte chemokine receptors to be critically involved in this process (data not shown). Specifically, strong effects on numbers and oscillations of adoptively transferred and endogenous leukocyte populations were observed when AMD3100, an antagonist of CXCR4, was administered (Figures 3C-3E and Figure S3D). In this scenario, leukocyte oscillations ceased in all assessed subtypes with the exception of inflammatory monocytes (Figure 3C), which was also observed when cells were pre-treated with the antagonist ex vivo before adoptive transfer (Figure S3E). In contrast, blocking other chemokine receptors, including CXCR2 and CCR4 as well as CXCR3, CCR2, and CCR1, did not yield major effects (Figure 3C and data not shown). These data demonstrate the critical requirement of leukocyte adhesion molecules and CXCR4 in the rhythmic leukocyte migration process. In line with these findings, we observed an oscillation of the mRNA expression of Cxcl12, the CXCR4 ligand, in both bone marrow and the lung (Figure S3F). Of importance, this process could be blocked pharmacologically in a time-of-day-dependent manner through the targeting of pro-migratory factors on endothelial cells or leukocytes (Figure 3F and Figure S3G).

Diurnal Homing Capacity of Leukocyte Subsets to Specific Organs
We next investigated to which organs leukocyte subsets homed over the course of the day. Adoptive transfer of morning or evening cells into phase-matched morning or evening recipients, respectively, demonstrated more leukocyte trafficking to organs in the evening, in line with our data obtained from blood (Figure 4A and Figure S4A). This excluded excessive phagocytosis or death of leukocytes at specific times as a major contributor to the diurnal effects seen in blood in the employed short time frame of 1 hr. We confirmed this by performing reciprocal homing assays where we co-injected morning or evening cells into morning or evening recipients, respectively, by using differential color labeling (Figure S4B). Specifically, we observed more homing to bone marrow, lymph node, spleen, liver, and lung (Figure 4A and Figure S4A). We observed very little homing to other investigated tissues, such as skin, thymus, and gut, in the investigated time frame. To the lung, more homing of neutrophils, inflammatory monocytes, B cells, eosinophils, and CD8 T cells was observed (Figure 4A). To the bone marrow, more homing of neutrophils, B cells, inflammatory monocytes, and NK cells was seen (Figure S4A), and to the spleen, more homing of neutrophils, B cells, and NK cells was detected (Figure 4A).

Figure 2 (legend, continued): (B) Integration of all expressed molecules over all organs across the day (n = 3-6 mice with 6 time points measured each; one-way ANOVA). (C) Integration of ICAM-1 and VCAM-1 expression over all organs across the day (n = 3-6 mice with 6 time points measured each; one-way ANOVA). (D) Adoptive transfer of donor cells to recipients treated with functional blocking antibodies directed against the indicated molecules at ZT1 and ZT13. Results are presented as percentages of injected cells (n = 4-12 mice; one-way ANOVA followed by Dunnett comparison to control groups and unpaired Student's t test for comparisons between ZT1 and ZT13 groups).
Because homed cells could have either traversed the endothelium or remained adherent in the vasculature of the respective organ (which would both remove them from the circulation), we additionally assessed the specific location of transferred cells within tissues. To achieve this, we co-injected an anti-CD45 antibody just before perfusion and tissue harvest. This allowed us to distinguish between i.v.-CD45-labeled leukocytes (adherent cells) and non-labeled leukocytes (extravasated cells). In the liver and lung, the vast majority of transferred cells were present in the vasculature ( Figure 4B). In contrast, leukocytes in bone marrow, lymph node, and spleen had predominantly traversed the endothelium ( Figure 4B and Figure S4C). We confirmed these data by performing imaging analyses of organ whole mounts to visualize the precise location of cells in three dimensions (Figures 4C and 4D). This approach allowed us to additionally assess their locations with respect to organ-intrinsic structures, demonstrating that in the spleen more transferred cells were present in the red pulp than in the white pulp and that cells were extravascular in both areas ( Figure 4C and Figure S4D). Together, these data clearly demonstrate a leukocyte-subset-specific capacity in the rhythmic migration to distinct organs. Chronopharmacological Targeting of Leukocyte Homing to Tissues We next used the identified targets among pro-migratory factors to assess which leukocyte subset was dependent on which molecule to migrate to which tissue. We built on our previous observations on the time-dependent inhibition of leukocyte emigration from blood ( Figure 3F). We therefore performed the experiments in the evening (ZT13) to maximize the outcome of potential blocking effects and thus detect an influence of molecules that might not have been previously implicated in mediating the migration of leukocyte subsets to specific organs ( Figures 5A-5I). As expected for the bone marrow, anti-CXCR4 treatment had the overall strongest blocking effect on all investigated subtypes, with the exception of inflammatory monocytes (Figure 5A). Specifically, this treatment decreased numbers of extravasated cells but increased numbers of adherent leukocytes inside the vasculature, indicating a specific role in the extravasation process in this tissue (Figures 5B and 5I and Figure S5A). VCAM-1 inhibition, on the other hand, showed effects in the extravasation of CD4 and CD8 T cells and both the adhesion and extravasation of B cells given that for the latter, both extravascular and intravascular cell numbers were reduced ( Figure 5A and Figure S5A). Blocking ICAM-1 reduced numbers of extravasated B cells, neutrophils, and CD8 T cells ( Figure 5A). In the lymph node, we observed the most dramatic effect with an antibody directed against L-selectin, which reduced the numbers of all investigated subsets at the step of adhesion and extravasation ( Figures 5C and 5D and Figure S5B). CD11a and ICAM-1 blockade exhibited a similar, albeit slightly weaker effect, indicating their potential co-dependence in this tissue (Figures 5C and 5D and Figure S5B). In the spleen, blockade of L-selectin exhibited the strongest effect, particularly on CD8 T cells with additional effects on B cells, neutrophils, and CD4 T cells ( Figures 5E and 5F). 
Blocking CD11a exhibited specific effects on the ability of B cells to transmigrate given that the number of extravasated cells was reduced and the number of adherent cells was increased ( Figures 5C and 5D and Figure S5C). Anti-ICAM-1 treatment inhibited B cell and inflammatory monocyte immigration ( Figure 5E). In the lung, numbers of adherent leukocytes could be reduced by blockade of VCAM-1 or ICAM-1 (all subsets except neutrophils), CXCR4, CD49d, or CD11a (B cells, CD4 T cells, and IMs) ( Figure 5H). (B) Adoptive transfer of ZT1 and ZT13 donor cells to recipients treated with functional blocking antibodies directed against the indicated molecules at ZT1 and ZT13. Cell numbers are normalized to ZT1 and ZT13 controls (n = 3-12 mice; one-way ANOVA followed by Dunnett comparison to control groups and unpaired Student's t test for comparisons between ZT1 and ZT13 groups). (C) Adoptive transfer of donor cells to recipients treated with antagonists against the indicated molecules at ZT1 and ZT13 (n = 3-10 mice; one-way ANOVA followed by Dunnett comparison to control groups and unpaired Student's t test for comparisons between ZT1 and ZT13 groups). (D) Fold change of donor cells remaining in recipient blood at ZT1 and ZT13 after anti-VCAM-1 and anti-ICAM-1 antibody treatment, respectively, in comparison with numbers of isotype antibody controls. (n = 3 or 4 mice; one-way ANOVA followed by Dunnett comparison to control groups and unpaired Student's t test for comparisons between ZT1 and ZT13 groups). (E) Endogenous blood leukocyte numbers after CXCR4 antagonist treatment (n = 3 or 4 mice; one-way ANOVA followed by Dunnett comparison to control groups and unpaired Student's t test for comparisons between ZT1 and ZT13 groups). (F) Overview of functional blocking effects on adoptively transferred leukocyte subsets in blood targeting the indicated molecules at ZT1 and ZT13 (n = 3-12 mice; one-way ANOVA followed by Dunnett comparison to control groups). *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001; #, ##, ###, #### indicate significance levels analogous to those of control groups. All data are represented as mean ± SEM. ns, not significant. See also Figure S3 and Table S2. (D) Quantification of numbers and localization of total transferred cells is based on whole-mount imaging of organs (n = 3 mice; unpaired Student's t test). *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001. All data are represented as mean ± SEM. See also Figure S4. Together, these data demonstrate that pro-migratory factors on endothelial cells and leukocytes govern time-of-day-dependent migration, thereby identifying a circadian signature that determines leukocyte migration to tissues ( Figure 5J and Figure S5D). Lineage-Specific Clock Deficiency Ablates Migration Rhythms We next investigated the relevance of a functional clock in the rhythmic trafficking behavior of immune cells. We first focused on the microenvironment by using Cdh5 CreERT2 Bmal1 flox/flox mice to specifically delete the circadian gene brain and muscle Arnt-like protein-1 (Bmal1, also known as Arntl) in endothelial cells (Wang et al., 2010). Bmal1 is a core component of the cellular clockwork and the only single gene whose deficiency causes an ablation of circadian rhythmicity (Storch et al., 2007). Using these mice as recipients, we performed homing experiments and quantified the amount of adoptively transferred cells that remained in the blood. 
Interestingly, whereas control mice showed fewer transferred cells in the circulation in the evening, mice with Bmal1-deficient endothelial cells showed no difference between time points for all subsets examined (Figure 6A). We next investigated whether rhythmic homing to tissues was also ablated. Indeed, in the two organs displaying the strongest oscillations, the lung and liver, time-of-day differences were lost (Figure 6B). These observations were associated with strongly reduced evening expression of ICAM-1 and VCAM-1 in the liver and lung, respectively, of mice with Bmal1-/- endothelial cells (Figure 6C). This genetically demonstrates the relevance of oscillations in the microenvironment and indicates that within tissues, the endothelial-cell-specific clock plays a critical role in governing rhythmic leukocyte recruitment. To assess the influence of clocks in leukocytes in this phenomenon, we used Cd19Cre Bmal1flox/flox mice as donors to evaluate the homing capacity of clock-deficient B cells. Transferred Bmal1-/- B cells no longer exhibited time-dependent homing to the spleen or lymph nodes of wild-type recipients (Figure 6D). In addition, Bmal1-/- B cells displayed diminished oscillations in the clock gene Nr1d1 (Rev-Erbα) (Figure 6E) and reduced surface amounts of CD11a and CD49d (Figure 6F), as well as CCR7 and CXCR5 (but not CXCR4 or L-selectin) (Figure S6A and data not shown). Using Lyz2Cre Bmal1flox/flox mice to target the clock in myeloid cells, we also observed a lack of oscillations in the migration behavior of donor neutrophils to the spleen of wild-type recipients (Figure 6G). Bmal1-/- neutrophils (Figure 6H) and monocytes (Figure S6B) displayed altered Nr1d1 expression. Neutrophils exhibited lower expression of PSGL-1 (Figure 6I), whereas monocytes showed altered amounts of L-selectin (Sell), CCR2, and CD18 integrin (Figures S6B and S6C). Together, these data genetically demonstrate that both endothelial cell and leukocyte clocks are critically required for the rhythmic homing process, regulating the expression of pro-migratory factors.

Relevance of Rhythmic Leukocyte Trafficking in Inflammation and Leukemia

We next explored the relevance of oscillatory leukocyte trafficking for the immune response by using a systemic inflammatory challenge with intraperitoneally injected lipopolysaccharide (LPS). After acute stimulation, leukocyte counts in blood exhibited a dramatic drop, but time-of-day differences, with lower numbers in the evening for all investigated subsets, were preserved (Figure 7A). This indicates the importance of rhythmic leukocyte migration for the strength of the immune response given that, indeed, tissue infiltration into the peritoneal cavity was rhythmic (Figure S7A). In addition, the administration of antibodies directed against VCAM-1, ICAM-1, or CD49d was able to block this effect in a subset-specific manner, whereas anti-CD11a treatment exhibited no effect. Even after inflammatory challenge, antibodies exerted a stronger inhibitory effect on leukocyte emigration from blood at night (Figures 7A and 7B). Together, these data indicate the relevance of oscillatory leukocyte migration in determining the strength of the immune response. We additionally explored a disease model of leukemia, in which tumor burden is measured in blood, using both a syngeneic and a xenogeneic model of acute myeloid leukemia (AML) and B cell acute lymphoblastic leukemia (B-ALL).
In the syngeneic model, CD45.1+ wild-type mice were injected i.v. with 5 × 10^6 CD45.2+ C1498 (AML) or BS50 (B-ALL) cells either in the morning (ZT1) or in the evening (ZT13). After 1 week, we measured numbers of circulating AML or B-ALL blasts at midday on the basis of CD45.2 expression, allowing assessment of the influence of the time of day of administration only, and not of harvest. Interestingly, the rate of engraftment, defined as more than one blast per microliter, was much higher in the evening for both models (5/16 in the morning versus 13/16 in the evening), and circulating blasts were significantly higher after evening than after morning injection (Figure 7C). This indicates that homing and engraftment of leukemic cells are strongly time-of-day dependent. Because rhythms in the host immune response might influence tumor burden in a time-of-day-dependent manner, we additionally employed a xenogeneic model using immune-deficient nonobese diabetic (NOD) scid Il2rg-/- (NSG) mice, which have no functional adaptive immune system and lack functional NK cells. We injected 5 × 10^6 NALM-6 cells, a human B-ALL cell line, either in the morning or in the evening. After 1 week, numbers of circulating human blasts at midday were significantly higher when cells had been injected at night (Figure 7D), confirming that time-of-day-dependent recruitment and engraftment are highly relevant for tumor burden.

[Figure 5 legend residue: localization of transferred cells in the bone marrow (B), lymph node (D), and spleen (F), normalized to control localization (n = 4-8 mice; one-way ANOVA followed by Dunnett comparison to the control group); (I) images and quantification of localization of donor cells in bone marrow after CXCR4 blockade, with boxes indicating exemplary cells whose localization within tissues is additionally shown in the z direction (n = 3 mice; unpaired Student's t test; scale bars: 10 µm); (J) overview of functional blocking effects on leukocyte recruitment to organs, targeting the indicated molecules (n = 4-8 mice; one-way ANOVA followed by Dunnett comparison to the control group). *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001; all data are mean ± SEM. See also Figure S5.]

Rhythms in Human Leukocyte Migration

We finally investigated the presence of oscillations in leukocyte trafficking in humans. Using flow cytometry of human blood harvested at five time points of the day, we found that the total WBC count was oscillatory but had an inverted pattern compared with the one observed in mice, namely higher numbers in the evening (7 p.m.) and a trough in the morning (Figures S7B and S7C), in line with previous observations (Born et al., 1997). Within leukocyte subsets, we observed the strongest oscillations in B cells (Figure 7E) as well as CD4 and CD8 T cells, and other populations showed a similar trend (Figure S7C).
[Figure 6 legend residue: (B) adoptive transfer of donor cells to the liver and lung of control recipients or recipients with Bmal1-deficient endothelial cells 1 hr after transfer at ZT1 and ZT13 (n = 5 mice; unpaired Student's t test); (C) expression of endothelial cell ICAM-1 and VCAM-1 in liver and lung of control mice and mice with Bmal1-deficient endothelial cells at ZT1 and ZT13 (n = 4-7 mice; one-way ANOVA); (D) adoptive transfer of control or Bmal1-deficient B cells to the spleen and lymph node of wild-type recipients 1 hr after transfer at ZT1 and ZT13 (n = 6 mice; unpaired Student's t test); (E) qPCR analysis of Nr1d1 mRNA expression in isolated control and Bmal1-deficient B cells (n = 3 mice; one-way ANOVA); (F) CD11a and CD49d expression on control and Bmal1-deficient B cells in blood at ZT13 (n = 8 mice; unpaired Student's t test); (G) adoptive transfer of control or Bmal1-deficient neutrophils to the spleen of wild-type recipients 1 hr after transfer at ZT1 and ZT13 (n = 6 mice; unpaired Student's t test); (H) qPCR analyses of Nr1d1 mRNA expression in isolated control and Bmal1-deficient neutrophils (n = 3 mice; one-way ANOVA); (I) PSGL-1 expression on control and Bmal1-deficient neutrophils in blood at ZT13 (n = 4 or 5 mice; unpaired Student's t test). *p < 0.05, **p < 0.01, ***p < 0.001; all data are mean ± SEM; ns, not significant. See also Figure S6.]

Similar to murine cell numbers, neutrophil numbers peaked during the day (Figure S7C). With the exception of neutrophils, higher amounts of subsets in blood were inversely correlated with CXCR4 amounts (Figure 7F and Figure S7D), indicating that molecules we found to be of key importance in mice were most likely also responsible for driving rhythmic leukocyte migration processes in humans. We therefore assessed the rhythmic migratory capacity of human primary B cells (Bradfield et al., 2007) because these showed the strongest oscillations among subsets. Interestingly, blood B cells harvested in the morning exhibited a significantly higher rate of transmigration across human umbilical vein endothelial cells (HUVECs) than cells harvested from the same donors at night (Figure 7G). Strikingly, this process could be blocked efficiently in the morning by the CXCR4 antagonist AMD3100 or an antibody directed against LFA-1, whereas no significant effect was observed at night (Figures 7H-7K). Thus, these data demonstrate that human leukocytes have a rhythmic homing capacity that peaks at inverse times compared with that in mice but uses analogous molecules in the process. Altogether, we describe here a circadian signature that guides rhythmic leukocyte homing in mice and humans in steady-state, inflammation, and disease conditions.

DISCUSSION

Here, we have shown a broad and rhythmic program that governs the migration patterns of leukocyte subsets throughout the body over the course of the day. We have determined an organ- and leukocyte-subset-specific functional rhythmic signature of pro-migratory factors on endothelial cells and leukocytes. Rhythmicity in both the endothelium and the leukocyte contributes to this process given that a genetically induced lack of a functional clock in either ablates time-of-day differences. We have thus identified an extensive, time-of-day-dependent trafficking zip code that guides the migration of leukocytes to organs. The process of leukocyte migration to tissues has long been studied, and multiple molecules have been implicated (Ley et al., 2007; Muller, 2016; Vestweber, 2015; Wagner and Frenette, 2008). Yet, no broad systematic approach has been undertaken for investigating the effects of multiple molecules, leukocyte subsets, and organs, particularly under non-inflammatory, steady-state conditions and with respect to the time of day. We incorporated the element of time to identify potential phases of the day when leukocyte migration to tissues and its blockade would show maximal effects.
We determined this time to be the evening, or more precisely 1 hr after lights off, when mice are at the beginning of the behavioral activity phase. The broad expression profile of VCAM-1 across all organs and its functional implications for the migration of many leukocyte subsets in steady-state conditions were unexpected given that previous studies had generally associated the molecule with inflammatory scenarios (Schnoor et al., 2015). Indeed, our data indicate that during the day, when most other studies were probably performed, VCAM-1 is hardly expressed and plays no functional role. However, expression of VCAM-1 increases over the day and exhibits a function in the evening. Thus, our approach allowed us to identify roles for molecules that had previously not been implicated in the migration of specific leukocyte subsets to organs. For each organ and leukocyte subset, we detected a very distinct molecular homing signature. Blood leukocyte counts and bone marrow recruitment were strongly governed by CXCR4, which affected the migration of all investigated leukocyte subsets, with the exception of inflammatory monocytes. In other organs, the dependency on CXCR4 was reduced and much more subset specific. In addition to the known effect of VCAM-1 and VLA-4 in bone marrow homing, targeting ICAM-1 exhibited a broad inhibitory effect on many investigated subsets, with the exception of CD4 T cells and IMs. In addition, we also detected a role for L-selectin in neutrophil homing to this tissue. To our knowledge, ICAM-1 and L-selectin have previously not been associated with bone marrow homing given that the classical homing receptors on the endothelium consist of VCAM-1, E-selectin, and P-selectin (Mazo et al., 1998). In the lymph node, an almost complete lack of homing was observed for all leukocyte subsets when L-selectin was blocked, in agreement with previous reports (Arbonés et al., 1994; Gallatin et al., 1983). L-selectin was also the dominant molecule regulating leukocyte migration to the spleen, particularly for CD8 T cells, which is an unexpected finding given that previous reports have not implicated a role for this molecule in this organ (Nolte et al., 2002). Also, in the lymph node we found a significant number of neutrophils and inflammatory monocytes, subsets that have generally not been investigated in this tissue under steady-state conditions (Gorlino et al., 2014; Hampton and Chtanova, 2016). Trafficking of neutrophils to the lymph node was dependent on L-selectin, CD11a, CXCR4, ICAM-1, and CD49d, whereas IMs relied mostly on L-selectin. The presence of small but detectable populations in this tissue expands the known trafficking routes of these subsets, particularly of the short-lived neutrophils. The functional relevance of their presence in steady-state conditions in lymph nodes for immune functions remains to be elucidated. Although to our knowledge details on molecules mediating adhesion in the liver and lung under steady-state conditions are currently lacking for neutrophils and monocytes (Doyle et al., 1997; Lee and Kubes, 2008; Looney and Bhattacharya, 2014; Moreland et al., 2002; Rossaint and Zarbock, 2013), we found a small role for VCAM-1 in neutrophil recruitment in the liver. This finding is of physiological relevance because it is the likely explanation for the observed higher numbers of transferred neutrophils in blood when VCAM-1 is blocked.
VCAM-1, ICAM-1, VLA-4, and CXCR4 strongly regulated homing of monocytes to the liver, a finding very similar to that in the lung, where additional effects on LFA-1 were observed.

[Figure 7 legend residue, panels J and K: example of the TEM capacity of human B cells from one patient at 11 a.m. and 7 p.m. after AMD3100 (J) or anti-LFA-1 treatment (K) plotted over time (n = 4 assays; two-way ANOVA with Tukey post-test); *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001; #, ##, ###, #### indicate significance levels analogous to those of the LPS groups; all data are mean ± SEM; ns, not significant. See also Figure S7.]

Although the functional importance of VLA-4, VCAM-1, and ICAM-1 has been shown for CD8 T cells in the liver before (Bertolino et al., 2005; John and Crispe, 2004), we have now extended these observations to inflammatory monocytes. An interesting effect that we observed was that targeting VLA-4 increased numbers of B cells in this tissue. Also, blockade of L-selectin increased neutrophil adhesion. This indicates that in a scenario of leukocytosis induced by targeting VLA-4 (most likely due to effects in the bone marrow), B cells accumulate in the liver in an unspecific manner. In contrast, whereas blocking VCAM-1 showed a similar reduction in B cell homing to the bone marrow, more B cells accumulated in the spleen in this scenario. This demonstrates the distribution dynamics and the highly tissue- and subset-specific nature of the leukocyte homing process. The high number of organs, leukocyte subsets, and molecules investigated prevented us from performing functional migration analyses for tissues where little homing occurred. In addition, combining the effects of multiple antibodies or antagonists to assess overlapping functions was outside the scope of this study. Our initial screening procedure in blood allowed us to detect the more dramatic, organism-wide effects, whereas smaller, tissue-specific effects might not have been detectable in some cases. We additionally focused here on events occurring at the blood-tissue interface as the gate-keeping mechanism for leukocyte infiltration into tissues and thus did not investigate broad chemokine profiles of organs (with the exception of Cxcl12), something that would be strongly dependent on other tissue-resident cells, such as fibroblasts (Parsonage et al., 2005). Another important factor that we did not investigate is the heterogeneity of leukocyte subpopulations in blood. Neutrophils are probably the most heterogeneous population with respect to their age as a result of their relatively short lifespan compared with that of other subsets. Thus, most neutrophils present in blood in the morning have probably just been mobilized from the bone marrow and thus represent young cells, whereas in the evening this subset has already aged significantly (Casanova-Acebes et al., 2013; Zhang et al., 2015). In contrast, most lymphocytes are a mixture of cells released from the thymus or bone marrow and cells that have reached the blood from the lymph and have thus spent a significant amount of time in tissues beforehand. Our markers did not allow for further specification within individual leukocyte subsets; therefore, which subsets are more prominently affected by our pharmacological interventions could not be addressed. At least for T lymphocytes, however, we have previously observed that naive, effector memory, and central memory cells behave very similarly with respect to their oscillations in the blood (Druzd et al., 2017).
These questions should be addressed in follow-up studies focusing on specific tissues and leukocyte subsets. We found that the time of day of antibody and antagonist administration had a great impact on their efficacy in steady-state conditions and after inflammatory challenge. In fact, for VCAM-1 and L-selectin, as well as for some subsets for CD49d, CXCR4, and ICAM-1, we hardly observed any effect in the morning. This provides a potential window for therapy with respect to targeting leukocyte trafficking at specific times. Current clinical therapeutic approaches targeting leukocyte surface molecules include natalizumab (Tysabri), a neutralizing antibody that acts against the α4 subunit of VLA-4 and is used in the treatment of multiple sclerosis (Krumbholz et al., 2012). This drug is currently given at one time point of the day, but future studies should investigate the effect of timing drug administration in this scenario. Relapses in patients with multiple sclerosis are negatively correlated with the abundance of the night-signaling hormone melatonin in the serum (Farez et al., 2015). This is linked to a seasonal exacerbation of symptoms in the spring (Farez et al., 2015). Multiple sclerosis could additionally have a diurnal component given that the experimental autoimmune encephalomyelitis (EAE) animal model of multiple sclerosis shows strong time-of-day dependency in disease severity (Druzd et al., 2017; Sutton et al., 2017). An important finding of this study is that the homing and engraftment capacities of leukemic cell lines, both murine and human, are highly time-of-day dependent, given that administration of cells in the evening dramatically increased tumor burden a week later. This was not dependent on potential rhythmic immunogenicity of the graft because similar effects were observed in immune-deficient and immune-competent animals. Thus, time of day is an important factor for leukemia burden in mouse and human models of this disease. These observations are of particular relevance given that we showed that rhythms in leukocyte homing extend to humans, where an inverse rhythmicity in blood leukocyte counts, particularly for lymphocyte populations, was observed. The variability between subjects was surprisingly small, given that the genetic differences between individuals are vastly greater than those between the inbred mice used and that feeding and lighting schedules had not been synchronized. The strongly time-of-day-dependent transmigration capacity of human B cells could be blocked by targeting CXCR4, which we demonstrate to be rhythmically expressed on this subset, as well as LFA-1. This indicates the benefit of a chronotherapeutic approach for targeting either protein in the clinic. Indeed, targeting the CXCR4-CXCL12 axis with G-CSF has already been demonstrated to mobilize hematopoietic stem and progenitor cells more strongly in the afternoon (Lucas et al., 2008). The inverse rhythmicity of human oscillations has thus far been linked to the altered behavioral rhythms of mice and humans, such that counts are higher during the behavioral rest phase in both nocturnal (mice) and diurnal (humans) species. Recent data, however, demonstrate that rhythmicity in blood leukocyte counts can be decoupled from behavior and relies on reactive oxygen species in a manner independent of the microenvironment (Zhao et al., 2017).
Our data demonstrate that both endothelial cells and leukocytes themselves co-govern rhythmic leukocyte migration but that lack of a clock in either is sufficient to disturb it. The observation that a high number of pro-migratory factors display diurnal oscillations points to a role of the circadian clock in their regulation. Many of these factors exhibit binding sites for the transcription factors BMAL1 and CLOCK in their promoter regions, which warrants further systematic investigations into the direct clock control of these molecules. Interplay between cell-intrinsic and -extrinsic signals appears to regulate total blood cellularity, most likely by modulating both the mobilization of cells into the circulation and their emigration, the latter of which was the subject of the present study.

CONTACT FOR REAGENT AND RESOURCE SHARING

Reagents used in this study are available from the commercial sources listed. Further information and requests for other materials should be directed to and will be fulfilled by the Lead Contact, Christoph Scheiermann (christoph.scheiermann@med.unimuenchen.de or christoph.scheiermann@unige.ch).

Mice

Male C57BL/6N mice aged 7-8 weeks were purchased from Charles River Laboratories (Sulzfeld, Germany). Bmal1flox/flox, Cd19cre, and Lyz2cre transgenic mice were purchased from Jackson Laboratories and crossbred to target B cells and myeloid cells, respectively. Cdh5-creERT2 mice were obtained as a gift from Ralf Adams (Max Planck Institute for Molecular Biomedicine, Münster) via Eloi Montanez (LMU, Munich) and were given intraperitoneal tamoxifen injections for five consecutive days to induce Cre recombinase expression. Mice were then used for experiments 2-3 weeks later. NSG and C57BL/6J CD45.1 mice were obtained from Jackson Laboratory and Charles River, respectively, and bred in a pathogen-free environment. Experimental mice were male and used at 6-12 weeks of age. Mice were maintained in a 12 h light:12 h dark cycle with ad libitum access to food and water. For some experiments, mice were put in cabinets to change the light phase in order to perform experiments with animals on different light schedules at the same time. All animal procedures were in accordance with the German Law of Animal Welfare or the French laws and protocols and were approved by the Regierung von Oberbayern or French animal ethics committees, respectively.

Humans

Eight healthy volunteers (four males and four females) aged 25-40 years donated blood for the human blood count experiments. Three healthy volunteers aged 26-48 years donated blood for the human B cell transmigration assay. Experiments were approved by the ethics committees of the LMU Munich and the University of Geneva. All volunteers gave written consent to participate in the study.

METHOD DETAILS

Flow cytometry

Mice were anesthetized by inhalation of isoflurane. Blood was collected by bleeding into EDTA-coated capillary tubes. Leukocyte counts were obtained using an IDEXX ProCyte DX cell counter. Erythrocytes were lysed with red blood cell (RBC) lysis buffer (0.8% NH4Cl) two times, for 5 min each. Abdominal fluid was collected with a syringe by flushing with 5 mL PBS. Spleens were harvested from animals and processed through a cell strainer (40 µm, Thermo Fisher Scientific). Bone marrow cells were harvested from either one femur only or from two femurs and two tibias by flushing the bone gently with cold PBS.
Lung and liver were first cut into small pieces in DPBS supplemented with calcium and magnesium (Sigma) and then incubated for 1 h in digestion buffer with collagenase IV (1 mg/ml, C5138, Sigma) and DNase I (0.2 mg/ml, Applichem) at 37 °C with gentle agitation. After digestion, cells were filtered through a cell strainer (40 µm, Thermo Fisher Scientific) and resuspended in 5 mL RBC lysis buffer for 5 min. After centrifugation, the supernatant was removed. Leukocytes were resuspended in PBS supplemented with 2% fetal bovine serum (GIBCO) and 2 mM EDTA, then stained with fluorescence-conjugated antibodies for 30 min on ice. After washing with PBS, cells were resuspended in DAPI (Biolegend) buffer and analyzed by flow cytometry using a Gallios Flow Cytometer (Beckman Coulter). Human blood was collected into EDTA-coated tubes (SARSTEDT, Germany) at 8 a.m., 11 a.m., 3 p.m., 7 p.m., and 11 p.m. Human blood was prepared as described above for mouse cells and stained with antibodies at room temperature for 30 min. After washing with PBS, cells were resuspended in DAPI (Biolegend) buffer and analyzed by flow cytometry using a Gallios Flow Cytometer (Beckman Coulter).

Functional blocking experiments and induction of inflammation

To investigate the role of specific pro-migratory molecules in the rhythmic homing process, antibody or antagonist experiments were performed in combination with adoptive transfer assays. Blocking antibodies or chemokine antagonists were diluted to working concentrations (see table below) with PBS or 5% DMSO with 1% Tween80 (Sigma) and injected i.v. or i.p. into recipient mice 2 h before injection of donor cells. Cells were then processed as described above. To induce systemic inflammatory conditions, LPS (L4516, Sigma) was injected i.p. (10 mg/kg) into recipient mice at the same time as the injection of blocking antibodies. For CXCR4 ex vivo blocking, donor cells were pre-incubated with AMD3100 (300 µg/ml) for 1 hour at 37 °C in RPMI 1640 (Sigma) plus 10% FCS (Sigma) and stained with CFSE for 20 min.

[Table: working concentrations of blocking antibodies and functional blockers.]

Adoptive transfer assays

To investigate the emigration of leukocytes from blood, adoptive transfer experiments were performed, and donor cells remaining in blood were measured as a negative indicator of how many cells had migrated into tissues. First, donor cells were obtained from the bone marrow and spleen of donor mice. Single-cell suspensions were obtained by flushing bone marrow with cold PBS and gently mashing the spleen through a cell strainer (40 µm, Thermo Fisher Scientific). Cells were lysed with RBC lysis buffer for 5 min, resuspended in cell incubation buffer (PBS, 0.2% BSA, 2 mM EDTA), and counted on a cell counter (ProCyte DX cell counter). 10^7 bone marrow cells and 10^7 spleen cells were mixed as donor cells for one recipient mouse. Donor cells were labeled with 1.5 µM CFSE (Thermo Fisher Scientific) or 0.1 µM CellTracker Deep Red dye (Thermo Fisher Scientific) for 20 min at 37 °C. In adoptive transfer experiments using donors of different phases, donor cells from two time zones (10^7 mixed donor cells per time zone) were labeled differently and injected into one recipient. After one hour, blood and organs were harvested from recipient mice and processed as described above for flow cytometry analyses.
In some experiments, injection of an anti-CD45 antibody (clone I3/2.3) was followed by perfusion in order to distinguish between cells adherent to the vascular endothelium and cells located in the extravascular space. Injection of donor cells was performed as described before. After 56 min, 10 µl anti-CD45 (clone I3/2.3) in 200 µl PBS was injected intravenously into recipient mice. 4 min later, mice were sacrificed using an overdose of isoflurane and perfused with PBS via first the left ventricle and then the right ventricle in order to perfuse the whole body and the lung, respectively.

Cell isolation and Q-PCR

Spleen B cells were purified from Cd19cre Bmal1flox/flox and littermate control mice using the EasySep mouse B cell isolation kit (STEMCELL Technologies) according to the manufacturer's protocol, and purity (>92%) was assessed by flow cytometry. Bone marrow monocytes and neutrophils were purified from Lyz2cre Bmal1flox/flox and littermate control mice using a monocyte isolation kit and a neutrophil enrichment kit (STEMCELL Technologies), respectively. Purity was >93% for monocytes and >82% for neutrophils. RNA was extracted from isolated cells using the RNeasy Plus mini Kit (QIAGEN, Hilden, Germany) following the manufacturer's instructions. Total organ RNA extraction was performed with TRIzol (QIAGEN). Tissues were homogenized with a homogenizer (SpeedMill PLUS, Analytik Jena). RNA clean-up was performed using the RNeasy Plus mini Kit (QIAGEN, Hilden, Germany) following the manufacturer's instructions. RNA samples were analyzed using a NanoDrop2000 (Thermo Scientific) to determine RNA concentration and quality. RNA was stored at −80 °C. For reverse transcription, 150-200 ng RNA for isolated cells and 2 µg RNA for organs were used with the High Capacity cDNA Reverse Transcription Kit (Applied Biosystems). cDNA samples were stored at −20 °C prior to use in quantitative PCR (Q-PCR). Q-PCR was performed with a StepOnePlus Real-Time PCR System (Applied Biosystems) in 96-well plates with SYBR green-compatible primers at 60 °C. Duplicates or triplicates were run for each Q-PCR sample. The total reaction volume was 10 µl, containing 5 µl SYBR green, 1 µl primer mix (5 µM), 2 µl H2O, and 1.5 ng cDNA for cells or 20 ng cDNA for organs. Gene expression levels were normalized to the housekeeping gene Gapdh.
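The text above states only that expression was normalized to Gapdh, not the exact quantification formula; a common choice is the comparative Ct (2^-ddCt, Livak) method. The Python sketch below illustrates that approach under this assumption; the Ct arrays and the reference condition are hypothetical placeholders, not values from the study.

```python
import numpy as np

def relative_expression(ct_target, ct_gapdh, ct_target_ref, ct_gapdh_ref):
    """Comparative Ct (2^-ddCt) quantification of a target gene.

    ct_target / ct_gapdh: Ct values of the gene of interest and of the
    housekeeping gene Gapdh in the samples of interest (e.g., ZT13).
    ct_target_ref / ct_gapdh_ref: the same two genes in a reference
    condition (e.g., ZT1 controls).
    """
    d_ct = np.asarray(ct_target, float) - np.asarray(ct_gapdh, float)   # normalize to Gapdh
    d_ct_ref = np.mean(np.asarray(ct_target_ref, float)
                       - np.asarray(ct_gapdh_ref, float))
    return 2.0 ** -(d_ct - d_ct_ref)                                    # fold change vs. reference

# Hypothetical triplicate Ct values for a clock gene such as Nr1d1:
fold = relative_expression([22.1, 22.3, 22.0], [17.9, 18.0, 18.1],
                           [24.5, 24.6, 24.4], [18.0, 18.1, 17.9])
print(fold)  # per-replicate fold change relative to the reference condition
```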
Immunofluorescence staining

To measure adhesion molecule expression levels on endothelial cells, organs were placed in OCT (TissueTec), frozen at −80 °C, and sectioned at a thickness of 10 µm on a cryostat (Leica). Sections were fixed with cold methanol for 10 min at room temperature and incubated in PBS containing Triton X-100 (0.5%) and normal goat serum (20%). Sections were stained with antibodies and incubated at 4 °C overnight. Images were obtained using a Zeiss Axio Examiner.D1 microscope equipped with 405, 488, 563, and 655 nm LED excitation light sources. All quantifications were performed using mask analyses with the Zeiss software based on PECAM-1 expression; expression in the other fluorescent channels was then quantified within this mask. Areas smaller than 10 µm² were excluded from analysis to minimize non-specific signals. Protein expression levels are presented as mean fluorescence intensity (MFI) within the mask area after subtraction of the respective isotype controls. Expression below the isotype threshold for all assessed time points or for the majority of time points was termed no or low expression, respectively.

For visualizing the precise localization of injected donor cells in adoptive transfer assays, mice were injected intravenously with donor cells and additionally with 40 µl of an anti-PECAM-1 antibody (clone 390) in 160 µl PBS prior to organ harvest. 5 min later, mice were anesthetized by inhalation of isoflurane and perfused with PBS. Organs were placed in OCT and frozen at −80 °C. Images were obtained using a Zeiss Axio Examiner.Z1 confocal spinning disk microscope equipped with 405, 488, 561, and 640 nm laser sources, using both tissue sections and whole mounts of organs. Quantification was performed using ImageJ (version 1.51n) and Slidebook (version 6, Intelligent Imaging Innovations, 3i).

Transmigration assay of human B cells

Non-synchronized HUVECs were cultured in chamber slides for 2-3 days and then treated for 24 h using a chronic activation protocol (Bradfield et al., 2007). The first stage of activation consisted of overnight TNF-α (1000 U/ml) and IFN-γ (500 U/ml) stimulation. B cells (purity 80%-95%) were isolated from EDTA-treated blood collected from healthy donors using a negative selection kit (Miltenyi Biotec). The flow assay set-up consisted of a heated microscope chamber (37 °C) and a calibrated pump with which flow was generated over attached HUVEC monolayers by perfusing wash buffer or a B cell suspension. The flow rate was set to represent small venules/capillaries (0.05 Pa). Assays were initiated with a second stage of HUVEC activation, in which CXCL12 (1 µM) was perfused over the monolayer for 15 min (step 1). Wash buffer was then pumped over the HUVECs for 10 min to remove any unbound CXCL12 before the B cell suspension was perfused over the HUVECs for 5 min (step 2), followed by 90 min of wash buffer (step 3). Throughout steps 2-3, images of the captured B cells were taken using phase-contrast microscopy and a high-resolution camera. Individual images were recorded every 30 s and compiled into short movie sequences, allowing analysis of individual B cells over large areas. B cells adherent to the surface of the HUVECs showed a phase-white appearance, whereas those that had transmigrated showed a phase-black appearance. Adhesion events were recorded as the total number of cells per unit field (mm²). Transmigration events are presented as a percentage of total B cells captured from flow per unit field. All experiments were carried out using quadruplicate fields and are presented as mean ± SEM.

QUANTIFICATION AND STATISTICAL ANALYSIS

Data were analyzed using Prism 7 (GraphPad) and are presented as mean ± standard error of the mean (SEM). A p value < 0.05 was considered statistically significant. Comparisons between two groups were performed using unpaired Student's t test. One-way ANOVA followed by Tukey's multiple comparison test was used for multiple group comparisons. One-way ANOVA followed by Dunnett's test was used for comparisons between control and treatment groups. Human WBC counts were analyzed by repeated-measures one-way ANOVA. Mann-Whitney non-parametric analyses were performed for non-Gaussian distribution patterns in leukemia tumor burden.
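As a minimal sketch of the central comparison described above (one-way ANOVA followed by Dunnett's comparison of treatment groups against a control), the Python code below uses SciPy; scipy.stats.dunnett requires SciPy 1.11 or newer. The group data are hypothetical placeholders, not values from the study, and the analysis in the paper itself was performed in Prism.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical extravasated-cell counts per mouse (control vs. two antibody treatments)
control = rng.normal(100, 15, size=6)
anti_a = rng.normal(60, 15, size=6)    # e.g., an anti-CXCR4 group
anti_b = rng.normal(85, 15, size=6)    # e.g., an anti-ICAM-1 group

# One-way ANOVA across all groups
f_stat, p_anova = stats.f_oneway(control, anti_a, anti_b)

# Dunnett's test: each treatment compared against the control group
dunnett_res = stats.dunnett(anti_a, anti_b, control=control)

print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")
print("Dunnett p-values vs. control:", dunnett_res.pvalue)
```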
[Supplementary figure legend residue: reciprocal 'negative' homing assays with ZT5 and ZT13 donor cells labeled differently and co-injected into ZT5 and ZT13 recipients (n = 21-24 mice; one-way ANOVA followed by Tukey's multiple comparison test); donor cells remaining in blood after anti-VCAM-1 or anti-ICAM-1 treatment compared with ZT1 and ZT13 isotype-treated control groups (n = 7-11 mice; unpaired Student's t test); endogenous blood leukocyte numbers after treatment with chemokine receptor antagonists (n = 5-12 mice; one-way ANOVA followed by Dunnett comparison to control groups); (E) adoptive transfer of donor cells treated ex vivo prior to transfer to recipients with antagonists against CXCR4 at ZT1 and ZT13, normalized to ZT1 levels (n = 3 mice; unpaired Student's t test); *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001; ns, not significant; (F) Cxcl12 mRNA levels in bone marrow and lung, ZT1]
Correlated Energy Exchange in Drifting Sea Ice

The ice floe speed variations were monitored at the research camp North Pole 35, established on the Arctic ice pack in 2008. A three-month time series of measured speed values was used for determining changes in the kinetic energy of the drifting ice floe. The constructed energy distributions were analyzed by methods of nonextensive statistical mechanics based on the Tsallis statistics for open nonequilibrium systems, such as tectonic formations and drifting sea ice. Nonextensivity means the nonadditivity of externally induced energy changes in multicomponent systems due to the dynamic interrelation of components having no structural links. The Tsallis formalism gives one an opportunity to assess the correlation between ice floe motions through a specific parameter, the so-called parameter of nonextensivity. This formal assessment of the actual state of the drifting pack allows one to forecast some important trends in sea ice behavior, because the level of correlated dynamics determines the conditions for extended mechanical perturbations in the ice pack. In this work, we revealed temporal fluctuations of the parameter of nonextensivity and observed its maximum value before a large-scale fragmentation (faulting) of consolidated sea ice. The correlation was not detected in fragmented sea ice, where long-range interactions are weakened.

Introduction

The Arctic sea ice cover (ASIC) is an open thermodynamic system that exhibits well-pronounced scaling properties [1-5]. The ASIC dynamics is determined by the sea ice drift, which is caused predominantly by irregular wind forcing [6-8]. The phenomenology of the sea ice drift and its impact on climate change was reviewed by Morison et al. [9]. Overland et al. [10,11] regarded the drifting ice as a hierarchic system, the dynamic properties of which are self-similar at all scale levels. Rothrock and Thorndike [12] were the first to report the power-law floe size distribution and to formulate the concept of scale invariance of fragmented sea ice. More recent studies showed that both the energy [13] and temporal [4,5] parameters of the drift are characterized by power-law distributions that are well known in geodynamics (the Gutenberg-Richter law [14]), as well as in other nonequilibrium processes in nature, including human activity (see [15]). However, the power-law relations for frequency-versus-size distributions of energy-change-related events in open systems are essentially empirical relations, which cannot be derived in the framework of classical thermodynamics, valid for closed equilibrium systems. The Boltzmann-Gibbs distribution implies the independence of individual "events" (local perturbations), since the effect of each of them on neighboring structures decays exponentially with distance. This fast decay leads to the additivity of the contributions of individual events to the total process (i.e., dynamic extensivity). Long-range interactions between components in externally driven systems break their dynamic isolation and cause deviations from a simple summation of transferred energy. In other words, in the case of power-law decay, the correlation radius of each local perturbation exceeds the size of a single perturbed site, thus inducing a cooperative response of affected sites to the external forcing.
The discrepancy between the results expected from the Boltzmann-Gibbs statistics and the actual phenomenology was resolved by Tsallis [16,17], who developed nonextensive statistical mechanics (NESM) as a generalization of classical statistics for the case of a nonexponential energy distribution in a nonequilibrium system characterized by long-range interrelation between energy release/exchange events. In recent years, nonextensive statistics has been successfully applied to analyzing seismic activity in relation to the problem of the predictability of earthquakes [18-25]. This work is aimed at extending the applicability of NESM to another dynamic geophysical object, the ASIC, which is comparable with geostructures both in dimensions and in the character of its mechanical processes. We shall demonstrate that the sea ice drift dynamics can be described adequately both by an empirical relation identical to the Gutenberg-Richter law [14] and by analytical expressions deduced from the basic principles of the Tsallis statistics. We used in our considerations the concepts and models developed in geophysics for describing dynamic processes in geostructures. We believe that a common approach to studying large-scale phenomena in the solid Earth and the ASIC is justified by the morphological closeness of two-dimensional tectonic plates and ice floes, as well as by their very similar mechanical behavior (cracking, faulting, shearing, and stick-slip motions). A similarity between dynamic processes in the solid Earth and the Arctic ice pack was discussed in review [26]. The monitoring of the motions of an "individual" ice floe was performed at the ice research camp North Pole 35, established on the Arctic ice pack in 2008. The ice floe displacements were detected using the GPS technique at regular 2 min intervals. The obtained records were used for determining the variations of the kinetic energy of the ice floe during its spontaneous drift at approximately 86° N, north of Novaya Zemlya, in the period from January to March 2008.
2. Energy Distribution

2.1. NESM. "Scaling" in the given context means the validity of the scaling equation that establishes the following dependence of event frequency on event energy:

N(λE) = λ^(-b) N(E), (1)

where N is the number of events characterized by the energy E; λ is a constant (scaling factor); b is a constant. The term "event" refers to any local structural or dynamic perturbation that results in a change of energy. For example, any signal from an established seismograph is regarded as evidence of a fracture event accompanied by a certain release of elastic energy. In our case, we shall regard as an "event" every detected change in the kinetic energy of the drifting ice floe. The correlation radius of a so-defined event is determined by the physical distance at which the energy change of the given floe affects the behavior of neighboring pack structures. A large correlation radius implies long-range interactions between components, such as collisions or disengagements, through which a local excitation is transferred to the environment. The only function that verifies (1) is a power law:

N(E > E') ∝ (E')^(-b), (2)

where N(E > E') is the number of events characterized by an energy E exceeding a threshold value E'; b is a constant. Equation (2) is an alternative designation of the celebrated Gutenberg-Richter law, log N(m > m') ∝ -bm', where N is the number of earthquakes having magnitudes m greater than m'. The power-law function means a much slower decay of an event's effect on adjacent sites than that which takes place in the case of exponential decay. This disturbs the entropic additivity of the process, which in the simplest case of two independent equilibrium subsystems A and B can be expressed as

S(A + B) = S(A) + S(B), (3)

where S is the entropy. The main idea forwarded by Tsallis was to introduce into the additive expression (3) a member that would take into account interactions between the subsystems through a "parameter of nonextensivity" q:

S_q(A + B) = S_q(A) + S_q(B) + ((1 - q)/k) S_q(A) S_q(B). (4)

Here, k is the Boltzmann constant; the right-hand cross-term characterizes the interaction between the subsystems. Accordingly, the classical logarithmic definition for the probability of states in a multisite system,

S = -k Σ_i p_i ln p_i, (5)

(here p_i are the probabilities of the w virtual configurations in the system) transforms into the power-law expression

S_q = k (1 - Σ_i p_i^q)/(q - 1), (6)

where S_q is the "nonextensive" entropy. Passing over the details of Tsallis' conjecture (see his book [17] for details), we note that the Boltzmann-Gibbs statistics corresponds to the limit q → 1 ((6) transforms into (5)); for q < 1 the formalism imposes a high-energy cutoff, that is, a limited statistics in an equilibrium system (see an example below); finally, the value q > 1 signals the presence of long-range interactions in a nonequilibrium dynamic system; this is the case of power-law dynamics, which is incompatible with entropic additivity. Since the parameter q is a measure of nonextensivity, its value indirectly characterizes the correlation radius of interactions in a nonequilibrium system. The nonextensive paradigm was applied in seismological [21-23], hydrological [27], climatological [28], and atmospheric [29] studies. Starting from the Tsallis formalism, Sotolongo-Costa and Posadas [18] obtained a formula for the magnitude distribution of earthquakes, which reproduced actual distributions in a wider range of magnitudes than the empirical Gutenberg-Richter relation [14].
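The limiting behavior of (5) and (6) can be checked numerically. The Python sketch below, written under the assumption of the standard Tsallis form reconstructed above (with k set to 1), evaluates S_q for a random probability vector and shows that it approaches the Boltzmann-Gibbs entropy as q approaches 1.

```python
import numpy as np

def tsallis_entropy(p, q, k=1.0):
    """Nonextensive entropy S_q = k (1 - sum p_i^q) / (q - 1); see (6)."""
    p = np.asarray(p, dtype=float)
    if np.isclose(q, 1.0):
        return -k * np.sum(p * np.log(p))   # Boltzmann-Gibbs limit, see (5)
    return k * (1.0 - np.sum(p ** q)) / (q - 1.0)

p = np.random.default_rng(1).random(10)
p /= p.sum()                                # normalize to a probability vector

for q in (0.5, 0.9, 0.999, 1.001, 1.5):
    print(q, tsallis_entropy(p, q))
# the values at q = 0.999 and q = 1.001 bracket the Boltzmann-Gibbs entropy:
print("BG:", tsallis_entropy(p, 1.0))
```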
The formula was modified by Silva et al. [19], who replaced the linear energy density suggested in [18] by a more realistic volumetric relation and obtained the following expression for the number of earthquakes with magnitudes m larger than a value m', normalized to the total number of events N:

log(N(m > m')/N) = ((2 - q)/(1 - q)) log[1 - ((1 - q)/(2 - q)) (10^(2m')/a^(2/3))], (7)

where a is the volumetric energy density. In terms of the energy released (m ≈ 1/3 log E), the expression for the energy distribution in the nonextensive dynamic process takes the following form:

N(E > E')/N = [1 - ((1 - q)/(2 - q)) (E'/a)^(2/3)]^((2 - q)/(1 - q)). (8)

Equations (7) and (8) were obtained on the assumption of proportionality between the probability of a given release of elastic energy in a fracture event and the size of the formed fragment. In a similar way, the kinetic energy of a drifting ice floe depends linearly on its size. In fact, (7) is a generalized form of (2) [30]. Very recently, Balasis et al. [31] applied (7) to analyzing solar flare and magnetic storm intensities and obtained excellent coincidence between calculated and measured values without making any assumptions on the mechanism that governs the probability of the energy release. The authors argued for the universality of the nonextensive energy distribution in the form (7) for a wide range of nonlinear phenomena characterized by long-range interactions. The relation (8) allows one to determine the parameters q and a by fitting them to the plot of N(E > E') versus E' constructed on the basis of experimental data. Variations of the q-value reflect changes in the thermodynamic state of the system and, correspondingly, in the degree of correlation of events. In this work, we utilized (8) for characterizing the sea ice drift dynamics and for a comparison of the results of the nonextensive analysis with those obtained from the assessment of the scaling parameter b.

2.2. Kinetic Energy. Nonuniform sea ice drift in the Arctic Ocean induces localized and extended ice pack fragmentations, which, in turn, affect the local drift rate through collisions and shearing of mobile ice floes. Therefore, the kinetic energy of individual fragments of the ice pack is substantially interrelated with the fracture process, which affects their motions. The average drift speed of consolidated sea ice is about 0.1-0.2 m/s, while the speed of an individual ice floe varies in the range 0.02 to 0.4 m/s [32]. In this work, the local speed changes were derived from the data on the ice floe displacements measured using a pair of GPS transmitters that were placed on the ice floe in the vicinity of the camp North Pole 35 at a distance of about 180 m. The data were collected using a field PC at sampling intervals of two minutes. A detailed analysis of the accuracy of drift speed determination from the GPS data [4] showed that the standard error in values V measured at 2 min sampling intervals was about 0.009 m/s, while the changes in the drift speed, ΔV, were much larger. Only the values ΔV that exceeded the standard error of speed determination by a factor of two or more were taken into account in the subsequent data processing. Figure 1 shows a series of values of the drift speed V calculated from the readings of one of the GPS transmitters during the period from January to March 2008. This period of observations covered two significant anomalies in the "stationary" ice floe drift, which occurred on 1 to 6 February and on 5 to 8 March. A few 180° turns in the local drift direction were detected in these time intervals (Figure 2). The event of 5-8 March coincided in time with a large-scale ice pack fragmentation that occurred at about 150 km from the research camp (Figure 3).
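A minimal sketch of the speed processing described above is given below in Python, assuming latitude/longitude fixes at 2 min intervals. Positions are converted to distances with a spherical-Earth (haversine) approximation, and speed changes smaller than twice the quoted standard error (0.009 m/s) are discarded; the synthetic track and variable names are illustrative, not the actual camp data.

```python
import numpy as np

R_EARTH = 6371000.0   # mean Earth radius, m
DT = 120.0            # GPS sampling interval, s
SIGMA_V = 0.009       # standard error of 2-min speed values, m/s

def haversine(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between consecutive GPS fixes."""
    phi1, phi2 = np.radians(lat1), np.radians(lat2)
    dphi = phi2 - phi1
    dlmb = np.radians(lon2 - lon1)
    a = np.sin(dphi / 2) ** 2 + np.cos(phi1) * np.cos(phi2) * np.sin(dlmb / 2) ** 2
    return 2 * R_EARTH * np.arcsin(np.sqrt(a))

def drift_speed(lat, lon):
    """Drift speed series from arrays of latitude/longitude fixes."""
    d = haversine(lat[:-1], lon[:-1], lat[1:], lon[1:])
    return d / DT

def significant_speed_changes(v):
    """Keep only speed changes exceeding twice the standard error."""
    dv = np.diff(v)
    return dv[np.abs(dv) >= 2 * SIGMA_V]

# Illustrative synthetic track near 86 N:
t = np.arange(0, 3600, DT)
lat = 86.0 + 1e-6 * np.cumsum(np.random.default_rng(2).normal(1, 0.3, t.size))
lon = 60.0 + 2e-6 * t
v = drift_speed(lat, lon)
dv = significant_speed_changes(v)
```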
The kinetic energy change, ΔE, is directly proportional to the speed change squared, ΔE ∝ (ΔV)² (to simplify formulae, we shall denote ΔE as E hereafter). The distributions of changes in the kinetic energy of the drifting ice floe are depicted in Figure 4 in 15-day intervals covering the period of the GPS monitoring. In the left-hand panels of Figure 4, the experimental data were approximated by log-linear portions that represent the power-law dependences given by (2). One can see that the power-law behavior takes place in all time windows with the exception of the period 16-21 March, when the log-linear portion (if present at all) does not cover even an order of magnitude (Figure 4(f)). In other words, the ensemble of interacting ice floes exhibits scaling properties in the sense of (1) most of the time. The power exponent, b, calculated from the slope of the straight line, drops significantly (from 3.20 to 2.30) in the period 16 to 29 February, preceding the large-scale sea ice fragmentation. In the right-hand panels of Figure 4, the same experimental dependences of N(E > E') versus E' were approximated using (8). The best-fitting procedure was applied to determine the parameters q and a. One can see that the found q-values exceed unity in all time intervals preceding the faulting of 5-8 March, after which the value of this parameter decreases to q = 0.93. A q-value less than unity means the prevalence of noncorrelated events, which one could expect in the fragmented pack. As distinct from the extensive dynamics at q = 1, which is typical for a highly connected system that admits the occurrence of correlated behavior in limited time intervals (when the system deviates spontaneously from the thermodynamic equilibrium state [33]), the state characterized by q < 1 excludes long-range energy exchange because of the disintegration of the system into individual, weakly connected components with rare interactions (subextensivity [17]). Obviously, this is the case for sea ice dynamics after faulting.

Discussion

In this work, the data on the kinetic energy variation in drifting sea ice were analyzed using both the empirical relation (2) and the analytical function N(E > E') (8) based on the nonadditive definition of entropy in (4). Both fittings, that is, those through the b-value and through the q-value, approximate the experimental energy distributions with a certain inaccuracy. In the range of mid- and high-energy kinetics, the approximation through the b-value represents the log-linear portions of the N versus E dependences well. Low-energy portions of the dependences follow neither (2) nor (8), but the q-value approximation reflects a trend toward nonscaling behavior of small events. This is explained by the significant decay of low-energy excitations in dissipative media [34,35].
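To illustrate the two fitting procedures compared above, the following Python sketch estimates b from the slope of the log-log cumulative distribution and fits q and a by least squares to the form (8) as reconstructed above. The energy sample is synthetic, and the threshold grid and fit bounds are assumptions; the study's actual fitting choices are not specified in the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def cumulative_counts(energies):
    """N(E > E') on a logarithmic grid of thresholds."""
    e_thr = np.logspace(np.log10(energies.min()), np.log10(energies.max()), 40)[:-1]
    n = np.array([(energies > e).sum() for e in e_thr])
    return e_thr, n

def log_n_model(e_thr, q, a, n_total):
    """log10 of the cumulative distribution in the nonextensive form (8)."""
    x = 1.0 - (1.0 - q) / (2.0 - q) * (e_thr / a) ** (2.0 / 3.0)
    return np.log10(n_total) + (2.0 - q) / (1.0 - q) * np.log10(x)

# Synthetic heavy-tailed 'energy change' sample (arbitrary units)
rng = np.random.default_rng(3)
E = rng.pareto(2.5, size=2000) + 1.0

e_thr, n = cumulative_counts(E)
mask = n > 0

# Empirical b-value: negative slope of the log-log cumulative distribution
slope, intercept = np.polyfit(np.log10(e_thr[mask]), np.log10(n[mask]), 1)
b = -slope

# Nonextensive fit of q and a via (8), with q constrained above unity
popt, _ = curve_fit(lambda e, q, a: log_n_model(e, q, a, n[0]),
                    e_thr[mask], np.log10(n[mask]),
                    p0=(1.1, 1.0), bounds=([1.0001, 1e-3], [1.999, 1e3]))
q_fit, a_fit = popt
print(f"b = {b:.2f}, q = {q_fit:.2f}, a = {a_fit:.2f}")
```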
The q-value as a measure of nonextensivity [16] has a clear physical sense: it characterizes the deviation of the given dynamic system from the equilibrium state. To assess the relative degree of nonextensivity of the drift dynamics, one can compare the found q-values with those known for the tectonic process. Summarizing the data reported in the literature [18,19,22-25,30], we conclude that the seismicity-related q-values fall in the range 1.5 to 1.8 for different seismic zones. In our case, the q-value reached only 1.12 in the consolidated pack and dropped to 0.93 after a cycle of large-scale sea ice fragmentation. This comparison points to a low level of long-range correlations in drifting ice. The motions of each individual floe affect the dynamics of neighboring floes only insignificantly, as compared with motions occurring in the system of tectonic plates. The fault formation resulted in the disturbance of the pack connectedness and, correspondingly, in noncorrelated drift dynamics (q < 1) in March. We suppose that the decrease of the q-value down to 1.02 on 2-15 February was also caused by a cycle of moderate (or local) pack fragmentation, which was not, however, detected in the available satellite images. The values found for the power exponent b and the parameter q are in good agreement with each other from the viewpoint of their physical meaning. According to the computational spring-block model developed by Olami, Feder, and Christensen (OFC) [34], the b-value is correlated with the energy conservation in the system: the higher the b-value, the lower the conservation. In our case, we deal with a high-b-value process. For reference, the literature data collected by Vallianatos [36] contain the following power-law exponents for various geomechanical processes: earthquakes 0.5 to 0.8; landslides 1.0 to 1.6; rockfalls 0.4 to 0.7. The values of this parameter shown in Figure 4 (b = 2.3-3.2) evidence low energy conservation in drifting sea ice. In low-conservation systems, the energy exchange is restricted to neighboring sites, and the nonextensive process should be characterized by values of the parameter q close to unity. It is worth noting that the variation of the q-value reflects the changes in the b-value with the opposite sign: the higher b, the lower q. In other words, the lower the energy conservation, the weaker the long-range interactions. Thus, we obtained a reasonable agreement between the empirical and analytical approaches in describing the nonextensive properties of the sea ice drift dynamics.

Conclusion

In this work, some models and concepts developed in recent years in tectonophysics were applied for assessing the actual thermodynamic state of drifting sea ice. A common approach to mechanical processes in the solid Earth and the Arctic ice pack is justified by a close similarity in the nature of the dynamics of two-dimensional tectonic plates and ice floes, which is characterized by cracking, shearing, and stick-slip motions. We have shown that the nonextensive analysis based on the Tsallis statistics for open systems allows one to estimate deviations of the ice pack from the equilibrium state, and these deviations signal the enhancement of correlated dynamics with an increased probability of avalanche-like energy release, such as strong icequakes. On the other hand, we detected a transient period of an equilibrium state of the sea ice after the formation of a fault consisting of highly fragmented sea ice. This thermodynamic volatility reflects changes in the dynamic connectedness of drifting sea ice.
Figure 1: Ice floe speed measured at the camp North Pole 35 from January 1 to March 31, 2008.
Figure 2: Fragments of the drift trajectory of the ice research camp North Pole 35 in periods of dramatic changes in the direction of the drift. Arrows indicate the prevailing wind directions.
Navigating Gender Nuances: Assessing the Impact of AI on Employee Engagement in Slovenian Entrepreneurship

Background: Our research delved into exploring various selected facets of AI-driven employee engagement, from the gender perspective, among Slovenian entrepreneurs. Methods: This research is based on a random sample of 326 large enterprises and SMEs in Slovenia, with an entrepreneur completing a questionnaire in each enterprise. Results: Findings suggest that there are no significant differences between male and female entrepreneurs in Slovenia regarding various aspects of AI-supported entrepreneurial management practice, including the following: AI-supported entrepreneurial culture, AI-enhanced leadership, adopting AI to reduce employee workload, and incorporating AI tools into work processes. The widespread integration of AI into entrepreneurship marks a transition to a business landscape that values inclusivity and equity, measuring success through creativity, strategic technology deployment, and leadership qualities, rather than relying on gender-based advantages or limitations. Our research also focused on the identification of gender differences in path coefficients regarding the impact of the four previously mentioned aspects of AI on employee engagement. While both genders see the value in using AI to alleviate employee workload, the path coefficients indicate that female entrepreneurs report higher effectiveness in this area, suggesting differences in the implementation of AI-integrated strategies or tool selection. Male entrepreneurs, on the other hand, appear to integrate AI tools into their work processes more extensively, particularly in areas requiring predictive analytics and project scheduling. This suggests a more technical application of AI in their enterprises. Conclusions: These findings contribute to understanding gender-specific approaches to AI in enterprises and their subsequent effects on employee engagement.

Introduction

In a rapidly changing world where technological advancements are reshaping our everyday lives, entrepreneurs of all genders face unique challenges and opportunities. A key competency emerging as crucial for success in this dynamic environment, regardless of gender, is the ability to understand and apply systems thinking in conjunction with artificial intelligence (AI) [1-3]. Integrating AI into business strategies is crucial for entrepreneurs of both genders, as it levels the playing field and opens new avenues for innovation and success [2,4]. AI provides tools for data analysis, predictive modeling, and operational efficiency that can transform existing patterns into actionable strategies and innovations, thereby ensuring equal opportunities for success for entrepreneurs regardless of gender [5]. However, despite the optimistic views of AI as a democratizing force, it is important to consider diverse perspectives on its impact on different demographic groups, including gender. Research such as Szalavetz's study highlights the challenges of adopting AI in dependent market economies, illuminating the complexities of major power rivalries and their implications for innovation [6]. These findings emphasize the need for a nuanced approach to AI in entrepreneurship, acknowledging both the potential to enhance innovation and the risks of exacerbating existing disparities. For instance, Friederici et al.
[7], in their study of digital entrepreneurship in Africa, highlight the unique challenges and opportunities presented by the digital economy outside the Silicon Valley paradigm.They argue that while digital technologies offer unprecedented opportunities for innovation, their impact is deeply intertwined with local economic, social, and political contexts, which can significantly influence the effectiveness and reach of such technologies.Insights from their study underscore the importance of considering the broader ecosystem within which digital and AI innovations are deployed. AI has become a key factor in innovation and technological advancement in the modern business world.Its usage ranges from automating basic processes to enhancing strategic decision-making and leadership [8,9].In the context of entrepreneurship leadership, AI is not just a tool for optimizing operations but also a means for transforming traditional leadership styles and practices [9,10].AI is profoundly impacting the landscape of entrepreneurship, influencing how female and male entrepreneurs approach business challenges and opportunities [11].While AI offers tools for data analysis, predictive modeling, and operational efficiency, its utilization can differ between genders [10,[12][13][14][15][16]. Studies suggest that female entrepreneurs may prioritize AI for market research and customer experience enhancement, focusing on building relationships and understanding consumer needs [12,17].Conversely, male entrepreneurs often leverage AI more for operational efficiency and scaling business operations.These tendencies reflect broader patterns in entrepreneurial strategy and decision-making [13]. For several reasons, understanding the differences in how male and female entrepreneurs utilize AI is critical.Firstly, it helps design tailored support and resources more effectively for diverse entrepreneurial needs [18].Secondly, AI's ability to diminish biases in decision-making and resource allocation is crucial for fostering a balanced entrepreneurial ecosystem [19].Recognizing these distinctions also aids in appreciating the unique strengths that each gender brings to entrepreneurship [20].As AI technology advances, it becomes vital in bridging leadership and strategic gaps, enabling entrepreneurs of all genders to leverage their unique abilities for business success and innovation.This understanding is essential for nurturing a more inclusive and efficient entrepreneurial environment [1].Understanding the gender differences in entrepreneurs' use of AI is crucial for employee engagement.AI influences the work environment significantly, affecting employee engagement from both male and female entrepreneurs' perspectives [21].Female entrepreneurs often use AI to enhance relationship-building and address consumer needs, fostering an inclusive work culture.In contrast, male entrepreneurs may employ AI for operational efficiency, creating a more results-driven atmosphere [22][23][24].However, AI is also reducing these gender differences in leadership styles and usage.Its objective and data-driven nature promotes a uniform approach to leadership, enabling informed, unbiased decision-making that benefits all employees.This technological evolution is helping level the playing field, resulting in more gender-neutral practices in leadership and AI applications in business [14,19].Integrating AI into modern business practices is not just transforming operational efficiencies but transforming the structure of organizational 
leadership and employee engagement.Our research delves into this paradigm shift, focusing on how AI is being leveraged to enhance employee engagement in Slovenian enterprises.We particularly investigate gender dynamics in this context, exploring whether there are distinctive approaches between male and female entrepreneurs in utilizing AI for this purpose.The contribution of our research is multifaceted.The exploration of AI in the entrepreneurial context specifically highlights the intricate dynamics of its use among male and female entrepreneurs, focusing on AI-supported entrepreneurial culture, AI-enhanced leadership, adopting AI to reduce employee workload, and incorporating AI tools into the work processes and employee engagement.This nuanced approach not only reveals the unique challenges and opportunities that AI presents in the realm of business but also underscores the critical role of gender dynamics in the application of technological innovations.By understanding how male and female entrepreneurs differently leverage AI in enterprises and the subsequent effects on employee engagement, this study provides vital insights into the broader implications of AI in shaping inclusive and effective business practices. In this article, we focus on exploring the differences in the use of AI between female and male entrepreneurs.AI is becoming a key factor in entrepreneurship and plays a significant role in reducing traditional gender gaps.However, there is a lack of research directly addressing whether there are specific differences in the use of AI between female and male entrepreneurs.The contribution of this article is to fill this research gap.Our study aims to determine whether and how AI affects the reduction in or emphasis of gender differences in entrepreneurship.With this, we aim to understand the role of AI better in the entrepreneurial environment and its potential role in shaping a more equitable business world.Our research contributes to the growing discourse on the role of AI in business, offering novel insights into the gender dynamics of AI-enhanced employee engagement processes.Our findings provide valuable insights into how enterprises can better leverage AI's potential to improve business practices and enhance employee engagement, regardless of the entrepreneur's gender.By understanding these dynamics, enterprises can create more inclusive, effective, and innovative working environments, contributing to sustainable success and competitiveness. Female and Male Entrepreneurs The research conducted by Guzman and Kacperczyk [20] illuminates gender disparities within the entrepreneurial landscape, particularly in high-growth ventures.Their findings reveal that female-led ventures are significantly less likely than their male counterparts to secure external funding, with a 63-percentage point gap in venture capital acquisition.This gap originates not only from differential access to resources but is also entwined with gender biases and the initial growth orientation of startups.They highlight that 65 percent of this gender gap is due to women being less likely to establish ventures that signal growth potential to investors.Even after accounting for this orientation, a residual gap of 35 percent persists, suggesting significant ongoing investor biases against female entrepreneurs. 
Rugina and Ahl [25] discuss how women entrepreneurs in Central and Eastern Europe are perceived and influenced by both legacy socialist ideologies and emerging neoliberal market economies.They identify the following five prevailing constructs: women as untapped economic resources, casualties of gendered industrial cultures, lacking relevant skills, solutions to social problems, and in need of encouragement.These constructs underscore the complexities women face in navigating entrepreneurship within these regions. Vijayakumar's study [23] of 132 women entrepreneurs in South India explores the relationship between emotional intelligence and leadership styles.It reveals that educational level significantly impacts both emotional intelligence and leadership effectiveness, suggesting that higher emotional intelligence enables women to employ both transactional and transformational leadership styles more effectively. The insights from Sabharwal et al. [26] highlight the broader applicability of gendered leadership styles in the entrepreneurial context.They suggest that female entrepreneurs, like female MPA directors, may leverage transformational leadership to foster innovation, motivate teams, and navigate entrepreneurship challenges with a focus on collaboration and empathy. Lastly, the study by Rummana et al. [27], which examines 200 entrepreneurs in Bangladesh using the Technology Acceptance Model, explores gender differences in perceived usefulness, user-friendliness, and ICT usage.Their findings reveal that while male entrepreneurs generally display higher levels of flexibility and perseverance, female entrepreneurs report significantly higher perceptions of system usefulness and user-friendliness.Both genders associate higher ICT usage with innovativeness, underscoring the universal importance of leveraging technology for business advancement. AI-Supported Entrepreneurial Culture: Gender Perspectives in Entrepreneurial Contexts Women make significant contributions to the entrepreneurial landscape, though their involvement generally remains less than that of their male counterparts.The disparity in entrepreneurial activity between genders is influenced globally by diverse cultural norms and societal stereotypes [28].Integrating AI into this context has become pivotal for entrepreneurial success in the rapidly evolving business environment.AI-driven tools and systems profoundly impact entrepreneurial culture by influencing decision-making, communication flows, and employee engagement [29].For instance, AI can automate routine tasks, freeing time for creative and strategic endeavors, and thereby fostering innovation and efficiency [30].Furthermore, AI's data analytics capabilities provide insights into market trends and consumer behavior, enhancing business responsiveness and adaptability to market needs. 
While the benefits of AI in reshaping organizational culture are clear, the interaction with AI significantly varies between male and female entrepreneurs [10,12].Chae and Goh [11] found that female entrepreneurs effectively leverage digital entrepreneurship to enhance venture performance, especially when demonstrating high levels of specific innovativeness.They often use AI to improve customer experience and conduct market research, employing a more intuitive and empathetic approach to meet customer needs effectively [8].Conversely, the application of AI across businesses, particularly in shaping organizational culture, tends to be consistent across gender lines.Both male and female entrepreneurs recognize the value of AI in enhancing efficiency, improving decision-making processes, and fostering a culture of innovation and adaptability [4]. This uniform approach to AI integration suggests a shared understanding of its strategic importance, transcending traditional gender norms.AI is increasingly viewed as a tool for universal empowerment [2], with capabilities like data analytics, machine learning, and automation providing equal opportunities for all entrepreneurs to enhance their business processes.As such, AI acts as a catalyst for creating an inclusive and dynamic work environment where decisions are data-driven and processes are optimized for efficiency [31].Therefore, the following hypotheses are proposed: H1.0:There is no statistically significant difference in AI-supported entrepreneurial culture between male and female entrepreneurs. H1.1: There is a statistically significant difference in AI-supported entrepreneurial culture between male and female entrepreneurs. AI-Enhanced Leadership in Entrepreneurship: Bridging the Gap between Female and Male Entrepreneurs When delving into the distinctions between male and female leadership styles, research indicates [26,32] a tendency for women to lean more toward transformational leadership.This style, often characterized by motivation, inspiration, and a focus on team-building and collaboration, contrasts with the more traditional leadership approaches commonly seen in male leaders [26,32].Men in leadership roles frequently emphasize aspects such as administration, policy formulation, setting organizational priorities, communicating their strategic vision to stakeholders, and embodying the roles of advocates and role models [26,33].Female entrepreneurs tend to exhibit a more empathetic leadership style, which can foster a more inclusive and supportive work environment.This approach enhances team collaboration and encourages a more holistic understanding of stakeholder perspectives.In contrast, male entrepreneurs often prioritize their roles' structural and strategic aspects, focusing on operational efficiency, policy implementation, and goal-oriented strategies [34][35][36].However, modern approaches to leadership, such as transformational leadership, emphasize the importance of adaptability, empathy, and collaboration-qualities often attributed to female leaders [37].AI promises to bridge these differences.With the aid of data analytics, machine learning, and advanced information processing, AI enables entrepreneurs, regardless of gender, to make more informed and objective decisions [8].AI can contribute to reducing bias and promote a more inclusive working environment.Historically, the discourse around leadership and employee engagement has been influenced by gender norms and biases.However, with the advent of AI, there is a potential 
to transcend these traditional barriers [5,38].AI's data-driven and objective frameworks provide a unique opportunity to assess and address employee needs and engagement strategies in a more egalitarian and unbiased manner [39].According to this, the following two hypotheses are proposed: H2.0:There is no statistically significant difference in AI-supported leadership styles between male and female entrepreneurs. H2.1: There is a statistically significant difference in AI-supported leadership styles between male and female entrepreneurs. Bridging the Gender Divide: Adopting AI to Reduce Employee Workload in Entrepreneurship The incorporation of AI into various business practices has marked a significant transformation, introducing a broad spectrum of applications across numerous industries.AI's impact is extensive and varied, including streamlining repetitive tasks, markedly improving customer interactions, and notably increasing workforce efficiency.Crucially, AI plays a pivotal role in reducing human error and foreseeing potential crises, thereby reshaping the way businesses operate [40].The concept of algorithmic management is becoming more prominent in the business world.This approach involves the use of algorithms for handling managerial functions [41].Businesses are increasingly entrusting algorithms with responsibilities such as selecting personnel, distributing tasks, organizing schedules, and evaluating employee performance [1].New technologies have the potential to both enhance and detract from work design, significantly impacting various aspects of the employee experience.These innovations can lead to improved work structures and processes, boosting employee health, well-being, and engagement and enhancing overall performance [41].Also, Nahar [42] found that men and women perceive and use technology differently.For instance, men are often more confident in adopting new technologies and perceive them as more useful than women, who may show hesitancy and lack confidence in using such technologies.This difference in perception and usage can significantly impact how male and female entrepreneurs integrate technology into their business processes, including those aimed at reducing their employees' workload. H3.0:There is no statistically significant difference in adopting AI to reduce employee workload between male and female entrepreneurs. H3.1: There is a statistically significant difference in adopting AI to reduce employee workload between male and female entrepreneurs. 
Artificial Intelligence in Entrepreneurship: A Comparison of Incorporating AI Tools into Work Processes between Male and Female Entrepreneurs Technology is increasingly instrumental in supporting women entrepreneurs in their quest for social innovation, functioning in two key capacities.Firstly, technology itself can be the medium of social innovation, where innovative technological solutions directly address social challenges.This involves developing new technologies or leveraging existing ones to create positive social change.Secondly, technology is an enabler, providing the tools and platforms for women entrepreneurs to implement their social innovation ideas.This includes using technology for communication, data analysis, and reaching wider audiences to drive social change [15].A study focusing on entrepreneurs in Malaysia [43] found distinct differences in the use of information and communication technology (ICT) between male and female entrepreneurs.It noted that male entrepreneurs tended to be more flexible and persevering, while risk-taking propensity was a more significant determinant of technology usage among female entrepreneurs.Interestingly, both male and female entrepreneurs associated innovativeness with technology usage, but female entrepreneurs showed higher perceptions of the system's usefulness and ease of use.However, overall ICT usage and the use of basic and advanced systems for administrative, planning, and control purposes did not significantly differ based on gender.A more recent exploration of gender differences in technology use, as discussed by the European Institute for Gender Equality [44], reveals nuanced patterns.It suggests that societal norms and relations, which are influenced by technological transformations, shape the relationship between gender and technology.Women often face higher anxiety than men regarding IT use, leading to lower self-efficacy and perceptions of technology requiring more effort.This is compounded by gender norms affecting self-perceptions of technological proficiency.Furthermore, women are generally more concerned about digital technologies and tend to have more negative perceptions of them compared with men.Despite this, the ownership and use of digital technologies hold significant potential for the economic empowerment of women and increasing gender equality.According to Giuggioli and Pellegrini [39], the use of AI in entrepreneurship shows significant impacts and opportunities, regardless of the entrepreneur's gender.AI technologies, integral to innovations like the Internet of Things, Augmented Reality, and blockchain, are transforming entrepreneurial processes, including venture creation, decision-making, and operational performance enhancement.Hence, the following two hypotheses are proposed: H4.0:There are no statistically significant differences in the incorporation of AI tools into work processes between male and female entrepreneurs. H4.1: There is a statistically significant difference in the incorporation of AI tools into work processes between male and female entrepreneurs. 
AI Ethical Considerations in Entrepreneurship AI has emerged as a critical element in decision-making across various sectors, emphasizing the need for thorough ethical scrutiny.Fundamental to the responsible use of AI is transparency and fairness in algorithmic decision-making, ensuring users understand the basis of AI's conclusions, and addressing potential biases through ethical foresight and inclusive technology development [45][46][47].Ethical challenges, particularly concerning privacy, surveillance, and transparency, necessitate a delicate balance between technological advancement and the protection of employee rights [46].Gender considerations add complexity, as biases in AI algorithms may reinforce workplace gender disparities [48,49].Moreover, AI's automation potential raises concerns about job displacement, especially in roles vulnerable to automation, prompting the need for strategies to reskill and upskill employees to ensure an inclusive transition to AI-driven practices [50,51].Incorporating AI into business operations extends beyond technical issues to encompass cultural and organizational adjustments.Adapting to AI requires shifts in organizational structures, cultures, and mindsets, overcoming resistance to change, misunderstandings about AI's benefits, and the alignment of AI with strategic objectives.These challenges often vary by gender, influenced by differing leadership styles, resource access, and prevailing gender norms [47,49,52]. Comparative Analysis Model of AI Utilization in Male and Female Entrepreneurship to Increase Employee Engagement In enterprises where at least half of the executive positions are held by women, there is a noticeable enhancement in employee engagement and belief in the company's mission.According to Renzulli [53], such enterprises score higher in how inspired employees feel, showing greater autonomy and engagement and a higher likelihood to recommend the company's products.This evidence underscores the positive impact of female leadership on workplace dynamics and performance. In today's digital age, integrating AI into business strategies is essential for keeping pace with rapid changes and managing complex information flows [54].AI equips entrepreneurs with crucial insights for effective decision-making and strategy development, which is vital for maintaining competitiveness in a dynamic market [18,19].Gilal et al. [55] emphasize that successful leadership relies on personal traits rather than gender, advocating for a gender-balanced approach in technology development to ensure products and services are equitable. AI's role extends beyond enhancing decision-making; it also drives business development and encourages traditional businesses to adopt new technologies [56,57].This transformation is not just about improving operational efficiency but also about fostering a more engaged and innovative workplace [58,59].Deloitte's research [60] highlights that effective AI integration depends on organizational culture, trust, data fluency, and change management.When properly managed, AI can relieve entrepreneurs of administrative burdens, freeing them to focus on employee development and engagement strategies, thereby improving overall workplace morale and career development [61].Thus, we proposed two hypotheses as follows: H5.0:There is no statistically significant difference in the path coefficients in using AI to enhance employee engagement between male and female entrepreneurs. 
H5.1: There is a statistically significant difference in the path coefficients in the use of AI to enhance employee engagement between male and female entrepreneurs.

Figure 1 presents a comparative analysis of AI utilization to enhance employee engagement in enterprises led by male and female entrepreneurs. In the following, we examined the path coefficients, which present the strength and direction of the relationships between various constructs, such as AI-supported entrepreneurial culture, AI-enhanced leadership, adopting AI to reduce employee workload, incorporating AI tools into work processes, and employee engagement. The path coefficients helped us better understand how and to what extent the constructs in the model influence each other. This allowed us to determine which variables have stronger or weaker effects when comparing male and female entrepreneurs. We identified which gender has higher values in these connections, providing insights into how male and female entrepreneurs leverage AI differently to increase their employees' engagement. This comparison will shed light on potential differences in approaches to implementing AI strategies between genders and how these approaches affect employee engagement in their enterprises.

Data and Sample

This research involved a randomly selected sample of 326 SMEs and large enterprises in Slovenia. To be classified as a small-sized enterprise, a company must meet at least two of the following criteria within a fiscal year: (1) maintaining an average workforce of no more than 50 individuals, (2) generating annual net sales of up to EUR 8 million, and (3) having total assets worth up to EUR 4 million, under the provisions of ZGD-1 [62]. A medium-sized enterprise must meet two of these criteria: (1) the average number of employees during the financial year does not exceed 250, (2) net sales revenue does not exceed EUR 40 million, and (3) the value of assets does not exceed EUR 20 million. In contrast, large enterprises are defined by exceeding the following specific thresholds: (1) employing more than 250 people on average during a fiscal year, (2) achieving net sales revenue of over EUR 40 million, and (3) owning total assets exceeding EUR 20 million, as delineated in ZGD-1 [62]. The sample's makeup showed a distribution with 52.76% being large enterprises and 47.24% SMEs. The participant composition in this study included 54.91% male and 45.09% female entrepreneurs. The distribution of entrepreneurs by length of employment in the sample was as follows: 24.85% of entrepreneurs had an employment length ranging from 11 to 20 years. The most significant portion of the sample, 42.02%, fell within the employment length of 31 to 40 years. Those with an employment length of 21 to 30 years comprised 31.29% of the total sample. Respondents with an employment duration of over 41 years constituted 1.84% of the sample.

Research Instrument

We used a closed-type questionnaire with 5-point Likert-type scales. Items for the AI-supported entrepreneurial culture construct were adopted from Dabbous et al. [63], and items for the AI-enhanced leadership and employee engagement constructs were adopted from Wijayati et al.
[4].Items for the adoption of AI to reduce employee workload construct were adopted from Qiu et al. [64].Items for the incorporation of AI tools into work processes construct were adopted from Wamba-Taguimdje et al. [1] and Niederman [65]. Statistical Analysis To address the complexity of assessing the impact of AI on employee engagement across gender nuances in Slovenian entrepreneurship, this study employed a variety of statistical tests, each chosen for its specific ability to analyze the data in a way that aligns with the research objectives.The Mann-Whitney U test and SEM were pivotal in this analytical process, offering insights into the nuanced relationships and differences that may exist between male and female entrepreneurs' use of AI.In the initial phase, the Kolmogorov-Smirnov and Shapiro-Wilk tests were conducted to assess the normality of the data distribution.Given that the results indicated the data were not normally distributed (p < 0.001), we opted for a non-parametric approach for two independent samples-the Mann-Whitney U Test was used to examine statistically significant differences between entrepreneurs based on gender.The Mann-Whitney U Test was utilized to compare differences between male and female entrepreneurs on various aspects of AI-supported entrepreneurial management practices (AI-supported entrepreneurial culture, AI-enhanced leadership, adopting AI to reduce employee workload, and incorporating AI tools into work processes). We also used structural equation modeling for our data analysis.Within this framework, we constructed a model and analyzed the differences regarding constructs between male and female entrepreneurs (Figure 1).Moreover, we compared the model between male and female entrepreneurs and analyzed the path coefficients.Thus, SEM was chosen for its ability to analyze complex relationships between observed and latent variables.This method allowed us to construct a model that reflects the understanding of how AI impacts employee engagement, incorporating multiple variables and their inter-relations simultaneously.The use of SEM enabled us to test the hypothesized model of the impact of AI on employee engagement, accounting for the mediating effects of gender differences and providing a comprehensive understanding of the direct and indirect relationships between variables.We used structural equation modeling (SEM) and employed WarpPLS 8.0 as our software tool of choice.The selection of WarpPLS 8.0 was informed by its multitude of benefits and distinctive features not commonly found in other programs.A principal benefit we recognized is its ability to clearly define non-linear relationships between pairs of latent variables [66].For assessing validity, we scrutinized both the average variance extracted (AVE) and the composite reliability (CR) [66].The variance inflation factor (VIF) was utilized to detect any multicollinearity issues [67].Moreover, both the Mann-Whitney U test (Tables 1-4) and SEM are valid in analyzing the research data as they allow for a nuanced examination of the differential impacts of AI on male and female entrepreneurs and their practices concerning employee engagement.These methods complement each other, with the Mann-Whitney U test providing a preliminary understanding of the differences between genders, and SEM offering a detailed analysis of the complex relationships and underlying factors contributing to these differences.This dual approach ensures that the findings are robust, reliable, and reflective of the intricate 
dynamics at play in the impact of AI on employee engagement within the context of Slovenian entrepreneurship. Additionally, we adhered to the quality criteria for indicators outlined in Table 5 to evaluate the reliability of our model. Also, Table 6 shows key quality assessment indicators of the research model.

Results

In the following, we present descriptive statistics and the results of the Mann-Whitney U test for each construct related to the use of artificial intelligence between male and female entrepreneurs. Table 1 shows descriptive statistics and the Mann-Whitney test results for the AI-supported entrepreneurial culture construct. The results in Table 1 show no significant differences between male and female entrepreneurs in AI-supported entrepreneurial culture. On average, both male and female entrepreneurs agree that their companies' policies are clearly defined, their companies' management provides information to employees in a timely manner, and employees are familiar with all the services/products they offer/produce in the enterprise. In addition to these items, male entrepreneurs, on average, show marginally higher agreement than female entrepreneurs that their enterprises' culture is very responsive and changes easily. In the context of using AI technology in any part of the business, again, males reported a higher average agreement than females; however, on average, both genders agree with this statement. Males are usually more confident when responding to such questions. On the other hand, female entrepreneurs achieve a higher average agreement regarding the statement that employees fully understand the goals of their enterprise. For all items, the Mann-Whitney U test yields p > 0.05, which implies that the differences in the responses between male and female entrepreneurs are not statistically significant. This suggests that both male and female entrepreneurs share similar views regarding the incorporation of AI into their entrepreneurial culture. Thus, we accepted the hypothesis H1.0: There is no statistically significant difference in AI-supported entrepreneurial culture between male and female entrepreneurs.

Both male and female entrepreneurs may have similar levels of exposure to and access to AI technologies, leading to a common understanding and approach to integrating AI within their business practices. Also, uniformity in educational programs or professional training for entrepreneurs may result in a shared level of knowledge and skill in applying AI to business, leading to similar responses. There may be increasing gender parity in entrepreneurship, with women attaining similar levels of empowerment, resource access, and opportunities as their male counterparts, which is reflected in their similar perspectives on AI [15,34,42]. Table 2 presents descriptive statistics and the Mann-Whitney test results for the AI-enhanced leadership construct.
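To make the comparison procedure described above concrete, the sketch below screens one questionnaire item for normality with the Shapiro-Wilk test and then applies a two-sided Mann-Whitney U test between the male and female groups. This is a minimal illustration rather than the study's actual analysis code; the file name and column names (survey_responses.csv, gender, ai_culture_item1) are hypothetical placeholders.

```python
# Illustrative sketch (hypothetical data): normality screening followed by a
# Mann-Whitney U comparison of one Likert item between gender groups.
import pandas as pd
from scipy.stats import shapiro, mannwhitneyu

df = pd.read_csv("survey_responses.csv")  # hypothetical survey export
male = df.loc[df["gender"] == "male", "ai_culture_item1"]
female = df.loc[df["gender"] == "female", "ai_culture_item1"]

# Shapiro-Wilk: p < 0.05 indicates a departure from normality, which is what
# motivates the non-parametric comparison used in the paper.
for label, sample in [("male", male), ("female", female)]:
    w_stat, p_norm = shapiro(sample)
    print(f"Shapiro-Wilk ({label}): W = {w_stat:.3f}, p = {p_norm:.4f}")

# Two-sided Mann-Whitney U test for a difference between the two groups;
# p > 0.05 corresponds to retaining the null hypothesis (e.g., H1.0).
u_stat, p_value = mannwhitneyu(male, female, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.4f}")
```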
The results in Table 2 show that on average, male entrepreneurs exhibited a marginally higher level of agreement with the statements compared with female entrepreneurs.Males are usually more confident when responding to such questions.Notably, both male and female entrepreneurs displayed a shared perspective on the development of a clear vision for AI in their departments (p > 0.05).This convergence suggests a general agreement across genders on the importance and approach to strategic AI planning.Similarly, when it comes to understanding and resolving business problems using AI, the responses from both genders indicated a comparable level of competence (p > 0.05).This implies mutual confidence in leveraging AI for effective problem-solving, regardless of gender.Furthermore, the results show that entrepreneurs of both genders are equally adept at anticipating future business needs and proactively designing AI solutions.This forward-thinking approach signifies a gender-neutral perspective in AI integration for future business requirements.Additionally, the ability to work collaboratively with various stakeholders, such as data scientists, employees, and customers, to identify opportunities that AI might bring was perceived similarly by both male and female entrepreneurs (p > 0.05).This trend points toward a common acceptance of collaborative methods in maximizing the benefits of AI across different entrepreneurial environments.An important finding of this study is the similar view of male and female entrepreneurs on the presence of strong leadership supporting AI initiatives and their commitment to AI projects within their organizations.This reflects a widespread recognition of the pivotal role of leadership in successfully adopting and integrating AI technologies.Moreover, this study highlights that both genders view their organizations as having a culture of open communication and effective problem-solving, especially in AI-related contexts (p > 0.05); however, in this case, female entrepreneurs achieve a higher average agreement.This could indicate a broader trend in transparency and agility in dealing with AI challenges in the entrepreneurial world.Lastly, the provision of necessary training for dealing with AI applications was seen similarly by entrepreneurs of both genders.This underscores the recognized importance of education and skill devel-opment in fostering effective AI implementation.Thus, we accepted the hypothesis H2.0:There is no statistically significant difference in AI-supported leadership styles between male and female entrepreneurs.Overall, these findings suggest a notable parity between male and female entrepreneurs in their approach and attitude toward AI-enhanced leadership.This parity reflects a broader trend in the entrepreneurial world, where gender differences are diminishing in the face of technological advancements and the evolving role of leadership in business innovation.Table 3 presents descriptive statistics and the Mann-Whitney Test results for the adoption of AI to reduce employee workload construct.In Table 3, which examines the construct of Adopting AI to reduce employee workload, a subtly higher average agreement with the related statements is observed among female entrepreneurs compared with their male entrepreneur counterparts.While the median response for both male and female entrepreneurs was 4.00, the mean value of statements from female entrepreneurs was slightly higher.This suggests that female entrepreneurs may perceive AI as more effective in 
reducing the workload on administrative staff.When it comes to AI's capability to take orders and complete tasks, which is a direct measure of workload reduction, again, female entrepreneurs reported marginally higher mean scores.This could suggest that they are more optimistic about the practical applications of AI in day-to-day business activities, valuing its potential to enhance operational efficiency.In the realm of AI aiding in searching and analyzing information, female entrepreneurs again manifested a marginally higher mean response.This perspective could indicate a keen awareness of AI's time-saving and efficiency-boosting capabilities, particularly in handling data-intensive tasks.Lastly, the belief that AI can help in getting jobs done and save employees' work time was also more pronounced among female entrepreneurs.This slightly higher agreement aligns with a pragmatic view where the immediate benefits of AI in enhancing workplace productivity are particularly valued.The inclination of female entrepreneurs to express a slightly higher agreement with these statements about AI's role in reducing workload might stem from a variety of factors.For example, Jahnavi and Perwez [68] and Castrillon [69] summarize that it could be influenced by a more pronounced emphasis on efficiency and work-life balance or a pragmatic approach toward technology adoption focusing on tangible benefits.Additionally, it might reflect a more people-centric approach in business management, where the well-being and efficiency of employees are given priority.The Mann-Whitney U test shows that there are no statistically significant differences between male and female entrepreneurs in adopting AI to reduce employee workload (p > 0.05); thus, we confirm hypothesis H3.0:There is no statistically significant difference in adopting AI to reduce employee workload between male and female entrepreneurs.Table 4 shows descriptive statistics and the Mann-Whitney test results for the incorporation of AI tools into work processes construct.The results in Table 4 indicate that both male and female entrepreneurs, on average, agree that their enterprise has a digital transformation strategy, including AI adoption, followed by their enterprise utilizing AI technologies for work design to plan new tasks and use predictive analytics tools to improve work.Male entrepreneurs display a slightly higher average level of agreement with all items related to incorporating AI tools into work processes, except for using chatbots to improve work and using AI technologies for work design, where female entrepreneurs have a marginally higher average agreement.For example, Chae and Goh [11] found that male entrepreneurs are more likely to engage in digital entrepreneurship.This trend could be attributed to factors such as the historical predominance of males in tech-related fields, gender-specific network and resource access, and societal attitudes toward technology and risk-taking.The results of the Mann-Whitney U test indicate that there are no statistically significant differences between genders (p > 0.05) in the incorporation of AI tools into work processes (Table 4).The proactive incorporation of AI tools into work processes underscores a strategic alignment with digital transformation goals, showcasing a commitment to leveraging advanced technologies to drive business success.This alignment is evident across male and female entrepreneurs, reflecting a shared understanding of AI's strategic importance in modern business practices 
[11,38,39].Therefore, we confirmed hypothesis H4.0:There are no statistically significant differences in incorporating AI tools into work processes between male and female entrepreneurs.In the second step of our study, structural equation modeling was utilized to estimate path coefficients, focusing on the differences between female and male entrepreneurs in relation to AI constructs and its impact on employee engagement, for the model presented in Figure 1.We compared the structural models for both genders, analyzing the path coefficients to gain insights into the influence of various constructs in the model on each other.This analysis not only illuminated the distinct approaches and perceptions of male and female entrepreneurs toward AI but also highlighted specific areas where interventions can be made to enhance AI adoption and its benefits across genders.By pinpointing these differences, this study provides actionable insights for developing more inclusive AI strategies that accommodate female-and male-led enterprises' unique needs and strengths.Table 5 serves as an integral component of this research, showcasing the results of a factor analysis conducted to assess various AI-related constructs within the organizational setting.It systematically presents the reliability of measurement scales by Cronbach's alpha, sampling adequacy via KMO and Bartlett's test, commonalities, and factor loadings for each item under study. The results presented in Table 5 reveal that both the measure of sampling adequacy and Bartlett's test of sphericity for each variable confirm the appropriateness of applying factor analysis.The commonalities for all five constructs exceed 0.40; thus, no variables were discarded.Moreover, all factor loadings are higher than 0.60.All measurement scales demonstrate high reliability (all Cronbach's alpha > 0.80).In addition to the results in Table 5, the total variance explained for AI-supported entrepreneurial culture is 63.767%, the total variance explained for AI-enhanced leadership is 65.098%, the total variance explained for adopting AI to reduce employee workload is 74.425%, the total variance explained for incorporating AI tools into work processes is 76.510%, and the total variance explained for employee engagement is 84.209%.Table 6 presents model fit and quality indicators.For all five constructs, the CR exceeds 0.7, and the AVE values surpass 0.5, with CR values consistently outstripping AVE values.This configuration underscores the achievement of convergent validity across all constructs.An R 2 value of 0.764 indicates that the model explains approximately 76.4% of the variance in the dependent variable using the independent variables, which is considered a relatively high value.This suggests that the model fits the data well and possesses substantial predictive power.A Q 2 value greater than 0 indicates that the model possesses predictive relevance.Predictive relevance is a key indicator that the model provides useful insights that can be applied for practical or research purposes.For practical applicability, Q 2 values greater than 0 are generally considered [66].The VIF values, which fell between 1.012 and 1.084 and are well below the threshold of 5.0, indicate the absence of collinearity issues in the structural model's outcomes.Table 8 presents standardized path coefficients for male and female entrepreneurs. 
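The convergent-validity thresholds cited above (CR above 0.7, AVE above 0.5) can be reproduced with a short numerical sketch. The standardized loadings below are invented for illustration and are not values from Table 5; the formulas are the standard composite reliability and average variance extracted expressions rather than WarpPLS output.

```python
# Minimal sketch: composite reliability (CR) and average variance extracted
# (AVE) for one construct, computed from hypothetical standardized loadings.
import numpy as np

loadings = np.array([0.78, 0.82, 0.75, 0.80, 0.84])  # hypothetical loadings

ave = np.mean(loadings ** 2)                # AVE = mean of squared loadings
error_variance = np.sum(1 - loadings ** 2)  # summed indicator error variances
cr = loadings.sum() ** 2 / (loadings.sum() ** 2 + error_variance)

print(f"AVE = {ave:.3f}  (convergent validity: AVE > 0.5)")
print(f"CR  = {cr:.3f}  (reliability: CR > 0.7)")
```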
Table 8 shows the results of the SEM and the structural coefficients of the relationships in the basic structural model. It reveals that all constructs positively affect employee engagement for both male and female entrepreneurs, and the path coefficients show how strongly each construct does so for each gender. For AI-supported entrepreneurial culture (γ = 0.131, p < 0.05) and incorporating AI tools into work processes (γ = 0.184, p < 0.05), the path coefficients are slightly higher for male entrepreneurs, indicating a stronger effect on employee engagement compared with female entrepreneurs. However, when it comes to adopting AI to reduce employee workload (γ = 0.173, p < 0.05) and AI-enhanced leadership (γ = 0.157, p < 0.05), the effects are stronger for female entrepreneurs, as indicated by higher path coefficients. Additionally, a positive link direction is observed in all cases. Furthermore, Cohen's effect size values show that the influence of the predictive latent variables is of high strength in all cases. Thus, we accepted hypothesis H5.1: There is a statistically significant difference in the path coefficients in the use of AI to enhance employee engagement between male and female entrepreneurs.

Discussion

The integration of AI into entrepreneurship represents a transformative shift in business operations, strategies, and scaling, particularly when viewed through the lens of gender. The democratization of access to AI tools has the potential to reduce traditional disparities between male and female entrepreneurs, fostering a more inclusive and equitable business environment [10,17]. This aligns with prior studies that highlight AI's role in leveling the playing field, enabling entrepreneurs of all genders to enhance operational efficiency and engage more effectively with employees [1,4,38,39]. Our findings confirm that both male and female entrepreneurs recognize the importance of clear company policies and timely communication, which are crucial for effective AI integration. However, male entrepreneurs tend to view their business culture as more adaptable to technological changes, which may facilitate higher employee engagement with AI tools [11,19]. This observation is consistent with research suggesting that proactive leadership in technology adoption can significantly influence organizational culture and employee responsiveness [70]. Interestingly, our results revealed gender differences in the perception and implementation of AI, with male entrepreneurs displaying a higher path coefficient for the impact of AI-supported entrepreneurial culture on employee engagement. This may reflect broader societal norms and expectations about gender roles in technology and innovation [28,71,72]. Such disparities underscore the need for a gender-neutral approach in adopting AI, ensuring that both male and female entrepreneurs can leverage these technologies to compete effectively and create a collaborative, innovative workplace environment. To enhance the impact of AI within entrepreneurial cultures, it is essential for leaders to foster an inclusive atmosphere that supports technological advancement and innovation. Providing adequate training and resources, ensuring open communication, and actively involving employees in AI integration can help maximize the benefits of AI across genders. Furthermore, regular monitoring and adaptation of AI strategies based on employee feedback can improve outcomes and foster a culture of continuous
improvement and inclusivity [29,30,60].The results presented in Table 2 reveal that both female and male entrepreneurs generally agree on having a clear vision for their departments.However, there are notable differences in other areas.Male entrepreneurs report a strong understanding of business problems and the ability to direct AI initiatives effectively, emphasizing the importance of providing employees with the necessary training for handling AI applications.Female entrepreneurs, on the other hand, highlight the significance of fostering open communication and immediate problem resolution, which are complemented by proactive AI solution designs to anticipate and meet future business needs of functional managers, suppliers, and customers.Table 8 shows that AI-enhanced leadership significantly impacts employee engagement, with a higher path coefficient observed for female entrepreneurs.Research [26,32,71] suggests that female entrepreneurs often exhibit more transformational leadership qualities, such as empathy, inclusiveness, and a collaborative approach.These qualities are essential for effective communication and relationship building, which are pivotal in engaging employees, particularly when implementing new technologies like AI.Based on these findings, several recommendations can be made to maximize the benefits of AI-enhanced leadership for improving employee engagement across genders as follows: (1) personalized development: entrepreneurs should use AI data analysis to develop personalized mentoring and coaching programs that align with individual employee needs and goals; (2) flexible work environments: AI can help create flexible work conditions that cater to employee preferences, such as adjustable working hours or remote work options, thereby boosting satisfaction and engagement; (3) leadership feedback: AI tools can offer insights into leadership practices, enabling entrepreneurs to enhance their understanding and empathy toward employees, which can lead to improved motivation and engagement; (4) performance recognition: employing AI to analyze and recognize employee achievements promptly can strengthen appreciation and contribute to higher engagement; and (5) data-driven decision-making: utilizing AI for strategic decision-making can enhance project efficiency and success, positively impacting employee engagement.By integrating these AI-driven strategies into their leadership practices, entrepreneurs can foster an encouraging and supportive work environment that enhances employee engagement and leverages the unique strengths of both male and female leadership styles.The findings in Table 3 reveal that both female and male entrepreneurs predominantly agree that AI technology implemented in their enterprises can efficiently handle orders and tasks, consequently alleviating employee workload.Additionally, there is consensus that AI aids in information search and analysis, leading to a reduction in employee burden.Moreover, respondents from both groups acknowledge that AI streamlines task completion, saves work time, and enhances communication with users/customers, thereby easing the workload of employees.Female entrepreneurs, on average, demonstrate higher agreement with utilizing AI to mitigate employee workload compared with their male counterparts.Table 8 further indicates that the impact of AI adoption on reducing employee workload shows a stronger path coefficient among female entrepreneurs than male entrepreneurs.This could be attributed to various factors such as the 
inclusive and empathetic leadership styles often exhibited by female entrepreneurs, which prioritize employee well-being and workload management [23,24].Enterprises led by female entrepreneurs may cultivate a culture that values work-life balance, employee satisfaction, and technological support, thus enhancing employee engagement [26,32,73].This culture, emphasizing a reduction in employee workload through AI, could contribute to higher employee engagement [4].Female entrepreneurs may adopt a comprehensive approach to AI integration, ensuring that these technologies genuinely alleviate employee burdens rather than solely automate tasks.This might involve extensive training, support, and feedback mechanisms to ensure successful adoption [26,32,34,68,73].To adapt AI to reduce employee workload, we propose the following recommendations for entrepreneurs, regardless of gender.Entrepreneurs should use AI to automate routine and time-consuming tasks such as data processing, email management, and administrative duties.This can free up employees' time for more strategic and creative tasks, enhancing their engagement.Entrepreneurs should use AI to analyze communication within the enterprise and identify potential issues or bottlenecks (AI can suggest improvements or automate certain aspects of communication to make it more efficient and less burdensome) as well as develop AI systems that can recognize employee engagement and success and automatically suggest rewards or acknowledgments.This can increase employee engagement.Also, entrepreneurs should implement AI tools for monitoring the well-being of employees, which can proactively identify signs of overload or stress and use AI to analyze the performance and preferences of employees to tailor their work tasks and projects.Ensuring employees are engaged in tasks that match their skills and interests reduces the feeling of overload.By employing these strategies, entrepreneurs can leverage the power of AI to improve the work environment, reduce employee workload, and increase their engagement.Entrepreneurs should involve employees in decisions related to AI adoption and application.This could include surveys, focus groups, or inclusion in pilot projects.Employee involvement can lead to higher acceptance and engagement with AI technologies.Finally, entrepreneurs should regularly evaluate the impact of AI on employee workload and engagement; they should be prepared to adjust strategies based on feedback and outcomes to ensure that AI adoption remains aligned with employee well-being and organizational goals; and they should leverage AI to automate routine tasks and free up employees for more creative and meaningful work.This not only reduces workload but also enhances employee engagement. 
The adoption of AI technologies across both genders indicates a strong commitment to digital transformation strategies among entrepreneurs, as shown in Table 4. However, nuances exist in their application: male entrepreneurs are slightly more inclined toward using project scheduling and resource management tools, while female entrepreneurs favor tools like Robotic Process Automation. This gender-based divergence in AI tool utilization underscores the potential of digital technologies not only to streamline operations but also to address gender disparities in entrepreneurship. Significantly, the adoption of these technologies offers a pathway to democratizing entrepreneurship by providing equal access to essential resources and networks, thereby enhancing female entrepreneurs' competitiveness in traditionally male-dominated markets [74,75]. Digital platforms also facilitate flexible work arrangements, which can mitigate some societal barriers women face, thus promoting a more inclusive and dynamic business environment. However, Table 8 reveals the higher effectiveness of AI tools in enhancing employee engagement among male entrepreneurs. To address this and foster an equitable work environment, it is crucial for entrepreneurs to develop AI integration strategies that are sensitive to the diverse needs of all employees. This includes forming gender-inclusive teams, improving communications about AI benefits, and employing AI analytics to deepen insights into business operations. Additionally, integrating AI solutions like chatbots and virtual assistants can free up employees to tackle more complex challenges, thereby boosting engagement.

The main contributions of this study are as follows:

1. Unveiling gender-specific dynamics in AI use: Our study explored how male and female entrepreneurs in Slovenia differ in their perception and implementation of AI technologies. This contribution significantly expands the literature on the impact of gender on the adoption and use of technology in entrepreneurship.

2. Development and testing of a model: Our study developed and empirically tested a model that connects various aspects of AI-supported entrepreneurial culture, AI-enhanced leadership, adopting AI to reduce employee workload, and incorporating AI tools into work processes with employee engagement. This model offers a new framework for understanding the complex interactions between AI and entrepreneurship.

3. Focus on the Slovenian entrepreneurial context: With an emphasis on Slovenia, which has been a relatively unexplored environment in the context of AI in entrepreneurship, our study contributes to a better understanding of global trends and their local application.

The practical implications of this study are as follows:

1. Improvement of entrepreneurial practices: The findings of our study provide practical insights into how enterprises can better leverage the potential of AI to improve leadership practices, reduce employees' workload, and increase their engagement. This includes the following: (1) Tailored AI training programs: given the gender differences in AI adoption and utilization observed, enterprises should develop gender-sensitive training programs. For example, since female entrepreneurs may prioritize AI for market research and customer experience, training initiatives should focus on enhancing these skills among female entrepreneurs, offering tools and case studies that align with their strategic preferences. (2) Gender-inclusive AI tool development: enterprises should involve both male and female entrepreneurs in the development phase of AI tools. This involvement can ensure that the tools are designed to meet the varied needs and preferences of all users, ultimately leading to broader acceptance and more effective use across the business. (3) Strategic decision-making support: enterprises should develop AI-driven analytic tools that specifically aid in strategic decision-making, ensuring they are accessible and adaptable to both male and female entrepreneurs. Such tools can help in identifying trends, forecasting, and providing insights that cater to the distinct strategic inclinations observed among genders. (4) Enhancing employee engagement through AI: enterprises should implement AI systems that actively monitor employee engagement and workload, tailored to the different management styles of male and female entrepreneurs. This can help in adjusting work processes in real time to enhance productivity and work engagement. (5) Community building and networking through AI: enterprises should facilitate AI-enabled platforms that foster networking and mentorship among entrepreneurs. These platforms can be designed to encourage interaction across genders, promoting knowledge exchange and collaboration that respects and utilizes the unique strengths of each group.

2. Support for policymakers: Our research offers a basis for the development of targeted policies and programs that promote gender equality in entrepreneurship and technology, focusing on utilizing AI to achieve these goals.

3. Advice for entrepreneurs: This study provides advice for male and female entrepreneurs on how AI technologies can be successfully integrated into their business models and work processes to improve employee engagement and foster innovation.
Limitations and Future Possibilities

The limitations of this study primarily revolve around its geographical scope: while it provides valuable insights into the gender nuances of AI's impact on employee engagement within the Slovenian entrepreneurial context, it focuses solely on Slovenian enterprises. Acknowledging this limitation, we advocate for future research to adopt a more global perspective, encompassing a diverse range of countries and regions. Such research should aim to explore the multifaceted relationship between AI and gender in entrepreneurship across various cultural and economic landscapes. It is imperative to consider how different societal norms, business practices, and levels of AI maturity could affect the integration of AI technologies in businesses and their subsequent impact on employee engagement and gender dynamics. Moreover, comparative studies between countries could unveil how regional variations in AI adoption influence the empowerment or marginalization of different genders within the entrepreneurial ecosystem. This could involve investigating factors such as access to AI technologies, the availability of skills training, and the presence of supportive policies for gender equality in the tech sector. Future research recommendations include expanding this study to include a wider geographical range, incorporating longitudinal data to examine changes over time, and integrating qualitative research methods to gain deeper insights into the subjective experiences of entrepreneurs with AI. Furthermore, investigating the role of AI in different industry sectors could provide a more nuanced understanding of its impact on gender dynamics within entrepreneurship.

One of the novel aspects of our study was the comparative analysis of path coefficients between male and female entrepreneurs within the Slovenian entrepreneurship context. This methodological choice was driven by our goal to uncover gender-specific dynamics in the adoption and impact of AI technologies on entrepreneurship, a significant yet underexplored area of research. By exploring these differences, we aimed to contribute to the nuanced understanding of how gender influences technological engagement in entrepreneurship. We directly compared the path coefficients for male and female entrepreneurs to highlight potential patterns and generate hypotheses for future investigations. This limitation underscores the importance of further research that could employ additional statistical methodology. Future studies could build on our initial findings by analyzing a multigroup structural equation model to statistically test differences between the path coefficients of male and female entrepreneurs.

The integration of AI into entrepreneurship marks a significant shift toward creating equitable opportunities for entrepreneurs of all genders. By harnessing AI to bridge historical gaps, the business world is evolving into a more inclusive arena where the success of male and female entrepreneurs is determined by their skills and contributions rather than gender [16,38]. This evolution not only fosters gender equality but also stimulates innovation and competitiveness on a global scale. Embracing AI equips entrepreneurs with the tools to devise strategies that bolster engagement, productivity, innovation, and harmony within the workplace, steering businesses toward lasting success in an increasingly inclusive economic environment [1,4,5,39].
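To illustrate the kind of multigroup comparison suggested above, the sketch below applies a simple two-sided z-style test of whether one standardized path coefficient differs between the male and female subsamples. It is offered only as an illustration under stated assumptions: the male coefficient (0.184) mirrors the value reported in Table 8, while the female coefficient and both standard errors are invented placeholders, since the study does not report them.

```python
# Hedged sketch: z-style test of whether a path coefficient differs between
# two independent groups, assuming approximately normal sampling distributions.
import math
from scipy.stats import norm

def compare_path_coefficients(b1: float, se1: float, b2: float, se2: float):
    """Two-sided test of H0: the path coefficient is equal across groups."""
    z = (b1 - b2) / math.sqrt(se1 ** 2 + se2 ** 2)
    p = 2 * (1 - norm.cdf(abs(z)))
    return z, p

# Example: incorporating AI tools into work processes -> employee engagement.
# b1 mirrors the reported male coefficient; b2, se1, and se2 are assumed.
z, p = compare_path_coefficients(b1=0.184, se1=0.05, b2=0.120, se2=0.05)
print(f"z = {z:.2f}, p = {p:.3f}")
```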
Conclusions
In the digital transformation era, AI is revolutionizing not just the realms of data analytics or robotic automation but also redefining enterprises' approaches to managing workplace dynamics and employee interactions. One of the more intriguing outcomes of AI's integration into the workplace is its potential to "blur" and challenge traditional gender norms, especially in entrepreneurship. However, as AI assumes a central role in businesses, questions arise about gender disparities or unities in its application. Human biases often come into play during decision-making processes. AI systems can facilitate objective decision-making in various leadership aspects, such as promotions, assignments of employee tasks, and evaluations of employees. By relying on quantifiable metrics and data-driven insights rather than subjective judgments, AI minimizes bias based on gender or any other factors. This fosters a sense of fairness among employees, reduces disparities, and leads to employee engagement. When employees believe that their company values diversity and gives everyone an equal chance to succeed, they are more likely to be engaged and committed to their roles. High employee morale often leads to increased productivity and work engagement. Our study's findings indicate no statistically significant differences between male and female entrepreneurs in Slovenian enterprises across the following key constructs: AI-supported entrepreneurial culture, AI-enhanced leadership, adopting AI to reduce employee workload, incorporating AI tools into work processes, and employee engagement. Understanding such trends helps businesses to adapt and foster a more inclusive and engaged work environment. Our study holds significant implications, suggesting that integrating AI into the work environment can reduce gender disparities in decision-making and leadership within Slovenian enterprises. The absence of significant gender differences in these areas implies that AI-driven approaches can create a more level playing field where gender-related factors influence leadership and technology practices less. Our research findings underscore AI's potential to promote fairness and objectivity in various aspects of business management. Our study significantly advances the understanding of gender's complex roles in utilizing AI within the entrepreneurial environment. It reveals how AI is a transformative force in reducing traditional gender gaps in entrepreneurship, providing a fresh perspective on how technology's integration reshapes business operations and strategies. By analyzing how male and female entrepreneurs differently experience and embrace AI technologies, our research underscores AI's capacity to cultivate a business environment that is more inclusive, equitable, and competitive. This study adds to the academic discourse on AI and entrepreneurship. It provides actionable insights for practitioners aiming to harness AI's full potential while promoting gender equality within the entrepreneurial sector.
Figure 1. Comparative analysis of AI utilization for employee engagement in male- and female-led enterprises.
Table 1. Descriptive statistics and the results of the Mann-Whitney test for the AI-supported entrepreneurial culture construct.
Table 2. Descriptive statistics and the Mann-Whitney test results for the AI-enhanced leadership construct.
Table 3. Descriptive statistics and the Mann-Whitney test results for the adoption of AI to reduce employee workload construct.
Table 4. Descriptive statistics and the Mann-Whitney test results for the incorporating AI tools into work processes construct.
Table 5. Factor analysis results.
Table 6. Model fit and quality indicators.
Table 6 reveals statistically significant values for APC, ARS, and AARS, all with p-values below 0.05, indicating strong model predictors. The AVIF and AFVIF values are under 5.0, demonstrating low collinearity concerns. SPR, RSCR, SSR, and NLBCD exceed their minimum thresholds, ensuring model validity. The GoF (goodness-of-fit) indicator results show that the model is highly appropriate. Table 7 shows the quality indicators of the structural model.
Table 7. Indicators of the quality of the structural model.
Table 8. Standardized path coefficients of male and female entrepreneurs.
(1) Tailored AI training programs: given the gender differences in AI adoption and utilization observed, enterprises should develop gender-sensitive training programs. For example, since female entrepreneurs may prioritize AI for market research and customer experience, training initiatives should focus on enhancing these skills among female entrepreneurs, offering tools and case studies that align with their strategic preferences. (2) Gender-inclusive AI tool development: enterprises should involve both male and female entrepreneurs in the development phase of AI tools. This involvement can ensure that the tools are designed to meet the varied needs and preferences of all users, ultimately leading to broader acceptance and more effective use across the business. (3) Strategic decision-making support: enterprises should develop AI-driven analytic tools that specifically aid in strategic decision-making, ensuring they are accessible and adaptable to both male and female entrepreneurs. Such tools can help in identifying trends, forecasting, and providing insights that cater to the distinct strategic inclinations observed among genders. (4) Enhancing employee engagement through AI: enterprises should implement AI systems that actively monitor employee engagement and workload, tailored to the different management styles of male and female entrepreneurs. This can help in adjusting work processes in real time to enhance productivity and work engagement. (5) Community building and networking through AI: enterprises should facilitate AI-enabled platforms that foster networking and mentorship among entrepreneurs. These platforms can be designed to encourage interaction across genders, promoting knowledge exchange and collaboration that respects and utilizes the unique strengths of each group.
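The gender comparisons summarized in Tables 1-4 above rely on Mann-Whitney U tests of questionnaire items. A minimal sketch of how such a comparison could be run is given below; the two arrays of Likert-type scores are hypothetical placeholders rather than the study's data.

import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical Likert-type responses (1-5) to one questionnaire item,
# split by the gender of the entrepreneur leading the enterprise.
male_scores = np.array([4, 5, 3, 4, 4, 2, 5, 3, 4, 4])
female_scores = np.array([3, 4, 4, 5, 3, 4, 4, 3, 5, 4])

# Two-sided Mann-Whitney U test: a non-parametric comparison of two
# independent samples, suitable for ordinal item scores.
u_stat, p_value = mannwhitneyu(male_scores, female_scores, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.3f}")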
2024-04-26T15:25:51.481Z
2024-04-24T00:00:00.000
{ "year": 2024, "sha1": "3684881e6a9cc5a8cb18d0f0f45708211fc3f75a", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2079-8954/12/5/145/pdf?version=1713957621", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "4d3791787fb94ec261506e8539a79e5bbad0922f", "s2fieldsofstudy": [ "Business", "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
251626726
pes2o/s2orc
v3-fos-license
A qualitative study of pregnant women's opinions on COVID-19 vaccines in Turkey Objectives to examine pregnant Turkish women's opinions on COVID-19 vaccines. Design a qualitative approach was used to gather data through semi-structured interviews. Participants and setting 16 women about to receive a vaccine during their pregnancy and who did or did not experience vaccine hesitancy participated. Analysis qualitative content analysis. Findings three main themes emerged regarding the pregnant women's opinions on COVID-19 vaccines: fear, security/insecurity and social support. Key conclusions and implications for practice pregnant women mostly recalled their babies and fears about COVID-19 vaccines. Although the fear of vaccines created vaccine hesitancy during pregnancy, the fear of contracting COVID-19 led to a positive attitude to the vaccines. It is critical to provide pregnant women with information about COVID-19 and vaccines for the disease in order to enhance vaccination rates among pregnant women. Introduction SARS-CoV-2 was recognized in the world in December in 2019, and COVID-19, which was caused by this virus, spread very quickly ( WHO, 2022a ). Due to a rapid global dissemination, the World Health Organization (WHO) declared COVID-19 as a pandemic on 11 March 2020 ( WHO, 2022b ). To control the pandemic, during which millions of people became infected and died ( WHO, 2022a ), vaccination against COVID-19 started worldwide ( The New York Times, 2022 ). Research and development on COVID-19 vaccines was undertaken, and by 25 January 2022 nine vaccines were approved ( The New York Times, 2022 ). Despite these advances, the rate of the people vaccinated against COVID-19 was, and is not, considered sufficient ( WHO, 2022a ). Opinions of people on COVID-19 vaccines play an important role in the low vaccination rate ( Dodd et al., 2021 ;Eberhardt and Ling, 2021 ;Moore et al., 2021 ). These opinions can be affected by information about vaccine effectiveness ( Dodd et al., 2021 ), a feeling of insecurity about the countries where the vaccines were developed ( Moore et al., 2021 ), conspiracy beliefs ( Eberhardt and Ling, 2021 ) and dissatisfaction with explanations about the vaccines ( Dodd et al., 2021 ). Even individuals not experiencing a special condition can have opinions against vaccines. Therefore, what pregnant women think of the vaccines is worth an in-depth examination. When projects utilising COVID-19 vaccines were first launched, it was recommended that most individuals should be vaccinated against the disease. However, there was uncertainty about special groups, such as pregnant women, of people. With an increase in vaccine experimental studies the uncertainty about pregnant women reported to have a risk of experiencing a more severe infection disappeared ( CDC, 2021 ). Then the WHO started to recommend giving a COVID-19 vaccine to women planning to become pregnant, pregnant women and breastfeeding women ( WHO, 2021 c). Although women experiencing hesitancy received a COVID-19 vaccine before becoming pregnant ( Gencer et al., 2021 ), they may reject booster doses because of their worries about harm to their pregnancy and baby ( Anderson et al., 2021 ). Therefore, the aim of the present study was to examine pregnant women's opinions on COVID-19 vaccines. Study design This qualitative study was based on the phenomenological approach. This approach focuses on the experiences of people about a phenomenon and is frequently used in health research ( Neubauer et al., 2019 ). 
The phenomenon examined in the present https://doi.org/10.1016/j.midw.2022.103459 0266-6138/© 2022 Elsevier Ltd. All rights reserved. study was pregnant women's opinions on COVID-19 vaccines. The study was reported in accordance with The Standards for Reporting Qualitative Research ( O'Brien et al., 2014 ). Recruitment and sampling The study sample comprised pregnant women (1) living in Turkey, (2) able to speak and understand Turkish, (3) at or over the age of 18 years, (4) with a minimum of 12-weeks-gestation, (5) having no contraindications to receive a COVID-19 vaccine (6) and experiencing or not experiencing hesitancy about vaccination. The women were accessed through snowball sampling between 6 October and 13 November in 2021. Snowball sampling (or chain sampling) is a nonprobability sampling technique where existing study subjects recruit future subjects from among their acquaintances ( Streubert and Carpenter, 2011 ;Houser, 2014 ). In research, the announcement of the study was made in the accounts mostly followed by pregnant women on social media sites such as Facebook and Instagram. The first pregnant woman included in the study was accessed on a social media site. With the help of the first pregnant woman who was contacted on social media, the other pregnant woman was reached, it was continued in a chained manner until the data saturation was reached. As recommended in the literature ( Streubert and Carpenter, 2011 ;Houser, 2014 ), interviews were continued until no new information was obtained. The study was completed with 16 pregnant women. Data collection Data were collected in two forms: a descriptive characteristic form and a semi-structured interview form at in-depth interviews on the phone which were conducted in the Turkish language. The descriptive characteristics form was composed of questions about sociodemographic characteristics such as age and income. The semi-structured interview form included the following open-ended questions: "What do you think of receiving a COVID-19 vaccine in pregnancy?" and "What are your reasons for getting or not getting a COVID-19 vaccine?". Each participant was interviewed by the same researcher (DFY). To clarify the participants' responses, improve their confirmability and offer the participants a chance to change their responses, the researcher read the summary of what they had said after each question and asked for their confirmation. Each interview lasted 17-27 minutes and was voice recorded. Analysis Content analysis was used for data analysis as described by Graneheim and Lundman, (2004) . Voice recordings of the interviews were transcribed verbatim and documented. The participants were assigned numbers such as P1, P2… P16 in the documents to keep their identities confidential. The interviews were analyzed by two researchers individually. Differences in codes and categories were compared and the final themes and categories were created ( Graneheim and Lundman 2004 ). The themes were explained and presented with quotations from the participants. The quotations were first translated into English and then the English versions were back-translated into Turkish to ensure that they were expressed correctly. Word cloud analysis, which is known to facilitate understandability of data ( Bletzer, 2015 ), was performed with MAXQDA 2022. The most frequently used 200 words at the interviews were selected. 
Pronouns (I, you, he, they, me, him, her and us etc.), possessive adjectives (my, your, his and her etc.), prepositions (in, at, for etc.), conjunctions (and, so, because etc.) and auxiliary verbs (can, could and didn't etc.) were excluded. The fonts of the words in the cloud were proportional to their frequencies at the interviews. Ethical considerations Research Ethics approval was obtained from the Turkish Ministry of Health COVID-19 Research Evaluation Committee and the medical ethics committee of a university in the western part of Turkey (E-60116787-020-113991). The pregnant women who were accessed were informed about the aim of the study, voicerecording of interviews and confidentiality of obtained data. They were assured that participation in the study was voluntary and that they could leave the study whenever they wanted. The women who agreed to participate in the study were informed about the study again and their oral consent was obtained after voice recordings started. Voice recordings and transcriptions were kept on password-protected computer files. Participants' demographic characteristics The mean age of the participants was 29.31 ±3.82 years. The mean gestational week of pregnancy at the time of interview was 23.43 ±8.1 and 11 participants were in their first pregnancy. Twelve participants had not had COVID-19 before. Twelve participants reported having vaccine hesitancy and seven participants had received a COVID-19 vaccine before they became pregnant ( Table 1 ). Opinions on COVID-19 vaccines The opinions of the participants on COVID-19 vaccines were analysed with a word cloud. The most frequently used word in the documents was vaccine (214 times), followed by know (149 times Three main themes emerged regarding the participants' opinions on COVID-19 vaccines: fear, security/insecurity and social support. Fear All of the women included in the study reported that they were afraid of contracting COVID-19 or receiving a vaccine for this infection. The pregnant women afraid of getting COVID-19 did not experience vaccine hesitancy and most of them received the first or booster dose during pregnancy. Their fears about COVID-19 included having miscarriages, having to stay in the intensive care unit and dying due to COVID-19. They were also worried that their babies may have to stay in the intensive care unit and may die. The source of their fears was the news on the mass media. Some of the pregnant women afraid of COVID-19 vaccines experienced vaccine hesitancy before pregnancy. Most of the pregnant women afraid of the vaccines reported that they did not have their booster doses or were indecisive about them. They were worried about side-effects of the vaccines on their babies and complications such as miscarriages and preterm birth. Social media, the mass media, explanations by health professionals and negative narrations from family members and friends played a role in their fears: Babies born to women with COVID-19 both need more ventilation support and have many negative conditions such as low birth weight. Therefore, I believe pregnant women should be vaccinated against COVID-19. (P-3). I've heard from the news… Pregnant women who do not receive a COVID-19 vaccine may die or their babies may die. I don't want to leave my baby behind or I don't want to lose my baby. This can be deeply saddening. Therefore, I felt afraid and decided to get the vaccine. (P-11). I know many people vaccinated against COVID-19. 
Some pregnant women vaccinated against the virus did not have any sideeffects. However, I've heard that others have hypertension, heart attack, diabetes and a preterm baby or die. One can feel scared when she/he has heard all of it. (P-1). Some of the pregnant women who had not had their booster vaccine doses before the interview did not want to make a decision on behalf of their babies and were afraid of feeling guilty about their late decisions: I can make my own decisions and take responsibility for their outcomes. However, it is very cruel to tell an individual 'I made a decision on behalf of you and that's why you have this condition'. Even if she/he is my baby, I don't have the right to cause her/him to suffer or experience something bad. I feel guilty and I can even question my motherhood. (P-2). Security/Insecurity The pregnant women not experiencing vaccine hesitancy reported having the feeling of security about COVID-19 vaccines since they protect against the disease. Recommendations made by health professionals and other pregnant women and posts shared by pregnant actresses getting a vaccine played an important role in their feeling of security: I asked other pregnant women and doctors about the vaccines. Everybody advised me to get a vaccine and I got vaccinated. (P-11). I saw some pregnant actresses getting vaccinated. One can easily empathise and think I can also get the vaccine if they get vaccinated when they are pregnant. This certainly creates a feeling of security. (P-16). All the pregnant women experiencing vaccine hesitancy admitted that they had the feeling of insecurity about COVID-19 vaccines. The reasons for this insecurity were a disbelief about the protective effect of the vaccines, a belief that experiments about the vaccines were conducted too quickly and may not have been completed, discontinuation of some drugs used for the treatment of COVID-19 and inconsistent explanations about the vaccines: The reason why I lost my confidence in the vaccines are that there were many people getting a COVID-19 vaccine and then contracting the disease. I even knew people dying from COVID-19 despite getting a vaccine. (P-8). I had COVID-19 when the disease first appeared. The drug given to me at that time has been forbidden in our country now, but I took that drug when I was ill. Therefore, I believe that results of the experiments about the vaccines are not very clear. In my opinion, it is not very sensible to get the vaccines during pregnancy. (P-9). I was given two doses of a COVID-19 vaccine, but explanations made about the vaccines created a sense of insecurity about them. Therefore, I don't think I will get the third dose of the vaccine. (P-4). Social support Half of the pregnant women not experiencing vaccine hesitancy said that strong support from their family and friends was effective in their decisions about vaccination against COVID-19. They commented that they did not feel lonely when they received support from their families and friends: My husband's advice to get the vaccine influenced me. I was actually planning to get the vaccine later. With his encouragement, I got the vaccine earlier than I planned. (P-3). I asked my friends about the vaccines. They said the vaccines were protective and did not have any negative effects. I talked to my husband. He said I could get a vaccine if I liked and added he would also receive a vaccine. My husband and my friends promised to support me. They assured me that I shouldn't feel guilty if something bad happened. 
I received the vaccine thanks to their great support. (P-15). Some pregnant women reported that the social media sites they utilised were effective in their decision to receive the vaccine: The women following an Instagram account for pregnant women posted that they got the vaccine, so I decided to get vaccinated . (P-13). Discussion In the present study, the word cloud analysis revealed that the most frequent words were vaccine, baby, vaccinated, know, pregnant, think, people, COVID-19, get and fear. The finding that "baby", "pregnancy" and "fear" were the second most frequently uttered words following "vaccine" showed that the pregnant women's opinions about COVID-19 vaccines focused on their babies and fears about the vaccines. Several prior studies have also demonstrated that pregnant women's fears about COVID-19 vaccines were mainly related to their babies ( Anderson et al., 2021 ;Gencer et al., 2021 ). In the current study, while fear was found to encourage some of the pregnant women to receive a COVID-19 vaccine, it discouraged others from being vaccinated. It has been stated in the literature that COVID-19 and resultant complications can lead to fears in pregnant women ( Naghizadeh and Mirghafourvand, 2021 ;Onchonga et al., 2021 ;Ralph et al., 2021 ;Sutton et al., 2021 ). The pregnant women afraid of having COVID-19 and resultant complications were not observed to experience vaccine hesitancy in our study. In the present study, the pregnant women who had fears about side-effects of COVID-19 vaccines on babies, miscarriages and preterm birth were found to experience vaccine hesitancy. Consistent with this finding, several studies have shown that pregnant women are worried about effects of COVID-19 vaccines on fetal development, miscarriages and fetal death ( Anderson et al., 2021 ;Duarte et al., 2021 ;Ralph et al., 2021 ;Sutton et al., 2021 ). Unlike prior studies, the present study showed that the pregnant women were afraid of feeling guilty if the vaccines had side-effects on their babies. The women avoided receiving the vaccines in order not to experience the feeling of guilt. In the present study, fears about COVID-19 vaccines were found to have been influenced by social media, the mass media, negative criticisms expressed by family and friends and comments made by health professionals. Similar studies also revealed that social media ( Bandeu et al., 2021 ;Gencer et al., 2021 ;Luo et al., 2021 ), conspiracy beliefs ( Scrima et al., 2022 ), comments passed by health workers and television programmes ( Nomura et al., 2021 ) had an impact on opinions on COVID-19 vaccines. The reason why social media causes fears about COVID-19 vaccines is that antivaccine rhetoric came to the fore on social media ( Luo et al., 2021 ). Gencer et al. (2021) also discovered that following Internet forums some pregnant women had vaccine hesitancy. It is reported in the literature that confidence in the effectiveness of the vaccines and reliable information about the vaccines play a role in acceptance of the vaccines among pregnant women ( Anderson et al., 2021 ;Januszek et al., 2021 ;Sutton et al., 2021 ). However, different sources were found to offer conflicting information ( Kumari et al., 2021 ). The sources of vaccine hesitancy were reported to be a relatively short time of vaccine development ( Trent et al., 2021 ), doubts about vaccine protection ( Dodd et al., 2021 ) and explanations made by health professionals ( Nomura et al., 2021 ). 
In the present study, the factors creating the feeling of security/insecurity about the vaccines were the processes of vaccine development and comments made by health professionals, especially midwives. Unlike previous studies, the present study revealed that debates over some drugs used for the treatment of COVID-19 at the beginning of the pandemic and discontinuation of these drugs in the later stages of the pandemic created vaccine hesitancy in the pregnant women. The women commented that uncertainty about the drugs used to treat COVID-19 could also be true for the vaccines against the disease. The current study also showed that social support had a positive influence on the pregnant women's opinions of COVID-19 vaccines and encouraged them to receive the vaccines. However, several studies have demonstrated that family and friends can play a role in hesitancy in routinely administered vaccines during pregnancy ( Andre et al., 2019 ;Urrunaga-Pastor et al., 2021 ). Conclusion Fears greatly affected pregnant women's opinions on COVID-19 vaccines. Although the fear of vaccines created vaccine hesitancy during pregnancy, the fear of contracting COVID-19 led to a positive attitude to the vaccines. Confidence in the vaccines and social support had a positive effect on the decision to receive a vaccine during pregnancy. Offering pregnant women information about COVID-19 and vaccines for the disease are important to increase the rate of vaccinations in pregnant women. Strengthening support mechanisms will eliminate the negative feelings of fear, guilt and regret, create a feeling of security about the vaccines and improve the rate of vaccinations in pregnant women. Ethical Approval Ethical approval was obtained from Covid-19 Research Evaluation Committee of the Turkish Ministry of Health and the medical ethics committee of a university in the western part of Turkey (E-60116787-020-113991). Funding Sources This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. Declaration of interests The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
2022-08-18T13:12:39.209Z
2022-08-01T00:00:00.000
{ "year": 2022, "sha1": "6ffd63e6be56c80004ba108f96e30c8b85012a72", "oa_license": null, "oa_url": "https://doi.org/10.1016/j.midw.2022.103459", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "610c980bcfbf5fdf9d580f06e9088efefc2687b9", "s2fieldsofstudy": [ "Medicine", "Political Science" ], "extfieldsofstudy": [ "Medicine" ] }
246210236
pes2o/s2orc
v3-fos-license
Oxygen-enhanced extremely metal-poor DLAs: A signpost of the first stars? We present precise abundance determinations of two near-pristine damped Ly$\alpha$ systems (DLAs) to assess the nature of the [O/Fe] ratio at [Fe/H]<-3 (i.e.<1/1000 of the solar metallicity). Prior observations indicate that the [O/Fe] ratio is consistent with a constant value, [O/Fe] ~ +0.4, when -3<[Fe/H]<-2, but this ratio may increase when [Fe/H]<-3. In this paper, we test this picture by reporting new, high-precision [O/Fe] abundances in two of the most metal-poor DLAs currently known. We derive values of [O/Fe] = +0.50 +/- 0.10 and [O/Fe] = +0.62 +/- 0.05 for these two z ~ 3 near-pristine gas clouds. These results strengthen the idea that the [O/Fe] abundances of the most metal-poor DLAs are elevated compared to DLAs with [Fe/H]>-3. We compare the observed abundance pattern of the latter system to the nucleosynthetic yields of Population III supernovae (SNe), and find that the enrichment can be described by a (19-25) M$_{\odot}$ Population III SN that underwent a (0.9-2.4)$\times 10^{51}$ erg explosion. These high-precision measurements showcase the behaviour of [O/Fe] in the most metal-poor environments. Future high-precision measurements in new systems will contribute to a firm detection of the relationship between [O/Fe] and [Fe/H]. These data will reveal whether we are witnessing a chemical signature of enrichment from Population III stars and allow us to rule out contamination from Population II stars. INTRODUCTION The first stars in the Universe are responsible for producing the first chemical elements heavier than lithium. These elements -known as metals -irrevocably changed the process of all subsequent star formation and mark the onset of complex chemical evolution within our Universe. Since no metal-free stars have been detected thus far, we know very little about their properties (e.g. their mass distribution) and the relative quantities of the metals that they produced. When the first stars ended their lives, some as supernovae (SNe), they released the first metals into their surrounding environment. The stars that formed in the wake of these (Population III) SNe were born with the chemical fingerprint of the first stars. By studying the chemistry of these relic objects, we can investigate the metals produced by the first stars and, ultimately, trace the evolution of metals across cosmic time. Historically, the fingerprints of the first stars have been studied in the atmospheres of low mass, Population II stars that are still alive today (e.g. Cayrel et al. 2004;Frebel et al. 2005;Aoki et al. 2006;Frebel et al. 2015;Ishigaki et al. 2018;Ezzeddine et al. 2019); the composition of the stellar atmosphere is studied in absorption against the light of the star. This process of stellar archaeology, along with simulations of stellar evolution, allows us to infer the elements produced by the first SNe and, subsequently, infer their properties (such as mass, rotation rate, explosion energy, and even the geometry of stellar outflows; Woosley & Weaver 1995;Chieffi & Limongi 2004;Meynet et al. 2006;Tominaga et al. 2007;Ekström et al. 2008;Heger & Woosley 2010;Limongi & Chieffi 2012). Extragalactic gas, often seen as absorption along the line-of-sight towards unrelated background quasars, offers a complementary opportunity to study chemical evolution and the first stars (Pettini et al. 2008;Penprase et al. 2010;Becker et al. 2011). 
The extragalactic gas clouds that have been studied in absorption to date cover a broad range of metallicity, which appears to increase over time (Rafelski et al. 2012(Rafelski et al. , 2014Jorgenson et al. 2013;Lehner et al. 2016;Quiret et al. 2016;Lehner et al. 2019). Those whose relative iron abundance is 1/1000 of the solar value (i.e. [Fe/H] < −3.0) 1 are classified as extremely metal-poor (EMP). These environments have necessarily experienced minimal processing through stars and are therefore an ideal environment to search for the chemical signature of the first stars. Among the least polluted environments currently known, there are three absorption line systems at z ∼ 3 − 4 that appear to be entirely untouched by the process of star formation, with metallicity limits of [M/H] −4.0 (Fumagalli et al. 2011;Robert et al. 2019); all three are Lyman limit systems (LLSs) whose neutral hydrogen column density is 16.2 < log 10 N (H i)/cm −2 < 19.0. These pristine LLSs are a rarity. It is more common to detect absorption line systems that are, at least, minimally enriched with metals. Crighton et al. (2016) reported the detection of a LLS at z ≈ 3.5 with a metal abundance Z/Z = 10 −3.4±0.26 whose [C/Si] abundance is consistent with enrichment by either a Population III or Population II star. In order to distinguish between these scenarios, additional metal abundance determinations are required. Distinguishing between the gaseous systems enriched by Population III stars and later stellar populations would allow us to trace the metals produced by the first stars and determine the typical Population III properties. Furthermore, such an investigation will reveal the timescale over which these gas clouds are enriched by subsequent stellar populations. A prime environment to disentangle these chemical signatures are damped Lyα systems (DLAs; log 10 N (H i)/cm −2 > 20.3; see Wolfe et al. 2005 for a review). Indeed, the most metal-poor DLAs may have been exclusively enriched by the first generation of metal-free stars (Erni et al. 2006;Pettini et al. 2008;Penprase et al. 2010;Cooke et al. 2017;Welsh et al. 2019). These high H i column density gas clouds are self-shielded from external radiation. Thus, the constituent metals reside in a single, dominant, ionization state. This negates the need for ionization corrections and leads to reliable gas-phase abundance determinations. These systems are most easily studied in the redshift interval 2 < z < 3 when the strongest UV metal absorption features are redshifted into the optical wavelength range. Only the most abundant elements are typically observed in EMP DLAs, including the α-capture elements (C, O, Si, S), some odd atomic number elements (N, Al), and some iron-peak elements (usually, only Fe). Given that these elements trace various nucleosynthetic pathways, these abundant elements are sufficient to understand the properties of the stars that are responsible for the enrichment of EMP DLAs, and tease out the potential fingerprints of the first stars. Based on the chemical abundances of EMP stars, we have uncovered some signatures of the first stars, including the enhancement of the lighter atomic number elements relative to the heavier atomic number elements. For example, the observed enhancement of carbon relative to iron in EMP stars with a normal abundance of neutron capture elements (i.e. 
a 'CEMP-no' star) may indicate that these stars contain the metals produced by Population III stars (see Beers & Christlieb 2005 for a review). Similarly, the apparently constant [O/Fe] ratio, [O/Fe] ~ +0.4, measured among DLAs with [Fe/H] > −3.0 suggests that the relatively higher metallicity DLAs were all enriched by a similar population of stars, drawn from the same initial mass function (IMF). Since oxygen is predominantly sourced from the supernovae of massive stars, the apparent 'inflection' observed in the EMP regime can be explained by three equally exciting possibilities. Relative to the stars that enriched the DLAs with [Fe/H] > −3.0, the stars that enriched the most metal-poor DLAs were either: (1) drawn from an IMF that was more bottom-light; (2) ejected less Fe-peak elements; or (3) released less energy during the explosion that ended their life. All three of these alternatives are signatures of enrichment by a generation of metal-free stars (e.g. Heger & Woosley 2010). However, the errors associated with the currently available data are too large to confirm this trend. In this paper, we present the detailed chemical abundances of two chemically near-pristine DLAs to study the behaviour of [O/Fe] at the lowest metallicities. These DLAs are found along the line-of-sight to the quasars SDSS J095542.12+411655.3 (hereafter J0955+4116) and SDSS J100151.38+034333.9 (hereafter J1001+0343). Previous observations of these quasars have shown that these two gas clouds are among the most metal-poor DLAs currently known. These gas clouds are therefore ideally placed to assess the [O/Fe] inflection in near-pristine environments. This paper is organised as follows. Section 2 describes our observations and data reduction. In Section 3, we present our data and determine the chemical composition of the two DLAs. We discuss the chemical enrichment histories of these systems in Section 4, before drawing overall conclusions and suggesting future work in Section 5. OBSERVATIONS The data analysed in this paper are either the first high-resolution observations of the DLA (as is the case for J0955+4116), or we have obtained additional high-resolution observations that target previously unobserved metal lines (as is the case for J1001+0343). The DLA along the line of sight to the m r ≈ 19.38 quasar, J0955+4116, was identified by Penprase et al. (2010) as an EMP DLA, based on observations with the Keck Echellette Spectrograph and Imager. We then observed this quasar using the High Resolution Echelle Spectrometer (HIRES; Vogt et al. 1994) on the Keck I telescope. We utilised the C1 (7.0 × 0.861 arcsec slit) and C5 (7.0 × 1.148 arcsec slit) deckers, resulting in spectral resolutions of 49 000 and 37 000, respectively. These observations consist of 9 × 3600 s exposures using the C1 setup and 4 × 3600 s exposures using the C5 setup. Prior observations of the m r ≈ 17.7 quasar, J1001+0343, using the Ultraviolet and Visual Echelle Spectrograph (UVES; Dekker et al. 2000) at the European Southern Observatory (ESO) Very Large Telescope (VLT) revealed that the intervening DLA at z abs ≈ 3.078 is one of the least polluted gas reservoirs currently known. The 9.3 hours of VLT/UVES data presented in Cooke et al. (2011b) covered the wavelength range 3282 − 6652 Å. Thus, the iron abundance was determined from observations of the Fe ii λ1608 line. We have secured further observations that focus on red wavelengths and target the stronger Fe ii λ2344 and λ2382 features of this DLA.
The new data on J1001+0343 were collected with UVES (R 40 000) throughout the observing period P106 and P108 spanning the wavelength range 3756 − 4985Å and 6705 − 10429Å using a 0.8 arcsec slit width. We acquired 8×3000 s exposures on target using 2×2 binning in slow readout mode. A summary of our observations can be found in Table 1. Data reduction The HIRES data were reduced with the makee reduction pipeline while the ESO data were reduced with the EsoRex reduction pipeline. Both pipelines include the standard reduction steps of subtracting the detector bias, locating and tracing the echelle orders, flat-fielding, sky subtraction, optimally extracting the 1D spectrum, and performing a wavelength calibration. The data were converted to a vacuum and heliocentric reference frame. Finally, we combined the individual exposures of each DLA using uves popler 2 . This corrects for the blaze profile, and allowed us to manually mask cosmic rays and minor defects from the combined spectrum. When combining these data we adopt a pixel sampling of 2.5 km s −1 . Due to the different resolutions of the C1 and C5 HIRES deckers, we separately combine and analyse the data collected using each setup. For the regions of the J1001+0343 spectrum ∼ 9000Å, that are imprinted with absorption features due to atmospheric H 2 O, we also perform a telluric correction with reference to a telluric standard star. We test the robustness of this correction by also analysing the extracted spectra of the individual exposures (as discussed further in Section 3.2). ANALYSIS Using the Absorption LIne Software (alis) package 3which uses a χ-squared minimisation procedure to find the model parameters that best describe the input data -we simultaneously analyse the full complement of high S/N and high spectral resolution data currently available for each DLA. We model the absorption lines with a Voigt profile, which consists of three free parameters: a column density, a redshift, and a line broadening. We assume that all lines of comparable ionization level have the same redshift, and any absorption lines that are produced by the same ion all have the same column density and total broadening. The total broadening of the lines includes a contribution from both turbulent and thermal broadening. The turbulent broadening is assumed to be the same for all absorption features, while the thermal broadening depends inversely on the square root of the ion mass; thus, heavy elements (e.g. Fe) will exhibit absorption profiles that are intrinsically narrower than the profiles of lighter elements, (e.g. C). There is an additional contribution to the line broadening due to the instrument. For the HIRES and UVES data, the nominal instrument resolutions are v FWHM = 6.28 km s −1 (HIRES C1), v FWHM = 8.33 km s −1 (HIRES C5), and v FWHM = 7.3 km s −1 (UVES). Finally, we note that we simultaneously fit the absorption and quasar continuum of the data. We model the continuum around every absorption line as a low-order Legendre polynomial (typically of order 3). We assume that the zero-levels of the sky-subtracted UVES and HIRES data do not depart from zero 4 . In the following sections we discuss the profile fitting for each DLA in turn. J0955+4116 J0955+4116 is best modelled with two gaseous components at z abs = 3.279908 ± 0.000002 and z abs = 3.27996 ± 0.00001 (∆v = 4 ± 1 km s −1 ) for all singly ionized species (except Fe ii), and just the former component for neutral species (i.e. O i). 
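To make the line-broadening model above concrete, the short sketch below combines the turbulent and thermal contributions to the Doppler parameter in quadrature (b_total^2 = b_turb^2 + 2kT/m), using the temperature and turbulent broadening adopted below for the neutral component of the DLA towards J0955+4116; the instrumental broadening is handled separately as a convolution and is not included. This is only an illustrative calculation, not the alis implementation.

import math

K_B = 1.380649e-23    # Boltzmann constant [J/K]
AMU = 1.66053907e-27  # atomic mass unit [kg]

def total_b(b_turb_kms, temperature_k, ion_mass_amu):
    # Combine turbulent and thermal Doppler broadening in quadrature;
    # the thermal term scales as 1/sqrt(ion mass), so heavy ions (Fe)
    # show intrinsically narrower profiles than light ones (O).
    b_therm_kms = math.sqrt(2.0 * K_B * temperature_k / (ion_mass_amu * AMU)) / 1.0e3
    return math.sqrt(b_turb_kms ** 2 + b_therm_kms ** 2)

for ion, mass in [("O I", 16.0), ("Fe II", 55.8)]:
    print(ion, round(total_b(3.3, 1.0e4, mass), 2), "km/s")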
We assume the temperature is T = 1 × 10 4 K (a value that is typical for a metal-poor DLA; see Cooke et al. 2015, Welsh et al. 2020, Noterdaeme et al. 2021) and find that the turbulent components are b = 3.3 ± 0.2 km s −1 and b = 14.2 ± 1.5 km s −1 , respectively. The data, along with the best-fitting model, are presented in Figure 1, while the corresponding column densities are listed in Table 2. These results are unchanged when the assumed temperature varies between T ∼ (0.5 − 1.2) × 10 4 K, the range of values that have been measured in other metal-poor DLAs (Cooke et al. 2015). We find the data are best modelled when we allow for small velocity offsets between the features observed using the C1 and C5 deckers. These are found to be 1.5 km s −1 and account for potential differences between the wavelength calibrations of the data taken with the two HIRES setups. Note, we only use the neutral component (identified in the O i absorption) to infer the relative chemical abundances of this gas cloud; the component at z abs = 3.27996 likely arises from ionized gas, as indicated by the lack of concurrent absorption from any neutral species. The neutral component at z abs = 3.279908 constitutes ∼ 70% of the total absorption in Si ii. We find that [Fe/H] = −2.95 ± 0.10 while [O/Fe] = +0.50 ± 0.10. This places the DLA towards J0955+4116 at the cusp of the EMP regime where the plateau in [O/Fe] may change. Here, and subsequently, the errors are given by the square root of the diagonal term of the covariance matrix calculated by alis at the end of the fitting procedure. We note that, while the individual Fe ii features may be relatively weak, the simultaneous analysis of the full complement of data results in a 4.8σ detection of Fe. J1001+0343 J1001+0343 is best modelled with one component at z abs = 3.078408 ± 0.000006, with a turbulence b = 6.3 ± 0.4 km s −1 and temperature T = (1.0 ± 0.6) × 10 4 K. The data, along with the best-fitting model, are presented in Figure 2, while the corresponding column densities are listed in Table 3. We find that [Fe/H] = −3.25 ± 0.07 and [O/Fe] = +0.62 ± 0.05. All reported column densities are consistent with the previous determinations by Cooke et al. (2011b), but with a reduced error; in particular, the new data reported here have allowed the precision on the [O/Fe] measurement to be improved by a factor of three, from 0.15 to 0.05. To ensure the errors associated with these abundance determinations are robust, we have performed some additional checks. First, near the absorption features of interest, we have ensured that the fluctuations in the continuum are well described by the error spectrum in this region. This ensures we are not underestimating the error associated with the data. Second, we have refit the O i and Fe ii features using a Monte Carlo approach, described in Fossati et al. (2019), and converged on abundance determinations that are consistent within 1σ. When analysing the DLA towards J1001+0343, we adopt two approaches for modelling the Fe ii absorption features. The Fe ii λ2344 and λ2382 features fall in regions of the spectrum that are impacted by telluric absorption; the DLA absorption features are therefore partially blended with telluric features to varying degrees of severity. Prior to combining the individual DLA exposures, we remove these features using the spectrum of a telluric standard star.
The resulting data near the Fe ii λ2382 line, after performing this correction, are shown in the right panel of the third row of Figure 2. To ensure that the telluric correction has not introduced any artefacts in the data, we simultaneously fit the standard star spectrum and all of the individual quasar expo-sures (uncorrected for telluric absorption). The results of this fitting procedure are shown in Figure 3. From this figure it is clear that, while the Fe ii λ2382 feature is partially blended with a telluric absorption line, the range of dates used to observe this target results in a sequential shift in the position of the telluric feature relative to the Fe line of interest. In the top right panel of Figure 3, the telluric absorption is ∼ +5 km s −1 from the Fe ii λ2382 line center (as indicated by the blue tick mark). In the bottom right panel, the telluric absorption is ∼ −10 km s −1 from the Fe ii λ2382 line center. When jointly analysing these data, this shift allows us to capture an accurate profile of both the telluric and Fe ii features. Note, the centroid of the Fe ii λ2382 line is tied to the other DLA absorption features, while the centroid of the telluric feature is fixed from other telluric lines in the standard star spectrum. Using this approach we find a total Fe ii column density consistent with our analysis of the corrected combined spectrum. The value we report in this paper is based on the fits to the individual exposures. These new data confirm that the DLA towards J1001+0343 is one of the most iron-poor DLAs currently known. The new found precision afforded for the iron column density allow us to conclude that [O/Fe] is significantly elevated in this DLA compared to the plateau observed at higher metallicity. Before discussing the origin of this elevation, we perform some simulations to support our analysis. Mock models To further test if the DLA towards J1001+0343 exhibits an elevated [O/Fe] ratio, we simulated the O i and Fe ii absorption line profiles that would be expected given different intrinsic [O/Fe] abundance ratios. To achieve this, we take the best fit cloud model from our modelling procedure and generate synthetic model profiles varying the column density of either O i or Fe ii. The results of this test are shown in Figure 4. The top row shows the observed UVES data centered on the O i λ1039 (left) and O i λ1302 (right) absorption features. Overplotted on these data are the model profiles that would be expected if the underlying [O/Fe] ratio were +0.4, +0.6, and +0.8. To generate these profiles, we assume that the Fe ii column density is fixed (at the value given by our best fit model) and then vary the column density of O i accordingly. In the bottom row we show the UVES data centered on the Fe ii λ1608 at the value given by our best fit model). Below these data we show the residual fits between the model and the data. (Heger & Woosley 2010). However, before we consider the abundance pattern of this DLA in relation to the yields of Population III SNe, we first examine possible origins of this elevation. Origin of elevation Dust depletion is expected to be minimal in VMP DLAs (Pettini et al. 1997;Akerman et al. 2005;Vladilo et al. 2011;Rafelski et al. 2014). However, if the depletion of metals onto dust grains is unaccounted for, it will lead to artificially low metal abundance determinations for refractory elements. It is therefore useful to rule out its impact in the DLAs presented here. 
Depletion studies compare the relative abundances of elements in DLAs to the expected nucleosynthetic ratio which can be inferred from the abundances of stars of similar metallicity. O is minimally depleted onto dust grains (Spitzer & Jenkins 1975;Jenkins 2009;Jenkins & Wallerstein 2017). Both Si and Fe are refractory elements, and are partially depleted onto dust grains but at different rates. As shown in Figure 5, both metal-poor halo stars and VMP DLAs exhibit an identical evolution of the [O/Fe] ratio (see also Figure 12 of Cooke et al. 2011b). Given this agreement, we therefore expect dust depletion to be minimal in DLAs that have a metallicity We can also use the [Si/Fe] abundance ratio to explore the possibility of dust depletion. The most metalpoor stars and the most metal-poor DLAs appear to have a metallicity independent evolution of [Si/Fe] when [Fe/H] < −2. For stars, the plateau occurs at [Si/Fe] = +0.37 ± 0.15 (Cayrel et al. 2004). While for DLAs, the plateau occurs at [Si/Fe] = +0.32 ± 0.09 (Wolfe et al. 2005;Cooke et al. 2011b). The [Si/Fe] of both J0955+4116 and J1001+0343 are consistent with the plateau seen in metal-poor DLAs (see Figure 6). We therefore do not expect dust depletion to be the source of the elevated [O/Fe] abundance ratio in the EMP regime. Volatile elements, like S and Zn, are less readily depleted onto dust grains than Si and Fe (Savage & Sembach 1996;Jenkins 2009;Jenkins & Wallerstein 2017), however these elements are not currently accessible for the EMP DLAs studied here. The advent of the next generation of 30 − 40 m telescopes will make these abundance determinations possible for EMP DLAs. We also find it unlikely that the cloud model introduces a systematic [O/Fe] enhancement; the O i and Fe ii column densities of both DLAs are derived from at least one weak absorption line. Furthermore, we note that ionization effects cannot explain this behaviour at low metallicity; the presence of an unresolved component of ionized gas containing Fe ii would not affect the O i column density, but would lead to an overestimate the Fe ii column density. Thus, accounting for ionized gas would only act to further increase the [O/Fe] ratio. We therefore conclude that the elevated oxygen to iron abundance ratio observed for the EMP DLA towards J1001+0343 is intrinsic to the DLA. Stochastic enrichment In the previous section, we concluded that the observed [O/Fe] ratio is intrinsic to the DLA towards J1001+0343. We now explore the possibility that this DLA has been enriched by the first generation of stars. Specifically, we compare the observed abundance pattern of this DLA to those predicted by a stochastic chemical enrichment model developed in previous work (Welsh et al. 2019). This model describes the underlying mass distribution of the enriching stellar population using a power-law: ξ(M ) = kM −α , where k is a multiplicative constant that is set by the number of enriching stars that form between a given mass range: Since the first stars are thought to form in small multiples, this underlying mass distribution is necessarily stochastically sampled. We utilise the yields from simulations of stellar evolution to construct the expected distribution of chemical abundances given an underlying IMF model. These distributions can then be used to assess the likelihood of the observed DLA abundances given an enrichment model. In our analysis, we use the relative abundances of stellar layers (f He ). 
The explosion energy is a measure of the final kinetic energy of the ejecta at infinity while the mixing between stellar layers is parameterised as a fraction of the helium core size. For further details, see both HW10 and Welsh et al. (2019). While the HW10 yields have been calculated for metal-free stars, they are also representative of EMP Population II CCSNe yields (at least for the elements under consideration in this work); this can be seen by comparison with the Woosley & Weaver (1995) yields of metal-enriched massive stars. As a result, in previous studies of near-pristine absorption line systems, it has been difficult to distinguish between enrichment by Population II and Population III stars. Fortunately, in our analysis, we can take advantage of this degeneracy and consider the HW10 yields to be representative of both Population II and Population III SNe yields. The default enrichment model, described above, contains six free parameters (N , M min , M max , α, E exp , and f He ). We are using three relative abundances to assess the enrichment of the DLA towards J1001+0343. Thus, we cannot simultaneously investigate these six parameters. We can, however, make some simplifications. The underlying IMF of the first stars remains an open question; this is not the case for massive Population II stars. The Population II IMF for stars of mass M 10 M is expected to be well-described by a Salpeter IMF (i.e. α = 2.35 in Equation 1) 5 . Under the assumption of a Salpeter IMF, if the number of enriching stars we derive is large, then this may imply that Population II stars are the dominant enrichment source. Alternatively, if N is low, then it is possible that a pure or washed out Population III signature may still be present in this DLA. We test this idea by using a Markov Chain Monte Carlo (MCMC) likelihood analysis to investigate the number of stars that have enriched this DLA. We explore the entire enrichment model parameter space to find the parameters that best fit our data. The HW10 parameters span (10 − 100) M (with a mass resolution of ∆M 0.1 M ), (0.3 − 10) × 10 51 erg (sampled by 10 values), and (0−0.25) f He (sampled by 14 values) where f He is the fraction of the He core size; in total, there are 16 800 models in this yield suite. We adopt a upper mass limit of 70 M beyond which pulsational pair instability SNe are thought to occur (Woosley 2017). This leaves a grid of 15 792 models to explore. During our analysis, we linearly interpolate between this grid of yields while applying uniform priors on each parameter. The results of this analysis are shown in Figure 7. The most favoured result of this model is that the DLA towards J1001+0343 was enriched by a low number of massive stars which, as argued above, make it consistent with Population III enrichment (but does not rule out Population II yields). Motivated by these findings, we now assess the possible properties of a putative metal-free star that may be responsible for the enrichment of the DLA towards J1001+0343. In this case we assume that the DLA has been enriched by one Population III SN, again utilising the HW10 yields. The results of this analysis are shown in Figure 8. We find that the abundances of this DLA are best modelled by a Population III star with a mass between 19 − 25 M (2σ) and an explosion energy between (0.9 − 2.4) × 10 51 erg (2σ). The degree of mixing between the stellar layers remains unconstrained, but generally favours lower values of the mixing parameter. 
To test how well this model describes our data, we compare the [X/O] ratios supported by this model to those presented in Table 3. This comparison is shown in Fig The Population III star that best models the abundances of the DLA towards J1001+0343 has an explosion energy that is consistent with the value found for a typical metal-poor DLA (Welsh et al. 2019). The results of this analysis are also similar to the inferred enrichment of the most metal-poor DLA currently known; Cooke et al. (2017) find that the abundance pattern of the most metal-poor DLA can be well-modelled by a Population III SN with a progenitor mass M = 20.5 M . The DLA analysed by Cooke et al. (2017) was preferentially modelled with a somewhat higher explosion energy than that reported here, but still consistent within 2σ. This preference towards higher energy explosions (i.e. hypernovae) is also inferred from the analysis of some EMP stars (e.g. Grimmett et al. 2018;Ishigaki et al. 2018). However, the HW10 analysis of the Cayrel et al. (2004) sample of EMP stars favoured models with an explosion energy between (0.6 − 1.2) × 10 51 erg. Recent theoretical works have started to favour lower energy supernova explosions for metal-free progenitors (e.g. the simulations performed by Ebinger et al. (2020) suggest a range (0.2 − 1.6) × 10 51 erg). This is in line with the range of explosion energies inferred for the enrichment of the DLA towards J1001+0343. Furthermore, the analysis conducted by Haze Nuñez et al. (2021) has compared the typical (median) abundances of VMP DLAs to the IMF-weighted yields from various simulations; for the abundances considered in this work,the Ebinger et al. , some odd atomic number elements (e.g. N, Al), and select iron-peak elements (e.g. Ni, Cr, Zn). These elements may allow us to further pin down the properties of the enriching stars. An informative probe of the explosion physics is the relative abundance of zinc and iron -we do not detect zinc absorption in this DLA, but this may be possible with the next generation of 30 − 40 m telescopes. Future avenues for ruling out Population II In the previous section we investigated the chemical enrichment of the DLA towards J1001+0343. Under the assumption of a Sapleter IMF, we found that this DLA was best modelled by a low number of enriching stars (consistent with one). We also found that the observed abundances of [C/O], [Si/O], and [Fe/O] can be simultaneously well modelled by the yields of an individual Population III SN. However, given the age of the Universe at redshift z = 3 (∼ 2 Gyr), there is sufficient time (∼ 1.5 Gyr between z = 10 to z = 3) for this DLA to be enriched by Population III stars and subsequent Population II stars. Any putative Population III signature is thought to be washed out soon after the birth of Population II stars ; just a few massive Population II stars are required to wash out a peculiar Population III chemical signature. How- ever, there could be a delay between Population III and Population II star formation. For example, reionization quenching can temporarily suspend star formation in low mass galaxies Wheeler et al. 2015), and this may prolong the time that a Population III signature can be preserved in near-pristine gas. After a period of dormancy, interactions with gaseous streams in the intergalactic medium can help re-ignite star formation in these low mass objects (Wright et al. 2019). 
Interestingly, the chemistry of the most metal-poor DLAs shows an increase in [C/O] with decreasing redshift (see Figure 8 of Welsh et al. 2020).
Unravelling the Population III fingerprint
We have mentioned both EMP stars and EMP DLAs as potential environments to uncover the Population III fingerprint. Both stars and DLAs have their respective advantages. The large sample size afforded when studying stellar relics cannot yet be matched in similar studies of gaseous relics. The potential evolution of [O/Fe] across EMP stars has also been the subject of much discussion (see the review by McWilliam 1997). However, determining this ratio in EMP stars is particularly challenging. There are four approaches to determine the oxygen abundance in stellar atmospheres. The typical method utilises the O i λ777 nm triplet; a transition that requires large non-LTE corrections (Fabbian et al. 2009). The weak O i λ630 nm line is known to form in LTE and, as such, may be more reliable (Asplund 2005). However, the strength of this feature means that it is challenging to detect at low metallicities. Observations of the UV and IR molecular OH features also show systematic offsets relative to the aforementioned [O/H] abundance indicators, although the offsets can be reduced by accounting for 3D hydrodynamical effects (Nissen et al. 2002; García Pérez et al. 2006; Dobrovolskas et al. 2015). Given the difficulty of accurately determining the O abundance in the lowest metallicity stars, we suggest that DLAs are the ideal environment to study the evolution of this element despite the smaller sample size. The comparative analysis of the chemistry of both gaseous and stellar relics offers the opportunity to study early chemical enrichment further. It is well established that some of the most metal-poor Milky Way halo stars show an enhanced [C/Fe] ratio (Beers & Christlieb 2005). This may be a sign of enrichment from the first stars. Or, indeed in some cases, this enhancement may be explained via mass transfer across stars in binary systems (Arentsen et al. 2019). Whether this enhancement is also prevalent across metal-poor DLAs is yet to be seen. Figure 6 highlights that, with current statistics, we cannot discern any concurrent enhancement of the [C/Fe] ratio. Another environment that may offer unique insight into the metals produced by the first stars is the ultra-faint dwarf galaxies (UFDs) that orbit the Milky Way. These UFDs contain some of the most metal-poor stars currently known. The stars in these UFDs may have experienced a different enrichment history to the Milky Way halo stars. If metal-poor DLAs are indeed the antecedents of the UFDs, we might be able to search for a consistent chemical signature in the most metal-poor stars of the UFDs. The DLAs analysed in this work may provide a signpost to some of the most pristine environments in the high redshift universe, and would be an ideal place to search for Population III host galaxies and perhaps even the light from Population III SNe using the forthcoming James Webb Space Telescope. We thank the anonymous referee who provided a prompt and careful review of our paper. We thank M. Fossati for the use of their line-fitting software and we thank A. Skuladottir for helpful discussions surrounding comparisons with stellar populations. This paper is based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere, Chile (VLT program IDs: 083.A-0042(A) and 105.20L3.001), and at the W. M.
Keck Observatory which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. We are also grateful to the staff astronomers at the VLT and Keck Observatory for their assistance with the observations. This work has been supported by Fondazione In this appendix, we present the DLA data used to produce Figure 5. In Table 4 we list the column density of neutral hydrogen, the redshift, and the relative abundances of oxygen and iron.
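For reference, the relative abundances listed in Table 4 follow the standard definition [X/Y] = log10(N_X/N_Y) − log10(n_X/n_Y)_sun. The short sketch below illustrates this conversion; the solar abundance values are indicative (Asplund-style) placeholders and the column densities are hypothetical, not the values adopted in this work.

```python
import numpy as np  # noqa: F401 (kept for consistency with the other sketches)

# Solar photospheric abundances, 12 + log10(n_X/n_H); indicative placeholder values.
SOLAR = {"H": 12.00, "O": 8.69, "Fe": 7.50}

def relative_abundance(logN_x, logN_y, x, y):
    """[X/Y] = log10(N_X/N_Y) - log10(n_X/n_Y)_sun, with column densities
    given as log10(N / cm^-2)."""
    return (logN_x - logN_y) - (SOLAR[x] - SOLAR[y])

# Example: hypothetical column densities along a DLA sight line.
logN_HI, logN_OI, logN_FeII = 20.3, 14.8, 13.9
print("[O/H]  =", round(relative_abundance(logN_OI, logN_HI, "O", "H"), 2))
print("[Fe/H] =", round(relative_abundance(logN_FeII, logN_HI, "Fe", "H"), 2))
print("[O/Fe] =", round(relative_abundance(logN_OI, logN_FeII, "O", "Fe"), 2))
```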
2022-01-24T02:15:47.236Z
2022-01-20T00:00:00.000
{ "year": 2022, "sha1": "8c9c3b161b5432c3c7dc84a2c2f2861a42986c7e", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "8c9c3b161b5432c3c7dc84a2c2f2861a42986c7e", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
244445595
pes2o/s2orc
v3-fos-license
Adherence with direct oral anticoagulants in patients with atrial fibrillation: Trends, risk factors, and outcomes Abstract Background Adherence to direct oral anticoagulants (DOACs) remains a concern among non‐valvular atrial fibrillation (AF) patients. We aimed to assess patterns of adherence with DOACs and examine their association with ischemic stroke and systemic embolism (SE). Methods This retrospective cohort study includes all adult members of Clalit Health Services, the largest healthcare provider in Israel, with newly diagnosed non‐valvular AF between January 2014 and March 2019, who initiated DOACs within 90 days of AF diagnosis and used DOACs exclusively. Adherence was assessed using the proportion of days covered (PDC) over the first year of treatment, and high adherence was defined as PDC ≥80%. Regression models were used to identify predictors of high adherence to DOACs and to examine the association between adherence and stroke or SE. Results Overall 15,255 patients were included in this study. The proportion of highly adherent (PDC ≥80%) DOACs users was around 75% and decreased slightly over the years. On multivariable analyses, the likelihood of high adherence to DOACs increased with age and across higher socioeconomic classes, and was more likely among females, Jews, statins users, and patients with CHA2DS2‐VASc score ≥2. Risk of stroke and SE was lower among highly adherent DOACs users; adjusted HR 0.56 (95% CI, 0.45–0.71), compared to users with PDC <80%. Conclusions Adherence with DOACs is still sub‐optimal among non‐valvular AF patients, resulting in a higher risk of stroke and SE. | INTRODUC TI ON Atrial fibrillation (AF) is a common cardiac rhythm disorder which poses a significant risk for cerebrovascular morbidity and mortality. 1 When used appropriately, oral anticoagulants (OACs) have shown to reduce the incidence of embolic stroke in AF by more than 50%. 2 Vitamin K antagonists (VKAs) have long been considered the main OAC used worldwide. VKAs have several limitations such as high bleeding risk, slow onset of action, the need for frequent monitoring, and numerous drug and diet interactions. Direct oral anticoagulants (DOACs) have been recently introduced and have been gradually replacing VKAs owing to their distinct benefits in terms of efficacy, safety, convenience, more predictable effect and fewer drug and diet interactions. 3 However, their short half-life time and their rapid onset of action require a strict treatment compliance and adherence in order to maintain the desirable antithrombotic effect. [4][5][6] It has been shown that less than half of the patients adhere to OAC treatment over time. 7 There was an expectation that DOACs introduction, given their advantages, would be translated into improved adherence. Unfortunately, adherence to DOACs remains poor in patients with AF, 7-12 and appears to be associated with increased risk of stroke, [11][12] yet real-life data is scant. [7][8][9][10][11][12] Adherence patterns are an integral part of clinical decision making and depend on various factors, either social or clinical. Understanding those patterns and factors is clinically relevant and may be helpful in better resources utilization and health planning. In this study, we use a population-based real-life data to assess patterns and trends of patients' adherence with DOACs treatment. This study also aims to provide specific insight as for the implication of non-adherence to DOACs on the risk of ischemic stroke and systemic embolism (SE). 
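As context for the adherence measure used throughout this study, the sketch below shows one common way to compute the proportion of days covered (PDC) from pharmacy dispensing records over a one-year window. The fill dates and supplies are hypothetical, and the study's exact operational rules (e.g. handling of hospitalisations or drug switching) may differ.

```python
from datetime import date

def proportion_days_covered(fills, start, days=365):
    """Proportion of days covered (PDC) over an observation window.
    `fills` is a list of (fill_date, days_supplied) tuples; overlapping
    supplies are not double-counted because coverage is marked per day."""
    covered = [False] * days
    for fill_date, supply in fills:
        offset = (fill_date - start).days
        for d in range(max(offset, 0), min(offset + supply, days)):
            covered[d] = True
    return sum(covered) / days

# Hypothetical dispensing history: 30- and 90-day supplies with a gap in refills.
start = date(2018, 1, 1)
fills = [(start, 30), (date(2018, 2, 5), 30), (date(2018, 4, 20), 90),
         (date(2018, 8, 1), 90), (date(2018, 11, 10), 60)]
pdc = proportion_days_covered(fills, start)
print(f"PDC = {pdc:.2f}; highly adherent: {pdc >= 0.80}")
```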
| Study outcomes and definition of terms
Adherence to DOACs was assessed using the proportion of days covered (PDC), as recommended by the Pharmacy Quality Alliance and SE (ICD-9; 444.X, 445.X).
| Covariates
Demographic characteristics and data on comorbidities were retrieved from the CHS-computerized database for the calculation of CHA2DS2-VASc score, a widely used risk stratification score for stroke prediction in patients with AF. 16
| Predictors of high adherence to DOACs
Multivariable logistic regression analysis revealed that the likelihood of high adherence to DOACs (PDC ≥80%), during the first year of treatment, increased with age and with increasing socioeconomic classes, and was more likely among females compared to males, among Jews compared to Arabs, among patients treated with statins and among patients with CHA2DS2-VASc score ≥2 compared to those with CHA2DS2-VASc score <2. The likelihood of high adherence was lower in smokers compared to nonsmokers. Compared to rivaroxaban use, the likelihood of high adherence was significantly lower with dabigatran use, whereas no statistically significant difference was observed with apixaban use (Table 3).
| Association between adherence and stroke or SE
The risk of ischemic stroke and SE, after adjustments to CHA2DS2-VASc score, was found to be 44% lower (adjusted HR 0.56, 95% CI 0.45-0.71) among highly adherent DOACs users (PDC ≥80%) compared to DOACs users with PDC <80% (Table 4). Using PDC as an ordinal variable with the lowest category (PDC <40%) serving as reference category, it was shown that increasing PDC up to 80% was not associated with stroke and SE risk reduction, and that only high adherence (PDC ≥80%) was associated with a statistically significant decrease of stroke and SE; adjusted HR 0.59 (95% CI, 0.42-0.83) (Table 4 and Figure 4). A protective effect was also demonstrated when using PDC as a continuous variable, with a 9% decrease (adjusted HR 0.91, 95% CI 0.88-0.95) in the risk of stroke and SE for each 10% increase in PDC (Table 4).
| DISCUSSION
This is one of the few large-scale population-based studies to assess adherence to DOACs. In this study, we found that approximately 75% of incident AF patients treated with DOACs were highly adherent to treatment, namely, were covered at least 80% of the days during the first year of treatment. The proportion of high adherence to DOACs in this study is consistent with the rates described in two recent large cohort studies conducted in France and Canada. 17,18
(Table 4. Multivariable hazard ratios (HRs) for the association between DOACs use adherence, as estimated by PDC in the first year of treatment, and the risk of ischemic stroke and systemic thromboembolism.)
Nevertheless, as much as these results seem encouraging compared to previous studies, [7][8][9] and despite the growing use of DOACs, non-adherence to DOACs is still a concern, with the rate reaching up to 25% over a year period. Moreover, there is a worrisome pro- However, direct comparisons to those studies are difficult because of methodological differences. A notable strength of this study is being a population-based study with a relatively large number of AF patients. The fact that healthcare services in Israel are public and medication copayment is low makes the documentation of medication prescribing and purchases reliable. In addition, relying on these documentations rather than patients' self-reports avoids the potential recall bias.
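For illustration only, an adjusted survival analysis of the kind summarised above (high adherence versus PDC <80%, adjusted for CHA2DS2-VASc score) could be set up as in the following sketch. The cohort here is synthetic, and the lifelines package is an assumed tool rather than the software used by the authors.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 5000

# Synthetic cohort: high-adherence flag (PDC >= 80%), a CHA2DS2-VASc-like score,
# exponential time to stroke/SE, and administrative censoring at 5 years.
high_adherence = rng.binomial(1, 0.75, n)
cha2ds2_vasc = rng.integers(0, 9, n)
hazard = 0.02 * np.exp(0.25 * cha2ds2_vasc - 0.55 * high_adherence)
time_to_event = rng.exponential(1.0 / hazard)
follow_up = np.minimum(time_to_event, 5.0)
event = (time_to_event <= 5.0).astype(int)

df = pd.DataFrame({"time": follow_up, "event": event,
                   "high_adherence": high_adherence, "cha2ds2_vasc": cha2ds2_vasc})

# Cox proportional hazards model; exp(coef) for high_adherence is the adjusted HR.
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()
```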
Nevertheless, our study is subject to some limitations. First, as an observational study in which data is extracted from electronic health records, it is prone to misclassification bias or missing data. Second, we relied on prescriptions and purchases data rather than the actual intake of the drugs by the patients. Thus, there might be an overestimation of adherence rates in cases in which patients stopped taking DOACs even though they had purchased the drugs. In addition, patients could have taken lower doses than purchased, hence being exposed to higher embolic risk. Finally, our study is prone to the healthy adherer effect. 24 In other words, patients who are adherent to DOACs may also adhere to other therapies as well as to healthier lifestyle and to medical preventive services, providing another explanation for the lower risk of embolic complications.
| CONCLUSIONS
Adherence with DOACs treatment is still far from optimal, resulting in a substantially higher risk of embolic complications. More effort should be made in order to increase physicians' and AF patients' awareness of the importance of compliance to DOACs.
ACKNOWLEDGMENTS
Walid Saliba and Zomoroda Abu-Ful conceived and designed the study, did the analysis, and took responsibility for the integrity of the data and the accuracy of the data analysis. All authors had full access to all of the data in the study. Anat Arbel and Walid Saliba drafted the manuscript. All authors critically revised the manuscript for important intellectual content and gave final approval for the version to be published.
CONFLICTS OF INTEREST
The authors declare no conflicts of interest for this article.
ETHICS APPROVAL
The study was approved by the Review Board of the Lady Davis Medical Centre and conducted in accordance with the Declaration of Helsinki.
DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available on request from the corresponding author. The data are not publicly available because of privacy or ethical restrictions.
2021-11-21T16:13:06.011Z
2021-11-18T00:00:00.000
{ "year": 2021, "sha1": "43f92478df73e8c4f508471d6ee7c586b1f84ffe", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/joa3.12656", "oa_status": "GOLD", "pdf_src": "Wiley", "pdf_hash": "aa610b63ca86e05ceb028fabca983985e2d1b3b4", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
252154190
pes2o/s2orc
v3-fos-license
Exopolysaccharides of Bacillus amyloliquefaciens Amy-1 Mitigate Inflammation by Inhibiting ERK1/2 and NF-κB Pathways and Activating p38/Nrf2 Pathway Bacillus amyloliquefaciens is a probiotic for animals. Evidence suggests that diets supplemented with B. amyloliquefaciens can reduce inflammation; however, the underlying mechanism is unclear and requires further exploration. The exopolysaccharides of B. amyloliquefaciens amy-1 displayed hypoglycemic activity previously, suggesting that they are bioactive molecules. In addition, they counteracted the effect of lipopolysaccharide (LPS) on inducing cellular insulin resistance in exploratory tests. Therefore, this study aimed to explore the anti-inflammatory effect and molecular mechanisms of the exopolysaccharide preparation of amy-1 (EPS). Consequently, EPS reduced the expression of proinflammatory factors, the phagocytic activity and oxidative stress of LPS-stimulated THP-1 cells. In animal tests, EPS effectively ameliorated ear inflammation of mice. These data suggested that EPS possess anti-inflammatory activity. A mechanism study revealed that EPS inhibited the nuclear factor-κB pathway, activated the mitogen-activated protein kinase (MAPK) p38, and prohibited the extracellular signal-regulated kinase 1/2, but had no effect on the c-Jun-N-terminal kinase 2 (JNK). EPS also activated the anti-oxidative nuclear factor erythroid 2–related factor 2 (Nrf2) pathway. Evidence suggested that p38, but not JNK, was involved in activating the Nrf2 pathway. Together, these mechanisms reduced the severity of inflammation. These findings support the proposal that exopolysaccharides may play important roles in the anti-inflammatory functions of probiotics. Introduction Inflammation involves a series of defensive reactions, including removal of impaired tissues, and fighting against invading pathogens or harmful factors. However, prolonged or excessive inflammation results in tissue damage and prompts the development of several chronic disorders, such as cancers, type 2 diabetes, and cardiovascular diseases [1]. Probiotics have been suggested to modulate immune reactions and ameliorate inflammatory disorders, such as inflammatory bowel diseases [2,3]. Nevertheless, the mechanisms underlying these effects are not fully understood. Bacillus amyloliquefaciens, a Gram-positive bacterium, has been suggested to be probiotic and prebiotic for animals [4][5][6][7][8], and GRAS (generally recognized as safe) for humans [9]. It is ubiquitously present in food and the environment, often used in food fermentation [8,10], and is present in human gut microbiota [11]. Recently, the evidence suggests that diets supplemented with B. amyloliquefaciens can reduce inflammation. For example, dietary supplementation with B. amyloliquefaciens improved the growth performance and decreased intestinal inflammatory responses in piglets with intra-uterine growth retardation [12]. Likewise, broilers fed with a diet supplemented with B. amyloliquefaciens SC06 showed decreased levels of the pro-inflammatory cytokines interleukine-6 (IL-6) and tumor necrosis factor-α (TNF-α) in the ileum [13]. Moreover, B. amyloliquefaciens administration Anti-Inflammatory Effect of EPS Exopolysaccarides were isolated from the medium of amy-1 culture. The total carbohydrates in this preparation (i.e., EPS) occupied 95.65 ± 0.68% of the total mass, while proteins, lipids, and polyphenols were undetectable in EPS. The cytotoxicity of EPS to the model cell line THP-1 was examined. 
Figure 1A reveals that 0.1-200 µg/mL EPS did not significantly affect the viability of THP-1 cells, suggesting that EPS is not toxic to the cell line at concentrations between 0.1-200 µg/mL. However, the addition of LPS did affect cell growth. As shown in Figure 1B, cotreatment of THP-1 cells with 1 µg/mL LPS and 20-200 µg/mL EPS caused a 10-20% decrease in the number of viable cells. Theoretically, as a control in the functional study of EPS, a strain of B. amyloliquefaciens that does not produce exopolysaccharides should be cultured simultaneously, and the resulting medium being subjected to the same isolation protocol used for EPS preparation. However, we do not know any strain of B. amyloliquefaciens that does not produce exopolysaccharides. Thus, the control for EPS was prepared by incubating the same volume of sterile medium at 37 °C for 48 h, mimicking the condition of amy-1 cultivation except that the medium was not inoculated with any micro-organisms. The medium was then subjected to an isolation protocol, the same as that used for preparing EPS. The resulting powders, abbreviated as PPT (please see Section 4.3), with total carbohydrates and total proteins being 52.00 ± 3.10% and 10.95 ± 0.64% of the total mass, respectively, were used as a control of EPS in experiments. The cytotoxicity of PPT was also analyzed. Figure 1C shows that 0.1-100 µg/mL PPT did not significantly affect the viability of THP-1 cells, but 150 and 200 µg/mL of PPT exhibited 10-20% inhibition on cell growth. In Figure 1D, the combined effects of 1 µg/mL LPS and 10-200 µg/mL PPT were analyzed. Only LPS plus 200 µg/mL PPT displayed approximately 10% suppression on cell growth. Therefore, the effects of EPS and PPT on cell growth are different. The anti-inflammatory effect of EPS was tested in LPS-treated THP-1 cells. Figure 2A shows that LPS increased the expression of inducible nitric oxide synthase (iNOS; Lane 2), whereas the presence of 10, 50, or 100 μg/mL EPS obviously inhibited this effect of LPS in a dose-dependent manner (Lanes 3, 4, and 5). Similarly, LPS increased the expression of cyclooxygenase-2 (COX-2; Figure 2B, Lane 2), and this reaction was also dose-dependently suppressed by 10-100 μg/mL EPS (Figure 2B, Lanes 3, 4, and 5). The effect of PPT on iNOS and COX-2 expression was also examined. Consequently, unlike EPS, 10-100 μg/mL PPT did not suppress LPS-induced iNOS (Figure 2C, Lanes 3, 4, and 5 vs. Lane 2) and COX-2 expression (Figure 2D, Lanes 3, 4, and 5 vs. Lane 2), suggesting that PPT does not have anti-inflammatory activity. Furthermore, LPS increased the production of TNF-α (Figure 2E, Group 2) and IL-6 (Figure 2F, Group 2) from THP-1 cells; yet the addition of EPS significantly decreased TNF-α (Figure 2E, Group 3) and IL-6 (Figure 2F, Group 3) production. Subsequently, the phagocytic activity of LPS-, LPS + EPS-, or EPS-treated THP-1 cells was analyzed. LPS increased the phagocytic activity of THP-1 cells as expected (Figure 2G, Group 2 vs. Group 1), whereas EPS alone increased the phagocytic activity of the cells even further than LPS did (Figure 2G, Group 3 vs. Group 2).
However, cotreatment of THP-1 cells with both LPS and EPS (Figure 2G, Group 4) reduced the phagocytic activity of THP-1 cells as compared to LPS treatment (Figure 2G, Group 2). Therefore, when LPS and EPS are added together, the phagocytic activity of THP-1 cells is suppressed compared to LPS-treated cells, but EPS alone can increase the phagocytic activity of the cells. Together, data in Figure 2 demonstrate that EPS diminishes LPS-induced proinflammatory biomarkers in THP-1 cells. Thus, the anti-inflammatory activity of EPS was then tested in vivo. Figure 3 displays that 12-O-tetradecanoylphorbol-13-acetate (TPA) stimulation caused ear edema in mice at 4, 16, and 24 h after TPA application (Group 2). EPS treatment (250, 500, and 750 µg/ear; Groups 4, 5, and 6, respectively) resulted in dose-dependent amelioration of the ear edema in mice at these time points. The effects of 500 and 750 µg/ear of EPS were close and were both similar to that of 500 µg/ear indomethacin (Group 3), an anti-inflammatory medicine. These data demonstrate that EPS has an obvious anti-inflammatory effect in vivo.
Molecular Mechanism-The Inhibitor Kappa B Kinase (IKK)/Nuclear Factor-κB (NF-κB) Pathway
LPS activates the proinflammatory IKK/NF-κB pathway [24,25]. Thus, whether EPS inhibits this pathway was examined in our study. As shown in Figure 4A, LPS obviously increased IKK phosphorylation (Lane 2); the addition of EPS decreased the level of IKK phosphorylation (Lane 3). Consistently, in Figure 4B, phosphorylation of the inhibitor of NF-κB (IκB) was elevated by LPS (Lane 2); the presence of EPS reduced the level of phosphorylated IκB (Lane 3). Furthermore, Figure 4C shows that LPS increased the level of nuclear p65, a subunit of NF-κB (Lane 2), whereas the addition of EPS obviously diminished the nuclear presence of p65 (Lane 3), indicating that EPS inhibited the nuclear translocation of NF-κB induced by LPS.
Taken together, data in Figure 4 suggest that EPS inhibits the LPS-activated IKK/NF-κB pathway.
Figure 4. The levels of total IKK (IKKα + IKKβ) and phosphorylated IKK (A), total IκB (the inhibitor of NF-κB) and phosphorylated IκB (B), nuclear p65 subunit, as well as lamin B1 and α-tubulin (C), were assayed using Western blotting. Relative band intensity (normalized) vs. Lane 2 was determined. Data are presented as the mean ± standard deviation of four independent experiments. * p < 0.05 and ** p < 0.005 vs. Lane 2 or between the indicated groups. No significant difference was found between Lanes 3 and 4 of (A), and between those of (B).
Thus, EPS inhibits ERK1/2 and has no effect on JNK, but activates p38, and p38 is known to promote inflammation through the activation of activator protein 1 (AP-1), which promotes the expression of inflammatory cytokines [27]. Nonetheless, p38 also activates the transcription factor nuclear factor erythroid 2-related factor 2 (Nrf2), which can counteract the action of NF-κB [27]. Hence, we tested the role of p38. The addition of a p38 inhibitor, SB202190, further decreased the LPS-induced production of TNF-α (Figure 2E, Group 4) and IL-6 (Figure 2F, Group 4) as compared to the inhibitory effect of EPS (Group 3; Figure 2E,F), indicating that p38 was involved in promoting the production of these proinflammatory cytokines in LPS and EPS cotreated cells, and the p38 inhibitor repressed this function of p38.
This is consistent with the known proinflammatory effect of p38. In Figure 4A,B, the p38 inhibitor did not obviously affect the levels of IKK phosphorylation and IκB phosphorylation that were already inhibited by EPS (Lane 4 vs. Lane 3). However, in Figure 4C, NF-κB nuclear translocation that was suppressed by EPS (Lane 3) was re-boosted by the addition of SB202190 (Lane 4), suggesting that p38 played a role in inhibiting NF-κB nuclear translocation. The results in Figure 4 are consistent with the notion that p38 activates Nrf2, which hinders the action of NF-κB, but does not act on IKK and IκB [27]. Therefore, whether EPS activated Nrf2 was characterized subsequently.
Molecular Mechanism-The Anti-Oxidative Pathway
Nrf2 induces the expression of anti-oxidative enzymes, including heme oxygenase-1 (HO-1) and the glutamate-cysteine ligase modifier subunit (GCLM) [27]. Therefore, whether EPS enhanced the expression of these enzymes and Nrf2 was examined. Figure 6A shows that LPS elevated the expression of HO-1 (Lane 2) as expected [27]; EPS alone (Lane 3) or EPS plus LPS (Lane 4) also promoted the expression of HO-1. The result of Lane 3 suggests that EPS promotes HO-1 expression. The same assay was performed, using PPT. As a result, Figure 6B reveals that HO-1 expression level was not significantly increased by PPT (Lane 3) compared to the control (Lane 1). In cells treated by both PPT and LPS, HO-1 expression was enhanced as expected due to the presence of LPS (Lane 4). Thus, PPT does not activate the expression of HO-1. Subsequently, Figure 6C,D exhibit that LPS increased the expression of GCLM (C, Lane 2) and Nrf2 (D, Lane 2). Similarly, EPS alone, or EPS plus LPS, also promoted the expression of GCLM (C, Lanes 3 and 4) and Nrf2 (D, Lanes 3 and 4). Overall, Figure 6 suggests that EPS, like LPS, can activate the Nrf2/HO-1 anti-oxidative pathway. Subsequently, when p38 inhibitor was added, the expression of HO-1, GCLM, and Nrf2 was obviously decreased in LPS and EPS cotreated cells (Figure 6A,C,D, Lane 5), as well as in LPS and PPT cotreated cells (Figure 6B, Lane 5), implying that p38 was likely involved in the activation of the Nrf2/HO-1 pathway in these cells.
To further assess the role of p38 in EPS-induced activation of the Nrf2/HO-1 pathway, Figure 7A shows that EPS alone activated p38 and increased the expression of Nrf2, HO-1, and GCLM (Lane 2), yet these reactions were all obviously inhibited by the p38 inhibitor (Lane 3), suggesting that p38 mediated EPS-induced activation of the Nrf2/HO-1 pathway. EPS does not inhibit the LPS-induced activation of JNK. Thus, whether JNK is involved in the activation of the Nrf2/HO-1 pathway in EPS and LPS cotreated cells was also examined. Figure 7B demonstrates that in cells treated with LPS or LPS + EPS, JNK was activated (Lanes 2 and 5, respectively) but was not in those treated with EPS alone (Lane 4). Moreover, the expression levels of HO-1, GCLM, and Nrf2 were enhanced with treatment by LPS (Lane 2), EPS (Lane 4), or LPS + EPS (Lane 5). These results are consistent with those of Figures 5 and 6. When a JNK inhibitor was added to cells treated with LPS or LPS + EPS, JNK activation was indeed suppressed (Figure 7B, Lanes 3 and 6, respectively). However, the expression levels of HO-1, GCLM, and Nrf2 were not obviously affected by the inhibitor (Figure 7B, Lanes 3 and 6). It suggests that JNK is not important in the activation of the Nrf2/HO-1 pathway in cells treated with LPS or LPS + EPS. The anti-oxidative effect of EPS was then examined through analysis of the intracellular level of reactive oxygen species (ROS). As shown in Figure 7C, LPS apparently enhanced the level of intracellular ROS (Group 2), and the addition of EPS significantly reduced the amount of ROS (Group 3). However, in the presence of p38 inhibitor, the ROS level was increased again (Group 4 vs. Group 3). Figure 7C supports the proposal that EPS activates p38, which in turn activates the Nrf2/HO-1 pathway, resulting in a reduced ROS level.
Discussion
In this study, the exopolysaccharide preparation of B.
amyloliquefaciens amy-1 was shown to impede the proinflammatory actions of LPS on THP-1 cells including: LPSinduced expression of iNOS, COX-2, TNF-α, and IL-6, phagocytic activity of the cells, and activation of the IKK/NF-κB pathway. In the animal tests, EPS effectively ameliorated ear inflammation in mice, and the efficacy of EPS was similar to that of the clinical anti-inflammatory medicine indomethacin. These results suggest that EPS possesses antiinflammatory activity. Our study further indicates that the underlying mechanisms are associated with several intracellular signaling pathways. EPS inhibits LPS-induced activation of the IKK/NF-κB pathway and that of ERK1/2, yet at the mean time activates p38 and the Nrf2/HO-1 pathway. The former reduces the intensity of inflammation caused by LPS; the latter enhances anti-oxidative function of the cells that decreases oxidative stress caused by inflammation, protects cells from oxidative damages, and also reduces the intensity of inflammation. Moreover, we found that JNK is likely not involved in activating the Nrf2 pathway in THP-1 cells. The compositions of EPS and PPT are obviously different, because the carbohydrate content of EPS is over 95% of its total mass, whereas PPT contains 52% carbohydrates and a substantial amount of proteins (over 10%). PPT was likely composed of ethanolinsoluble components of the MRS broth. The effect of PPT was also tested to explore whether medium ingredients might have contributed to the observed activity of EPS. As a result, EPS and PPT exhibited distinct effects on the growth of THP-1 cells. PPT mildly yet significantly inhibited the growth of THP-1 cells at 150 and 200 µg/mL, yet EPS did not. It is speculated that at higher concentrations the content of PPT might have significantly affected the osmotic pressure of the culture medium of THP-1 cells, leading to the observed dose-dependent effect of PPT on the survival of THP-1 cells. Moreover, unlike EPS, PPT did not show any anti-inflammatory activity. PPT had no effect on suppressing LPS-induced iNOS and COX-2 expression, did not inhibit nor activate MAPKs, and neither promoted HO-1 expression, suggesting that the ethanol-insoluble components of the medium have no anti-inflammatory activity. Taken together, we conclude that the activity of EPS was not contributed by medium ingredients. EPS demonstrated differential effects on MAPKs. EPS alone activated p38 and inhibited ERK1/2 in a dose-dependent manner, but did not activate, nor inhibit JNK. LPS was reported to activate ERK1/2, JNK, and p38 [28], and our data also confirmed this point. However, in LPS and EPS cotreated cells, ERK was inhibited, but JNK was activated, suggesting that EPS hindered LPS-induced activation of ERK1/2, but not that of JNK. The latter coincided with the finding that EPS had no effect on JNK. The reaction of p38 in LPS and EPS cotreated cells was different. In the presence of 10 and 50 µg/mL EPS, p38 activation was obviously decreased compared to the control (LPS-treated cells), suggesting that EPS inhibited LPS-induced p38 activation. However, the level of p38 activation actually increased with increasing concentrations of EPS in LPS and EPS cotreated cells ( Figure 5A), indicating that EPS on one hand impeded the action of LPS on activating p38, whereas on the other hand activated p38 by itself. The former again suggests that EPS prohibits the effect of LPS, and the latter agrees with the finding that p38 was activated in cells treated with EPS alone. 
Therefore, we propose that EPS can activate p38, yet in the meantime inhibits the effect of LPS on p38. Thus, the observed increasing levels of p38 activation with increasing concentrations of EPS in Figure 5A Lanes 3-5 was due to the activation of p38 by EPS. We have not found any other anti-inflammatory compound in the literature reported to behave this way on p38 when cotreated with LPS. ERK1/2 [29,30], p38 [31][32][33], and JNK [33,34], have all been suggested to activate the Nrf2 pathway. However, our data suggest that JNK is likely not involved or not important in the activation of the Nrf2/HO-1 pathway in THP-1 cells. In LPS and EPS cotreated cells, ERK1/2 was inhibited, whereas p38 and JNK were activated. However, inhibiting JNK in these cells did not obviously affect the expression of Nrf2, HO-1, and GCLM, but suppressing p38 effectively inhibited the expression of these factors. Theoretically, if JNK and p38 both activate Nrf2, repressing either one of them should not effectively impede the activation of the Nrf2/HO-1 pathway since the other kinase can still activate the pathway. However, in our data, p38 inhibitor efficiently hindered the activation of the Nrf2/HO-1 pathway in LPS + EPS-treated cells, but JNK inhibitor did not. Hence, our data suggest that p38 plays a crucial role in activating Nrf2 in THP-1 cells, but JNK does not. In previous studies JNK was indicated to activate the Nrf2 pathway in RAW264.7 cells [33,34]; however, other studies showed that the downregulation of JNK was accompanied with the upregulation of Nrf2 in a microglial cell line [35], and in HepG2 cells [36]. Therefore, the role of JNK on regulating Nrf2 seems to be distinct between different cell lines or tissues. Therefore, our proposed mechanisms underlying the anti-inflammatory effect of EPS are depicted in Figure 8. LPS activates the IKK/NF-κB pathway and the MAPKs p38, ERK1/2, and JNK. All MAPKs activate AP-1 [27]. Together, the IKK/NF-κB pathway and the MAPKs promote inflammation. Additionally, ERK1/2 and p38 activate the Nrf2/HO-1 pathway, which activates the expression of anti-oxidative factors and thus reduces oxidative stress. The activation of the anti-oxidative pathway by MAPKs may be a protective strategy exerted by the cell to reduce self-injury, or to avoid excessive inflammatory responses [37]. EPS inhibits LPS-induced IKK activation, leading to the suppression of the IKK/NF-κB signaling and resulting in a reduced inflammation intensity. Furthermore, LPS-induced MAPK activation is partially hindered by EPS. EPS inhibits ERK1/2 activation and hinders LPS-induced ERK1/2 activation. However, EPS does not have an obvious effect on JNK and does not interfere with LPS-induced JNK activation. EPS activates p38 but inhibits the action of LPS on p38. Consequently, JNK and p38 still activate some proinflammatory reactions through AP-1. Hence, in our data, EPS mitigated but did not completely prohibit LPS-induced TNF-α and IL-6 production ( Figure 2E,F, Group 3), yet adding a p38 inhibitor further reduced the levels of these cytokines ( Figure 2E,F, Group 4). Nonetheless, p38 also activates Nrf2, which reduces inflammation-caused oxidative stress and interferes with NF-κB activity due to the fact that Nrf2 activates the expression of anti-oxidative proteins and can counteract the action of NF-κB [27,38]. Thus, EPS can protect cells from oxidative damages and decrease the intensity of inflammation. 
The mechanism by which EPS can differentially regulate MAPKs is unclear; furthermore, the mechanism through which EPS activates p38 yet simultaneously inhibits p38 activation by LPS is intriguing. Further investigation is required to answer these questions. That EPS contains anti-inflammatory activity coincides with the finding that the extracellular extracts of B. amyloliquefaciens MBMS5 contained anti-inflammatory function [15]. Other probiotic exopolysaccharides suggested to have anti-inflammatory activity include those produced from Lactobacillus paraplantarum [39], Bacillus subtilis [40,41], Lactobacillus reuteri [16], Lactococcus lactis [42], Bifidobacterium longum [43], etc. However, the molecular mechanisms underlying the anti-inflammatory effects of these exopolysaccharides are mostly unclarified, except that B. subtilis exopolysaccharides were reported to inhibit NF-κB expression [41], or to induce M2 phenotype of macrophages [40]. This study provides a clearer picture for the potential molecular mechanisms mediating the anti-inflammatory effect of the exopolysaccharides of a probiotic. Treating THP-1 cells with EPS alone obviously increased the phagocytic activity of the cells (Figure 2G, Group 3). In addition to being a biomarker in inflammation, increased phagocytic activity of neutrophils or macrophages is also a sign of enhanced innate immunity. For example, polysaccharides isolated from Ganoderma lucidum were suggested to increase immunity, and one piece of evidence was that they increased the phagocytic activity of macrophages [44]. Polysaccharides isolated from Angelica dahurica were suggested to have immunomodulatory properties and were shown to enhance the phagocytic activity of RAW264.7 cells [45]. Therefore, EPS may have immunomodulatory activity, which deserves to be further investigated. In our previous study, EPS was found to activate bitter-taste receptors in enteroendocrine cells [22]. More and more evidence has suggested that bitter-taste receptors are associated with immune modulation [23,24,46]. Therefore, whether EPS activates bitter-taste receptors in THP-1 cells, and whether this activation is associated with the anti-inflammatory activity of EPS require further investigations. On the other hand, LPS binds to toll-like receptor 4 [28], whereas our data reveal that EPS counteracts the pro-inflammatory actions of LPS.
Hence, it is reasonable to speculate that EPS may work as an antagonist of toll-like receptor 4, resulting in inhibition on the effects of LPS. Yet, LPS-induced JNK activation was not affected by EPS (Figure 8). Put differently, EPS did not inhibit all the effects of LPS that are mediated by activated toll-like receptor 4. Therefore, whether EPS binds to toll-like receptor 4 needs to be carefully evaluated. In conclusion, the exopolysaccharides of B. amyloliquefaciens amy-1 possess antiinflammatory activity. The underlying mechanisms are associated with the inhibition of the IKK/NF-κB pathway and ERK1/2, and the activation of p38 followed by that of the Nrf2/HO-1 pathway. Together, these mechanisms reduce the severity of inflammation. These findings support the proposal that exopolysaccharides are a crucial element in the anti-inflammatory functions of at least some probiotics. Cell Culture, Treatments, and Cytotoxicity Assay THP-1 cells were purchased from Bioresource Collection and Research Center (Hsinchu, Taiwan) and cultured in antibiotic-free RPMI 1640 medium containing 10% fetal bovine serum at 37 • C in a humidified incubator supplied with 5% CO 2 . To perform EPS functional assays, the cells were seeded into 35-mm culture dishes (2 × 10 6 /well), and cultured in a serum-free, antibiotic-free medium containing 0.2% bovine serum albumin and 100 ng/mL TPA for 24 h to allow cells to differentiate into M0-state macrophage-like cells [47]. Then, the cells were treated with vehicle (the same volume of sterile distilled water), 1 µg/mL LPS (dissolved in sterile distilled water), and/or a desired concentration of EPS for the indicated duration. In experiments using a kinase inhibitor, cells were pretreated with 20 µM of the inhibitor for 30 min, followed by the addition of LPS and/or EPS. For cytotoxicity assays, cells were seeded in a 96-well plate and treated with TPA to allow differentiation into M0state cells as aforementioned. The cells were then treated with the indicated concentration of LPS, EPS, and/or PPT in a serum-free medium for 24 h. After washing the cells with phosphate-buffered saline (PBS, pH 7.4), a Cell Counting Kit-8 reagent (Target Molecule Corp., Boston, MA, USA) was added into the medium, and the cells were incubated for 4 h as per the supplier's instructions. Absorbance at 450 nm was then measured by using Varioskan LUX multimode microplate reader (Thermo Fisher Scientific Inc., Waltham, MA, USA). Cell viability relative to the control (cells treated with vehicle) was calculated. Preparation and Analysis of EPS and the Control (PPT) Exopolysaccharides were isolated from the culture medium (MRS broth) of B. amyloliquefaciens amy-1 through ethanol precipitation as described previously [20]. The isolated exopolysaccharides (i.e., EPS) were lyophilized, stored at −20 • C, and dissolved in sterile distilled water in appropriate concentrations for experiments. The exopolysaccharide solution was subjected to total carbohydrate analysis, total protein analysis, total lipid analysis, and total polyphenol analysis (N = 3) as described previously [20]. The control was prepared by incubating the same volume of sterile MRS broth without inoculating any micro-organism in a shaking incubator at 37 • C for 48 h like the cultivation of amy-1. The medium was then subjected to ethanol precipitation by the same protocol used for preparing EPS. 
The resulting precipitate (PPT) was collected, lyophilized, dissolved in sterile distilled water, and assayed for total carbohydrates and total proteins. In experiments, the vehicle control for EPS or PPT was the same volume of sterile distilled water. Protein Extraction and Western Blotting THP-1 cells were cultured and treated as aforementioned. Cells were incubated for 16 h for the analysis of iNOS and COX-2; 10 min for that of IKK and IκB; 90 min for nuclear p65; 1 h ( Figure 5) or 24 h (Figure 7) for MAPKs; and 24 h for HO-1, GCLM, and Nrf2. Cells were then washed twice with PBS and lysed through submersion in a proper volume of RIPA lysis buffer (Merck Millipore) containing a protease inhibitor cocktail and a phosphatase inhibitor cocktail. Lysed cells were scraped off the plate on ice and centrifuged. The supernatant was collected and subjected to Bradford Assay for determining protein concentrations. For nuclear NF-κB analysis, nuclear proteins were extracted using a Nuclear Extraction kit (Merck Millipore) following the manufacturer's instructions. Equal amounts of proteins were sampled from each treatment, subjected to acrylamide gel electrophoresis, and then transferred onto Immobilon-P Membrane PVDF membrane (Merck Millipore). After blocking the membranes with BlockPRO Protein-Free Blocking Buffer (Energenesis Biomedical Co., Taipei, Taiwan) for 1 h at room temperature, they were hybridized with primary antibodies diluted (2000-10,000 folds) in Immobilon Signal Enhancer for Immunodetection (Merck Millipore) at 4 • C overnight. Then, they were washed at least thrice with Tris-buffered saline (pH 7.4) containing 0.1% Tween-20 (TBS-T), and hybridized with HRP-conjugated secondary antibodies for 1 h at room temperature. Subsequently, they were washed thoroughly with TBS-T, and a proper volume of Super-Signal West Pico PLUS Chemiluminescent Substrate (Thermo Fisher Scientific), or Clarity Max Western ECL Substrate (Bio-Rad Laboratories, Hercules, CA, USA), was added to the membranes. Signals were detected using an imaging system (ChemiDoc XRS+ Imaging Systems, Bio-Rad), and band intensities were analyzed using the supplied software (Image Lab, Bio-Rad). 4.5. Enzyme-Linked Immunosorbent Assay (ELISA) for TNF-α and IL-6 1 × 10 6 THP-1 cells were seeded into 35 mm culture dishes and differentiated to M0-state macrophage-like cells by TPA as aforementioned. Then, they were washed twice with PBS and treated with SB202190 (Group 4 of Figure 2E,F) or vehicle (Groups 1, 2, and 3 of Figure 2E,F) for 30 min, followed by the addition of 1 µg/mL LPS and/or 100 µg/mL EPS, as indicated in the figures, and incubated for 16 h. The medium was collected and analyzed for TNF-α and IL-6 concentrations by using the respective ELISA kits (Invitrogen, Thermo Fisher Scientific) as per the manufacturer's instructions. Phagocytosis Assay THP-1 cells were seeded into 96-well culture plates (1 × 10 4 cells/well) and differentiated to M0 state by using TPA. After washing the cells twice with PBS, they were treated with 1 µg/mL LPS and/or 100 µg/mL EPS for 24 h. Then, they were washed twice with PBS, incubated in PBS containing 40 µg/mL neutral red for 1 h at 37 • C, and washed twice with PBS again. Subsequently, the cells were incubated in a lysis solution (50% acetic acid and 50% ethanol, v/v) for 1 h at room temperature. Absorbance at 492 nm was then measured using a microplate reader (Varioskan™ LUX multimode microplate reader, Thermo Fisher Scientific). 
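The readouts described in these methods (CCK-8 and neutral red absorbance, densitometric band intensities) are all reported relative to a control. The sketch below shows the minimal normalisation arithmetic, assuming hypothetical readings; for densitometry, each band is first divided by its loading control and then scaled to a chosen reference lane, in line with the "relative band intensity vs. Lane 2" convention used in the figures.

```python
import numpy as np

def relative_to_control(values, control):
    """Express raw readings (e.g. CCK-8 absorbance at 450 nm or neutral-red
    absorbance at 492 nm) as a percentage of the vehicle-treated control."""
    return 100.0 * np.asarray(values, dtype=float) / float(control)

def normalized_band_intensity(target, loading, reference_lane=1):
    """Densitometry normalization: divide each target band by its loading
    control (e.g. alpha-tubulin), then scale so the reference lane
    (index 1 here, standing in for the LPS-treated lane) equals 1."""
    ratio = np.asarray(target, dtype=float) / np.asarray(loading, dtype=float)
    return ratio / ratio[reference_lane]

# Hypothetical readings for four wells/lanes.
print(relative_to_control([0.82, 0.79, 0.75, 0.68], control=0.85))
print(normalized_band_intensity([1200, 5400, 2100, 2300], [900, 950, 880, 910]))
```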
Animal Tests Eight-week-old male ICR mice were purchased from BioLASCO (Taipei, Taiwan) and allowed 2-week acclimation before the experiments. The mice were housed under a 12 h light-dark cycle, allowed free access to water and food, and fed with a regular laboratory rodent diet. The mouse ear edema assay was conducted as described previously [48], with modifications. The mice were randomly divided into six groups of five. TPA (3 µg/ear dissolved in 20 µL acetone) was applied on the right ears of the mice in Groups 2-6, and 20 µL acetone was applied to the right ears of the mice in Group 1. One hour later, 500 µg/ear indomethacin (dissolved in 20 µL of acetone) was applied to the right ears of the mice in Group 3; 250, 500, and 750 µg/ear EPS (dissolved in 20 µL WA solvent, which was 50% water and 50% acetone) was applied to the right ears of the mice in Groups 4, 5, and 6, respectively. For mice in Groups 1 and 2, 20 µL of WA solvent was applied to the right ears. The thickness of the treated ears was measured using a dial thickness gauge (Peacock, Ozaki, Tokyo, Japan) before stimulation and 4, 16, and 24 h after TPA stimulation. Intracellular ROS Assay THP-1 cells were seeded into 35-mm culture dishes (1 × 10 6 cells/dish) and differentiated to the M0 state by using TPA. The cells were then treated with serum-free media containing 100 µg/mL of EPS for 1 h. After washing the cells twice with PBS, they were incubated in serum-free media containing 10 µM dichlorofluorescin diacetate and 20 µM SB202190 (or vehicle) for 30 min. The cells were washed with PBS again and treated with serum-free media containing 1 µg/mL LPS (or vehicle) for 1 h. Then, they were washed twice with PBS and scraped off the dish under PBS. The resulting suspension was analyzed for fluorescence intensity (excitation wavelength 504 nm and emission wavelength 529 nm) in a fluorescence detector (Modulus Single Tube Multimode Reader, Turner BioSystems, Sunnyvale, CA, USA). Statistical Analysis Data were analyzed through one-way analysis of variance followed by Scheffe's post hoc test by using the Microsoft program Excel. Significant difference was identified when p < 0.05.
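The statistical procedure described above (one-way ANOVA followed by Scheffé's post hoc test) can be reproduced outside Excel. The sketch below applies the standard Scheffé pairwise criterion to hypothetical group data, using SciPy only for the ANOVA F-test and the critical F value.

```python
import numpy as np
from scipy import stats

def scheffe_pairwise(groups, alpha=0.05):
    """One-way ANOVA followed by Scheffé's post hoc test.
    `groups` is a list of 1-D arrays of replicate measurements."""
    k = len(groups)
    n_i = np.array([len(g) for g in groups])
    N = n_i.sum()
    means = np.array([np.mean(g) for g in groups])
    # Within-group (error) mean square from the one-way ANOVA decomposition.
    ss_within = sum(((np.asarray(g) - m) ** 2).sum() for g, m in zip(groups, means))
    ms_within = ss_within / (N - k)
    f_crit = stats.f.ppf(1.0 - alpha, k - 1, N - k)
    print("ANOVA:", stats.f_oneway(*groups))
    for i in range(k):
        for j in range(i + 1, k):
            # Scheffé statistic for the pairwise contrast; significant if it
            # exceeds (k - 1) times the critical F value.
            f_s = (means[i] - means[j]) ** 2 / (ms_within * (1 / n_i[i] + 1 / n_i[j]))
            print(f"group {i} vs {j}: F_S = {f_s:.2f}, "
                  f"significant = {f_s > (k - 1) * f_crit}")

# Hypothetical readings for control, LPS, and LPS + EPS groups.
scheffe_pairwise([np.array([1.0, 1.2, 0.9, 1.1]),
                  np.array([3.1, 2.8, 3.4, 3.0]),
                  np.array([2.0, 2.2, 1.8, 2.1])])
```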
2022-09-09T16:21:41.075Z
2022-09-01T00:00:00.000
{ "year": 2022, "sha1": "83b8e68d2457e71d6c444d979b0294342b5cbb76", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/23/18/10237/pdf?version=1662468561", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "618b8c0293f74cbe71df234de02a4d39298fff38", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
246568629
pes2o/s2orc
v3-fos-license
An Investigation on the Vortex Effect of a CALM Buoy under Water Waves Using Computational Fluid Dynamics (CFD) : Floating offshore structures (FOS) must be designed to be stable, to float, and to be able to support other structures for which they were designed. These FOS are needed for different transfer operations in oil terminals. However, water waves affect the motion response of floating buoys. Under normal sea states, the free-floating buoy presents stable periodic responses. However, when moored, they are kept in position. Mooring configurations used to moor buoys in single point mooring (SPM) terminals could require systems such as Catenary Anchor Leg Moorings (CALM) and Single Anchor Leg Moorings (SALM). The CALM buoys are one of the most commonly-utilised type of offshore loading terminal. Due to the wider application of CALM buoy systems, it is necessary to investigate the fluid structure interaction (FSI) and vortex effect on the buoy. In this study, a numerical investigation is presented on a CALM buoy model conducted using Computational Fluid Dynamics (CFD) in ANSYS Fluent version R2 2020. Some hydrodynamic definitions and governing equations were presented to introduce the model. The results presented visualize and evaluate spe-cific motion characteristics of the CALM buoy with emphasis on the vortex effect. The results of the CFD study present a better understanding of the hydrodynamic parameters, reaction characteristics and fluid-structure interaction under random waves. Introduction In recent times, the most commonly-utilised type of offshore loading terminal is the Catenary Anchor-Leg Mooring (CALM). The CALM buoy is a floating buoy with catenary chain legs secured to anchors or piles that anchor it to the bottom, and the buoy also has attached marine hoses [1][2][3][4][5]. As a result of sheer limited inertia of the CALM buoys, mooring line reactions are extremely sensitive to waves, posing a significant wear risk to the mooring lines. Extreme waves can even cause mooring lines to break and affect the behaviour of marine hoses as reported in some CALM buoy system failures [6][7][8][9]. As a result, studying the motions of the CALM buoy in mild, squall and severe wave conditions is extremely important [10][11][12][13], as they also influence hose mechanics [14][15][16][17][18][19]. Over the years, there has been a number of motion response phenomena of floating CALM buoys, however, there are limited computational fluid dynamics (CFD) investigations presented. Different questions have been answered on marine riser mechanics [20][21][22], CALM buoy dynamics [23][24][25][26][27] and CALM buoy motion stability [28][29][30][31], but few works addressed other issues that encompass the motion response of CALM buoy in CFD, moored aspects of single point mooring (SPM) systems, flow vorticity, pressure distribution on CALM buoy, velocity impact on CALM buoy, and vortex-induced motion (VIM) on CALM buoys or similar floating offshore structures (FOS). The sketch the (un)loading operation on a CALM buoy with wave forces and boundary conditions and configured as (a) Chinese-lantern and (b) Lazy-S configurations in Figure 1. It shows the hose-string was attached at an end (named 'End A') under the buoy while the second end (named 'End B') connected unto the Pipeline End Manifold (PLEM). 
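Before turning to the CFD model, a crude order-of-magnitude feel for the wave loads acting on a buoy hull such as the one sketched in Figure 1 can be obtained from linear (Airy) wave kinematics combined with a Morison-type load model. The sketch below is purely illustrative: the diameter, draft, wave parameters, and drag/inertia coefficients are assumed values, and this is not the ANSYS Fluent model used in this study.

```python
import numpy as np

g = 9.81          # gravitational acceleration, m/s^2
rho = 1025.0      # sea water density, kg/m^3

def wavenumber(omega, depth, tol=1e-10):
    """Solve the linear dispersion relation omega^2 = g*k*tanh(k*d) by fixed-point iteration."""
    k = omega**2 / g                       # deep-water initial guess
    for _ in range(200):
        k_new = omega**2 / (g * np.tanh(k * depth))
        if abs(k_new - k) < tol:
            break
        k = k_new
    return k

def morison_force(t, D, draft, depth, H, T, Cd=1.0, Cm=2.0, x=0.0, nz=200):
    """Inline Morison-type wave force (N) on a surface-piercing vertical cylinder
    of diameter D and draft `draft`, integrated over the submerged length.
    Linear (Airy) kinematics; Cd and Cm are illustrative coefficients."""
    a = H / 2.0
    omega = 2.0 * np.pi / T
    k = wavenumber(omega, depth)
    z = np.linspace(-draft, 0.0, nz)       # z = 0 at the still-water line
    decay = np.cosh(k * (z + depth)) / np.sinh(k * depth)
    u = a * omega * decay * np.cos(k * x - omega * t)          # horizontal velocity
    du_dt = a * omega**2 * decay * np.sin(k * x - omega * t)   # horizontal acceleration
    dF = rho * Cm * np.pi * D**2 / 4.0 * du_dt + 0.5 * rho * Cd * D * u * np.abs(u)
    return np.trapz(dF, z)

# Example: 10 m diameter hull, 2 m draft, 3 m wave with a 9 s period in 50 m of water.
print(morison_force(t=0.0, D=10.0, draft=2.0, depth=50.0, H=3.0, T=9.0))
```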
In some recent studies [32][33][34][35], coupled simulations of a CALM buoy using a CFD-FEM model for wave-induced motion (WIM), whereby the mooring system's FEM model is linked to the CALM buoy CFD model, and the Level-set method is utilised to simulate waves and free surface effects. The authors made use of MOORING3D in conjunction with a motion solver with 6 degrees of freedom (6DoFs). The CFD module calculated the hydrodynamic loading on the moored buoy, using the large eddy simulation (LES) applied on the turbulence model with the Finite-Analytic Navier-Stokes (FANS) code. They concluded that the WIM of the buoy is dominated by inertial and viscous effects of the hull, and that the results of coupled study on free-decay and wave-induced motions match well with model testing. In a similar study, Toxopeus et al. [36] conducted some CFD simulations under waves and calm water using a self-propelled free running 5415M ship model and presented some motion response distributions. Bandringa et al. [37] presented a CFD investigation which was validated using a linked CFD-dynamic mooring model for simulating the behaviour of a shallow water CALM buoy in extreme waves. In their study, a Navier-Stokes based finite-volume, VoF (volume of fluid) CFD solver was coupled with a dynamic mooring model to simulate an interactively moving CALM buoy in a horizontal mooring system. The CFD results were compared to model tests conducted during a Com-FLOW-2 joint industry project (JIP) in MARIN's shallow-water basin, and the authors concluded that the validation study focuses on accurately predicting the CALM buoy's coupled responses in extreme, regular shallow-water waves. In addition, they opined that the CFD simulations in which the mooring system is represented by a linearly equivalent spring matrix, including cross terms, are offered as an alternative to simulations with a fully connected dynamic mooring setup. In another JIP called EXPRO-CFD EU FP5 project reported by Woodburn et al. [38,39], some predictions were conducting by utilizing results from a commercial CFD software with existing hydromechanics tools to forecast the response of floating structures in waves and currents, including viscous effects for the response of CALM buoys in waves. The dynamics of the floating structure, its moorings, and risers are modelled using the AQWA-NAUT platform, and CFD delivers the whole set of hydrodynamic forces and moments at each time step in the simulated motion. The CALM buoy has a 23 m diameter and a 2 m broad skirt attached 1m above the keel; the effects of flow separation off this skirt and the related viscous damping on the buoy's motions were predicted to be considerable, especially around its natural period. Further experiments showed that the flaw in the potential flow technique appeared within the formulated extra viscous damping rather than the drag coefficient model values. Bunnik et al. [40] covered experimental work conducted to obtain insight into the tension variations in the mooring lines and export risers of a CALM buoy through a series of model studies. The tests were conducted on a model with on-linearities in the wave forces on the buoy, such as those caused by the presence of the skirt, were investigated via captive experiments in regular and irregular waves. 
The authors opined that to establish the dampening of the buoy's oscillations and acquire the natural periods, decay tests were necessary as well as the mooring system's dynamics, and the consequent dampening which has substantial impact on the buoy's motions. Different CFD studies include validation studies on CALM buoys presented by various researchers [41][42][43], application using different CFD modelling methods for fluid studies [44][45][46] and coupled models [46][47][48]. Figure 2 shows a typical CALM buoy in the Baltic Sea offshore Lithuania installed by SOFEC [49]. In this study, a numerical investigation is presented on a CALM buoy model conducted using CFD in ANSYS Fluent version R2 2020. The aim of the research is to investigate the vortex effect of water waves on the CALM buoy. Section 1 presents some background on the research while Section 2 presents some governing equations and theoretical models. Section 3 presents the numerical model for the CFD study while Section 4 presents the results with some discussion. Section 5 presents the concluding remarks on the study. Theoretical Model The theories on the hydrodynamics and statics for CALM buoy with attached hoses is presented in this section. Motion Forces, Drag and Damping Formulation The formulation for the drag and damping of the buoy is based on some assumptions. Buoy Model Assumptions To achieve this, the following model assumptions are considered in this study: 1. The body of the CALM buoy model is cylindrically shaped; 2. The buoy has a circular skirt attached to it; 3. The skirt is made from solid plates with thin thickness; 4. The skirt is devoid of perforations, except where fairleads or mooring lines are attached; 5. Viscous contributions of damping from skin friction can be neglected; 6. It is assumed that the linear radiation-diffraction computations can be utilised to obtain the CALM buoy's damping and added masses in the following: linear heave, linear surge, and linear pitch; 7. It is assumed that the drag loads on the CALM buoy's bilges are very small; 8. It is assumed that the drag loads on the CALM buoy's skirt can influence the quadratic pitch and heave damping contributions; 9. The local fluid velocity around the skirt's circumferential area utilised in computing these damping contributions. This is conducted by considering the CALM buoys' velocity, but ignoring the flow's disturbance due to the buoy's presence and the wave orbital motions; 10. It is assumed that the CALM buoy hull is positioned in X-Z axes, and subject to a flow direction; 11. The buoy has 6 degrees of freedom (6DoFs) as illustrated in Figure 3. The buoy is considered typically as a single system, and as a floating buoy with a rigid body. Added Mass & Damping Coefficients The experimental investigation by Cozijn et al. [50,51] were conducted using forced oscillations for heave and pitch damping on CALM buoy. In that study, the loads in the 6component force frame, as well as the motions of the CALM buoy model, were measured. The measured signals were subjected to a harmonic analysis, where the applied motion is used as the lead signal. The very first harmonic is the observed loads' amplitudes and phase angles. The damping coefficient and the added mass coefficient were calculated using the CALM buoy's motion. Figure 4 is the coordinate system of the CALM buoy hull. Cozijn et al. 
[51] obtained the expressions for typical heave motion as given in Equations (1) and (2), where M denotes the dry mass of the CALM buoy model, Czz denotes the heave hydrostatic restoring force coefficient, Fz and εFz denote the amplitude and phase lag of the measured heave force, and ω denotes the frequency of the applied heave motion. Load Computations on Buoy's Skirt The computation of the loads on the skirt is based on the CALM buoy's geometry, as illustrated in Figure 5. It shows the diameter of the skirt, the diameter of the CALM buoy's body, and the tangential angle α obtained from the skirt's circumference. Based on the assumption that the buoy is circular with a circular skirt section, the area of the skirt can be obtained using Equation (3). To obtain the radius of the buoy section, a representative skirt radius is considered using Figure 5. This representative radius provides an assumed definition of the position, or locus, at which the local drag loads are taken to act, and is given in Equation (4). To obtain the width of the CALM buoy's skirt, the measurement is taken from the outer rim of the skirt to the outer surface of the buoy's body, as expressed in Equation (5). Viscous Damping Load Computations A semi-empirical model including the drag term from Morison's formulation is used to calculate the viscous contributions to the CALM buoy heave and pitch damping [51]. From the description provided of the CALM buoy, the skirt's geometry and the load computation, it is possible to compute the load on a skirt segment. By integrating this force around the circumference of the skirt, the quadratic heave and pitch damping loads are derived. The expression for an infinitesimal section dA of the skirt area on which the local drag loads act, as provided in the literature [50,51], is given in Equation (6). The local velocity on the skirt area dA is a function of the velocities of the pitch (θ), roll (φ), and heave (z) motions, and is computed using Equation (7). The drag coefficient CD is the only empirical parameter in Equation (8) for the model, and a suitable value for it should be chosen. Keulegan and Carpenter [52] discovered that, when the flow is oscillatory, the dimensionless drag coefficient CD may be affected by the amplitude of the motion. Thus, the dimensionless KC number is frequently used to express the amplitude. In this study, it implies that the CD value may be influenced by the motion amplitude of the skirt part in question, as model-tested in MARIN [50,51]. In summary, the drag loads examined on the CALM buoy skirt are flow related. At a sharp edge, it is expected that separation and formation of eddies will occur, and the CD value is taken to be independent of the Reynolds number Re. This independence of Re was also mentioned by Sarpkaya & O'Keefe [53] for oscillating flow past sharp-edged plates. Damping Computations on Buoy When evaluating the CALM buoy damping in the frequency domain (FD), the total frequency-dependent damping is made up of a linear potential term and an equivalent linear viscous term. The quadratic drag loads on the skirt are represented by the viscous damping term [50,51]. As a corollary, it is dependent on the amplitude of the motion, as in Equation (9). For the heave damping, it is also assumed that the CALM buoy performs a harmonic motion, in the form given in Equation (10). 
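As a rough illustration of the skirt geometry entering Equations (3)-(5) and of the KC number used to express the motion amplitude, a minimal Python sketch is given below. The diameters, heave amplitude and period are illustrative values only (they are not taken from the MARIN model tests [50,51]), and the annular-area, mid-radius and width relations are the standard geometric forms assumed here to correspond to Equations (3)-(5).

# Minimal sketch of the skirt geometry and KC number (illustrative values only).
import math

D_skirt = 14.0   # outer diameter of the skirt in m (assumed value)
D_buoy = 10.0    # diameter of the buoy body in m (assumed value)

# Annular skirt area, the geometric form assumed here for Equation (3).
A_skirt = math.pi / 4.0 * (D_skirt**2 - D_buoy**2)

# Representative skirt radius (mid-radius of the annulus) where the local drag
# loads are assumed to act, in the spirit of Equation (4).
r_rep = 0.5 * (D_skirt / 2.0 + D_buoy / 2.0)

# Skirt width, measured from the buoy body to the skirt rim, Equation (5).
w_skirt = 0.5 * (D_skirt - D_buoy)

# One common Keulegan-Carpenter definition for a harmonic heave motion
# z(t) = z_a*sin(omega*t): KC = u_max * T / L, with u_max = omega * z_a and
# L a characteristic length (here the skirt width is used for illustration).
z_a = 1.0                    # heave amplitude in m (assumed)
period = 8.0                 # oscillation period in s (assumed)
omega = 2.0 * math.pi / period
KC = (omega * z_a) * period / w_skirt

print(A_skirt, r_rep, w_skirt, KC)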
For the pitch damping, the KC values considered in the literature [50,51], are given as: Force Computations on Buoy The forces that act on the buoy assume an irrotational motion and an ideal fluid and neglect the effect of viscosity; thus, they are calculated by the use of linear wave theory [54][55][56][57]. For small waves, the linearization of the dynamic free-surface boundary condition is assumed. For small buoys, an approximation can be carried out to determine the excitation force, while the diffracted wave is neglected [58]. However, for cylindrical shaped buoys, Froude-Krylov force can be calculated for its heave motion, as presented in Equation (13). In principle, Froude-Krylov force assumes an ideal flow, where the pressure field is undisturbed, by applying the linear airy wave theory. where a is the radius of the buoy, h is the water depth, d is the draft of the buoy, k is the wave number, 2 is the circumference of the buoy, and J1 is the first-order Bessel function. The hydrostatic stiffness, Fh for a cylindrical buoy is given by Equation (14); where a is the radius of the buoy, g is the gravitational constant, and is a vector representing the translational degree of freedom of the buoy. The heaving and swaying amplitude motions of the buoy is a factor of its slenderness ratio, as columnar buoys have lesser area around the water line than cylindrical buoys. This was given by the study by Jiang et al. [59] and Newman [60] on the heave inherent period of the buoy, given by Equation (15): where wo is the heave natural frequency of the buoy, ρ is seawater density, g is gravitational constant, M is the mass of the buoy and Ao is the waterline area of the buoy. FSI Formulation & Governing Equations The formulation for the fluid-structure interaction (FSI) and governing equations are presented in this section. The governing equations used in numerical modelling is based on applying Newton's 2nd law of motion, Morison's equation, hydrodynamic equations, Navier-Stokes equation and Continuity equation. Details on stability and motion equations exist in texts [61][62][63][64][65]. For irregular waves in the CFD model, the flow considered here is turbulent, thus we neglect the forces due to elasticity and the surface tension. Newton's 2nd Law of Motion The Newton's law of Equation is numerically presented in Equation (16), where the Newtonian Force, F is the external load of the system, Cv is the viscous damping, k is the spring constant, kx is the elastic force component and Ma is the inertia of the system. The Newtonian Force is given by the sum of the inertia force of the system, the viscous damping load and the elastic force components (also called the stiffness load of the system). Navier-Stokes Equations The rule of Navier-Stokes Equations included here are for thermo-fluid incidents directed by these governing equations, based on the laws of conservation. The Navier-Stokes (N-S) equations is the broadly applied mathematical model to examine changes in those properties during dynamic interactions, thermal interactions, and fluidic motions. The Navier-Stokes equations assume that the fluid, at the scale of interest, is a continuum, in other words is not made up of discrete particles but rather a continuous substance. Hence, the Navier-Stokes equations consists of three (3) conservation laws: a time-dependent continuity equation for conservation of mass, three time-dependent conservation of momentum equations and a time-dependent conservation of energy equation. 
For fluid that is considered incompressible and non-Newtonian, the Navier-Stokes Equations are applied [66,67]. The summation of the body force, pressure gradient and viscous force make up the fluid inertia. This is given in Equations (17)-(19), where P is the pressure, µ is the kinematic viscosity, Fx is the body force per unit mass in x-direction, Fy is the body force per unit mass in y-direction and Fz is the body force per unit mass in zdirection. Vortexes are basically formed as a result of instabilities generated from flow separations, as they travel through the hull. The flow is assumed to be incompressible; i.e., the energy of the vortexes are allowed to continuously increase or damp away, depending on the situation. Continuity Equations The Euler equation for incompressible flow is presented in Equation (20). In this paper, the CFD study was carried out for incompressible unsteady flow using continuity equations [67]. The dimensionless vector form of the continuity equations can be written as: The equations used in the formulation of finite volume method for incompressible and unsteady flow which is based on Navier Stokes and continuity equations, expresses nonlinear dimensionless parameters in Cartesian coordinate, as expressed in Equations (21) and (22). where Φ is a non-dimensional velocity vector component, expressed in three directions; u, v and w. The Reynolds number 'Re' is expressed in terms of the flow incidence velocity U, the fluid viscosity v, and the cylinder diameter D, as given in Equation (23): where the Reynold's number Re is a measure of the flow velocity, the column diameter, and the kinematic viscosity of water. However, mathematically, the expression for drag force is: Therefore, the hydrodynamic drag in the X-direction is calculated as: However, considering the Keulegan Carpenter number, KC [51,52] which is given in Equation (26), as a function of the frequency of the oscillating wave fw, vortex shedding and vortex induced motion (VIM) can be measured, thus the surface wave becomes an important parameter. Morison's Equations Based on the forces on the risers, the Morison's equation was used, as it considers the wave forces acting on a cylinder, due to the relative motion of body immersed in the fluid [68,69]. Thus, it yields the sum of the Froude-Kyrov force FFK, the hydrodynamic force of the fluid, FH, and the drag force, FD. Morison's equation is expressed in Equation (28): where V is the volume of the body, A is the area of the body, Cd is the drag coefficient, Cm is the inertial force coefficient. The equation can be simplified, as the fluid force is equal to the sum of the drag force and the force of inertia, thus Equation (29): The global design conducted in this investigation was carried out under irregular wave, and the damping was calculated using the modified Morison Equation [46]. where V is the volume of the body, A is the area of the body, D is the diameter of the body, Cd is the drag coefficient, Ca is the added mass coefficient, Cm is the inertial force coefficient, and the Vr is the relative velocity of fluid particles. Model Description The model is the numerical design of the CALM buoy, carried out in ANSYS Fluent. The effect of vortex flow around the buoy was also investigated using CFD in the present study. in the present study. Other comparatively-related researches on CALM buoys include experimental investigations that could be used to verify the flow behavior on buoys [26,27]. 
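Before the CFD setup is described, the non-dimensional quantities and Morison-type loading introduced above can be illustrated with a short, self-contained Python sketch. All numerical values below are illustrative assumptions rather than the parameters of the present CFD model, and the Morison expression shown is the standard relative-velocity form rather than a reproduction of Equation (30).

import math

rho = 1003.0      # sea water density in kg/m^3, as quoted in the materials section
nu = 1.0e-6       # kinematic viscosity of water in m^2/s (assumed)
D = 10.0          # characteristic buoy diameter in m (assumed)
U = 0.45          # flow velocity in m/s (one of the studied cases)
f_w = 0.125       # wave frequency in Hz (assumed)

# Reynolds number in the spirit of Equation (23): Re = U * D / nu
Re = U * D / nu

# Drag force in the flow (X) direction, Equations (24)-(25), with assumed Cd and area.
Cd = 1.0
A_proj = D * 5.0                       # projected area in m^2 (assumed 5 m draft)
F_drag = 0.5 * rho * Cd * A_proj * U**2

# Keulegan-Carpenter number in the spirit of Equation (26): KC = U / (f_w * D)
KC = U / (f_w * D)

# Morison-type load per unit length on a vertical cylinder (standard
# relative-velocity form, analogous to Equations (28)-(30)).
Ca = 1.0                               # assumed added-mass coefficient
u, u_dot = 0.4, 0.05                   # fluid velocity (m/s) and acceleration (m/s^2), assumed
x_dot, x_ddot = 0.1, 0.0               # body velocity and acceleration, assumed
area_per_len = math.pi * D**2 / 4.0    # displaced area per unit length
f_morison = (rho * area_per_len * u_dot
             + rho * Ca * area_per_len * (u_dot - x_ddot)
             + 0.5 * rho * Cd * D * (u - x_dot) * abs(u - x_dot))

print(Re, F_drag, KC, f_morison)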
The present study was conducted using ANSYS Fluent R2 2020 [70][71][72][73][74], in 2D bounded walls, and in 3D. However, the results of the 2D study were only presented herein. The k-epsilon turbulence model was used to develop the 2D CFD model. In the model setup, the velocity specification method was based on magnitude normal to boundary. The reference frame was absolute and the Gauge Pressure of 0Pa was considered. The maximum iterations used per time step were 2,000, with a time step size of 0.01 for 250 time-steps. In the turbulence model, the turbulence intensity was set at 5% and turbulent viscosity ratio was 10, given in Table 1. The momentum input is taken in absolute reference frame. The turbulence model's momentum was considered in the inlet zone using the magnitude velocity specification method which is applied normal to the boundary. CFD Model The CFD model is a pressure-based transient CFD model that uses k-epsilon (2 equations) standard turbulence model, with standard wall functions applied in near-wall treatment. The solver applies absolute approximations in the velocity formulation in 2D plane, for X and Y axes. The corresponding planar velocities are u and v, respectively. For the kepsilon model, the model constants are given in Table 2. Mesh Details It was modelled in 2D as one body with surface area of 9,721.5 m 2 , 1 face, 5 edges and 4 vertices. The domain surface body has 82,846 Domain Nodes and 81,313 Elements, with mesh as shown in Figure 6. The statistics for the mesh details of the CALM buoy model conducted in ANSYS Fluent can be seen in Table 3. It summarises the amount of meshed sections applied on the 2D CFD model. Solution Method The solution method considered in the CFD modelling is pressure-velocity coupling. For the spatial discretization, the gradients were based on least square cell based gradient, second order pressure, second order upwind momentum, first order upwind turbulent kinetic energy, first order upwind turbulent dissipation rate and second order implicit turbulent dissipation rate. The result of the scaled residuals, with an absolute criterion of 0.01 is presented in Figure 7, for 34,000 iterations. Boundary Conditions The boundary conditions considered for this CFD model are presented in Tables 4 and 5. For the buoy boundary, it was set as stationary wall with a "no slip" shear condition. For the other 2 outer adjacent boundary walls, the specified shear condition was applied, using standard roughness model, roughness height of 0m and roughness constant of 0.5 m. The buoy setup in ANSYS CFD showing the boundary conditions is presented in Figure 8. Materials & Fluid Structure Interaction The CFD study shows fluid structure interaction (FSI) using ANSYS Fluent R2 2020. The buoy model was developed using aluminum and steel materials. The density for sea water is 1,003 kg/m 3 , the density for fresh water at normal temperature of 20 °C is 998.2 kg/m 3 , the density of aluminum is 2,719 kg/m 3 , while the density of steel is 7,800 kg/m 3 . Details of the fluid properties considered are given in Table 6. The parameters for the 2D CFD buoy model in ANSYS Fluent are given in Table 7 and Figure 9. Result and Discussion The results and discussion of the CFD study are presented in this section. Results of Flow Vorticity around Buoy From the result obtained from the CFD study, it can be observed that the pressure and velocity have an effect on the profile of the flow around the CALM buoy. 
The resulting profile in Figures 10 and 11 shows the velocity profiles for the flow around the CALM buoy. It can be observed that the flow from the inlet (LHS) moves towards the outlet (RHS) of the wall. The flow creates different flow patterns, depending on the force filed developed around the structure when the velocity is 0.45 m/s, 1.0 m/s and 10 m/s. This speed is chosen based on the environmental condition used in investigating the flow characteristics and motion behaviour. Another CFD model was carried out in ANSYS Fluent to investigate the sloshing effect of water waves on the CALM buoy. A total of 2 different velocities, of magnitudes 0.45 m/s and 1.5 m/s, were investigated for the CFD analysis of the CALM buoy, as they represent 2 different ocean conditions. This is confirmed in the streamline series for the velocity along directions in Figures 12 and 13. In the velocity contours in Figures 10 and 11, it can be observed that the higher the velocity, the higher the vorticity around the CALM buoy. Results of Vortex Effect from the Flow Regimes The investigation of the vortex effect on the buoy was conducted using different flow regimes as shown in Figure 11. It was studied using 3 different cases of velocities: 0.45 m/s for normal operation condition, 1.0m/s for extreme weather condition and 10m/s for survival weather condition. Due to the waves generated on the buoy, there was some ripple effect from viscous damping on the buoy. It was also noticed that there was a higher vortex flow on the buoy under a higher velocity profile, which is attributed to contributions from linear and quadratic damping from the buoy motion responses in the heave, roll and pitch motions. This can be seen in the streamline series in Figures 12 and 13. Based on the generated linear contributions partially resulting from radiated waves and the frictional viscous effect, it can be opined that the buoy has some eddies in the direction of the flow. On the other hand, the generated quadratic contributions partially resulting from the eddies separating the buoy's vertices, and the sharp edges around the buoy's skirt, it was found to develop much higher buildup of ripple-like vortices. Hence, these buildups result in some vortex effect on the buoy, however, further studies are recommended on postprocessing the vortex-induced motion (VIM) of the buoy. Results of Pressure, Velocity and Wall Shear Profiles on the Buoy The investigation of the pressure, velocity and wall shear profiles on the buoy were conducted using different flow regimes in Figures 14-19. In Figures 14 and 15, the pressure profile can be seen to be higher with higher magnitude of velocity as seen for 0.45 m/s case is higher than the 1.5m/s case. Similarly in Figures 16 and 17, the velocity profile can be seen to be higher with higher magnitude of velocity as seen for 0.45 m/s case is higher than the 1.50 m/s case. In Figures 18 and 19, the wall shear can be seen to have highest distribution in a ripple for 0.45 m/s case which is higher than the 1.50 m/s case. In this investigation, the highest velocity distribution for 0.45 m/s case is 1.963 m/s in Figure 14, while the highest velocity distribution for 1.50 m/s case is 2.611 m/s in Figure 15. Moreover, the highest-pressure distribution for 0.45 m/s case is 12.18 Pa in Figure 16, while the highest-pressure distribution for 1.50 m/s case is 5.766 Pa in Figure 17. 
Lastly, the highest wall shear distribution for 0.45 m/s case is 0.5809 Pa in Figure 18, while the highest wall shear distribution for 1.50 m/s case is 0.5963 Pa in Figure 19. In this investigation, the wall shear, pressure, and velocity profiles reflect some vorticity patterns in the axial direction which differed under different cases of velocity magnitudes. Results of Turbulence Streamlines The numerical calculation for the buoy profiles for different parameters to present individual turbulence streamlines are conducted in this sub-section. In Figure 20, the streamline series for the velocity cases is higher in 1.0 m/s compared to the 0.45 m/s across the x-axis and y-axis. Furthermore, in Figure 21, the streamline series for the pressure cases is higher in 1.0 m/s compared to the 0.45 m/s across the x-axis and y-axis. Similarly in Figure 22, the streamline series for the turbulence kinetic energy cases is higher in 1.0 m/s compared to the 0.45 m/s across the x-axis and y-axis. These results on turbulence kinetic energy show that the turbulence model has an influence on the buoy model in this CFD study. This implies that the parameters of the turbulence model can be used to improve the performance of the buoy model. Results of Viscous Damping The calculation for the viscous damping is a very important aspect of the modelling. In this present investigation, damping estimation was considered for the CALM buoy. In principle, there are quadratic and linear contributions on damping from the buoy motion responses in the heave, roll and pitch motions. The generation of the linear contributions partly result from radiated waves and the frictional viscous effect. Conversely, the generation of the quadratic contributions are partly resulting from the eddies that separate the buoy's vertices, and the sharp edges around the buoy's skirt. Using a semi-empirical model, using MARIN's viscous study on CALM buoy [50,51], the values for the viscous damping coefficients are obtained. The viscous damping is proportional to a single drag coefficient, skirt plate geometry, wave frequency, velocity, and buoy amplitude. To compute this viscous damping semi-empirically, some model variations and parameters are considered, as given in Table 8. The semi-empirical method for viscous damping is detailed in literature [6,51]. The results are compared with the prediction in the present study, as seen in Figure 23. However, detailed computations are not given in this paper. In the study by Cozijn et al. [51], a comparison was performed between the damping values recorded in the forced oscillation experiments and the damping values computed using the pitch and heave damping model for the CALM buoy having a skirt. The drag coefficient CD employed in the heave and pitch damping model was chosen to match the measured damping values as closely as possible. This strategy, however, can only be employed when model test data is available. In some circumstances, a different approach to determining a suitable value for the drag coefficient CD is required. Empirically, the CD values are examined in greater depth by using Equations (27) and (28) to compute for the CD value for the CALM buoy's skirt from each pitch and heave oscillation test result. The accompanying KC numbers, which are defined in the equations, can then be displayed as a function of the CD values. Some studies were conducted on perforated plates, free plates, and bounded plates [52,53,[75][76][77]. 
Sarpkaya & O'Keefe [53] gave CD values for a wall-bounded plate in a 2D oscillating flow as a function of the KC number. Figure 23 depicts similar findings in a graph (black dots) from 2 publications [51,53]. The CD values obtained from the heave and pitch forced oscillation tests for the CALM buoy skirt are presented in the same figure (coloured points) for comparison. The CALM buoy skirt CD values are comparable to the CD values for a wall-bounded plate in a 2D oscillating flow, as shown in Figure 23. The CALM buoy skirt CD values, on the other hand, appear to be slightly greater than the ones reported by Sarpkaya & O'Keefe [53]. This can be explained by the fact that, despite the similar flow patterns, the flow around the buoy skirt is axi-symmetric 3D rather than 2D. It is also feasible that the drag loads on the CALM buoy bilge are not insignificant. In that instance, their contribution to total drag is included in the drag loads on the skirt in the study presented here, which could lead to an overestimation of the CD values. Recent applications have been conducted by coupling CALM buoy models using related hydrodynamic formulations in the literature [78][79][80][81]. (Figure 23 caption: The black and white background image was created by [53], based on their own experiments with wall-mounted plates in a flume tank. Cozijn et al. [51] added the orange and blue points, which are the data resulting from MARIN's forced oscillation model experiments with a CALM buoy. Amaechi C.V. [26] added the green points from experiments using WITmotion bluetooth sensors on a CALM buoy in the Lancaster University wave tank. The image was adapted with the permissions of ISOPE, MARIN and ASME. Original sources: [53], [51] and [26].) Conclusions In this research, the CALM buoy was numerically investigated under water waves using computational fluid dynamics (CFD) to investigate the vortex effect on the buoy. Some mathematical modelling and governing equations for the CALM buoy system were presented. By considering the complexity of a CALM buoy system, the boundary conditions were formed using some assumptions and some governing equations. Considerations were also made for the damping when developing the equations for the buoy and its skirt. In particular, special attention is given to the CALM buoy and the skirt in the formulation. For the CFD study, the analysis was conducted using a 2D model. The results showed peculiar characteristics, which should be considered in the design due to the drag and damping implications on it. This research also presents findings from the vortex effect, velocity, pressure, wall shear and turbulence. This study is important to enable designers to design appropriately based on the CALM buoy system, buoy geometry and CFD data. The model highlights include the following: firstly, a theoretical model is presented on motion characterization for the CALM buoy model without attached hoses. Secondly, the CALM buoy model was investigated for the vortex effect in 2D CFD under different parameters. Thirdly, different novel techniques were applied in the numerical investigation to obtain the influence of the turbulence model on the CALM buoy. Fourthly, the motion scenario was studied from the pressure, velocity, wall shear and motion responses of the CALM buoy under waves. Lastly, a prediction of the turbulence and flow vorticity on the CALM buoy's motion characteristics was presented from the study. The study presented buoy motion profiles based on the 2D CFD study and on numerical and theoretical predictions. 
From an offshore mechanical point of view, the motion characterisation phenomenon has been confirmed to exist as a result of the response from the water waves and other global loads on the CALM buoy. The study adds further insight into the behaviour of the CALM buoy in a water body and its motions. The study has presented a comprehensive formulation of the offshore structure, as is necessary for understanding its stability and dynamic behaviour. The vortex flow effect on the free-floating buoy was investigated using CFD. A further validation is recommended on an engineering application by coupling with the Orcaflex FEM, to demonstrate a working application of the mathematical formulations presented herein. However, further studies are recommended on the CFD modelling of buoy motion with moorings, based on the analytical approximations for the moving boundary of marine hoses, and on the investigation of hose-snaking behaviour. Acknowledgments: The permissions granted for the reuse of Figure 23 are also appreciated. The authors also acknowledge the feedback given on this submission by Professors George Aggidis of Lancaster University, UK and Long-Yuan Li of Plymouth University, UK. The authors appreciate the support of the reviewers and journal editors in reviewing this manuscript. Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results. Abbreviations: 2D Two-Dimensional; 3D Three-Dimensional; 6DoF Six Degrees of Freedom
2022-02-06T16:22:21.859Z
2022-02-04T00:00:00.000
{ "year": 2022, "sha1": "26e8a53613e5eb590b0aae43792ce9cfc0c38be3", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2411-5134/7/1/23/pdf?version=1644567813", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "cba94106efe419abf28878eda8ae21d38786490a", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [] }
250520489
pes2o/s2orc
v3-fos-license
Curvature-informed multi-task learning for graph networks Properties of interest for crystals and molecules, such as band gap, elasticity, and solubility, are generally related to each other: they are governed by the same underlying laws of physics. However, when state-of-the-art graph neural networks attempt to predict multiple properties simultaneously (the multi-task learning (MTL) setting), they frequently underperform a suite of single property predictors. This suggests graph networks may not be fully leveraging these underlying similarities. Here we investigate a potential explanation for this phenomenon: the curvature of each property's loss surface significantly varies, leading to inefficient learning. This difference in curvature can be assessed by looking at spectral properties of the Hessians of each property's loss function, which is done in a matrix-free manner via randomized numerical linear algebra. We evaluate our hypothesis on two benchmark datasets (Materials Project (MP) and QM8) and consider how these findings can inform the training of novel multi-task learning models. Introduction Graph networks (GNs) (Battaglia et al., 2018) are considered state-of-the-art machine learning (ML) methods for many scientific problems, including property prediction of both inorganic crystalline materials (Xie & Grossman, 2018;Park & Wolverton, 2020) (hereafter "crystals", e.g., Figure 1) and small organic molecules (Gilmer et al., 2017;Gasteiger et al., 2020) (hereafter "molecules", e.g., Figure 2); the same GN architectures have shown success in both domains (e.g., (Chen et al., 2019), who suggest the "crystal" vs. "molecule" terminology and use "material" to refer to both). A common use case might involve training a model from materials with known properties and then screening a separate dataset lacking those properties to obtain potential candidates to further investigate. Typically, property-prediction tasks are formulated as single-target regression problems, where the goal is to predict a scalar property like band gap, formation stability, elasticity, or solubility. However, practical materials design requires optimizing multiple properties of interest. For example, when designing a cell phone screen, a material with both high optical transparency and hardness would be desired. Predicting a single property can be posed as a single task for an ML model; using a single ML model to predict multiple properties is thus a type of multi-task learning (MTL). The most common family of MTL approaches is hard parameter sharing, in which a single representation space is shared across multiple tasks of interest, and task-specific networks map from that space to output predictions (Caruana, 1997). However, the latest research in novel MTL techniques largely has been focused on classification problems in the natural language processing (NLP) and computer vision (CV) domains -for example, the MultiMNIST problem (Sener & Koltun, 2018), where a model must identify two separate digits in the same picture. With some recent exceptions -e.g., Tan et al., 2021;Kong et al., 2021) -there has been little work exploring the application of MTL to multi-property GN regression problems. Some existing papers have used hard parameter sharing for material property prediction, in which a single model predicts a set of properties. 
However, results suggest that this approach has degraded performance compared to a set of single-property models - see, e.g., (Gasteiger et al., 2020), which predicts twelve quantum mechanical properties of molecules and finds that a single multi-output model performs worse than a set of single-output models. Works have explored reasons for this: (Yu et al., 2020) identifies a "tragic triad" of reasons that MTL models might perform worse compared to single-task ones, one of which is that multi-task loss functions can have high curvature, as characterized by the Hessian of the loss. However, (Yu et al., 2020) rely on first-order approximations to curvature and do not directly incorporate second-order information. (Figure 1 caption: Example of converting a periodic crystal lattice structure (BaTiO3) to a multigraph. (a): Crystal lattice structure, created using ase. (b): The corresponding multigraph representation of the crystal that is ingested by a graph network.) Concurrently, several works have applied techniques from randomized numerical linear algebra to directly probe the Hessians of large ML models (Sagun et al., 2017; Alain et al., 2019; Yao et al., 2019; Ghorbani et al., 2019; Papyan, 2020). These rely (1) on the fact that the product of a model's Hessian and another vector (a Hessian-vector product) may be efficiently obtained, without explicitly calculating the full Hessian, and (2) that given a matrix-free Hessian-vector product function, several key spectral properties of the Hessian, including its eigenvalues and trace, may be estimated. However, these papers focus only on standard image classification problems, and do not consider the MTL setting, GNs, or regression problems. In this paper, we formulate the application of curvature-informed techniques to MTL models (Section 2.1), describe the application of graph networks to the problem of multi-property prediction in the domains of materials science and chemistry (Section 2.2 and Section 2.3), and analyze training dynamics of multi-property prediction models using methods based on assessing local curvature of loss function surfaces (Section 3). Our results suggest how novel multi-property prediction models might inherently account for differences in curvature to enhance learning efficiency. Approach Let G be a space of directed multigraphs corresponding to crystal or molecule structures. A directed graph g = (V, E) consists of a set of nodes V = {v} and edges E = {(v, v′, k)}. Each node v has a node embedding x_v ∈ R^V, and each edge (v, v′, k) has an edge embedding e_{v,v′,k} ∈ R^E. A pair of nodes v and v′ may have multiple edges connecting them, and these multi-edges are indexed by k = 1, . . . , K_{v,v′}. Furthermore, a graph g has T properties y = (y_1, . . . , y_T) ∈ Y, here assumed to all be scalars. We require a model that can predict all targets y for a given g. This entails minimizing a loss with respect to a set of neural network weights {θ_sh, θ_1, . . . , θ_T}, where θ_t is used only in predicting property t, and θ_sh is shared across predictions. The form of this problem and its neural network is discussed in Section 2.2. First we consider the loss's curvature. Curvature assessment techniques We view the shared parameters θ_sh of a graph network as a flattened vector in R^P; let L_t : R^P → R be a property-specific loss function of θ_sh, where the property-specific parameters θ_1, . . . , θ_T are held constant. 
Typically, machine learning models consider only first-order derivative information of L_t when training neural networks. However, second-order information is also useful in optimization problems - consider how a quasi-Newton method like limited-memory BFGS (L-BFGS) (Liu & Nocedal, 1989) is more efficient than a first-order method like gradient descent. This suggests that additional insight into the training problems of ML models can be found by using second-order information. The curvature of a function L_t is characterized by its Hessian H_t = ∇²_{θ_sh} L_t ∈ R^{P×P} for a weight vector θ_sh ∈ R^P. For typical deep neural networks, P ≈ 10^6, and calculating H_t directly is computationally infeasible due to storage constraints (H_t will have ~10^12 entries). However, the product of H_t with a vector v is computable in approximately the same time-complexity as calculating the gradient of L_t with respect to θ (Pearlmutter, 1994). This enables calculation of properties of H_t that can be obtained from observing its action on a chosen vector v (Yao et al., 2019; Ghorbani et al., 2019). In particular, the trace of H_t and an approximate probability density function of its eigenvalues can be efficiently estimated. The trace of H_t is both the sum of its eigenvalues and L_t's Laplacian (i.e., tr H_t = Σ_p ∂²L_t(θ)/∂θ_sh,p²). Thus, the trace of H_t can be viewed as a measurement of the complexity of its curvature (Yao et al., 2019). The sign of the trace also indicates the sign of L_t's local curvature. The trace of H_t may be estimated with the expectation tr H_t = E_{v∼P_R}[vᵀ H_t v], where P_R is the distribution of random vectors in R^P with iid Rademacher components (Avron & Toledo, 2011). Having access to all of H_t's eigenvalues gives us a full picture of L_t's curvature. These eigenvalues are often represented as a function ψ called the spectral density or density of states, given by ψ(t) = (1/P) Σ_p δ(t − λ_p), where λ_1 ≤ · · · ≤ λ_P are the eigenvalues of H_t, and δ is the Dirac distribution (Lin et al., 2016). However, the weight vector θ_sh is too high-dimensional for all of H_t's eigenvalues to be directly calculated in practice. Methods like power iteration can be used to estimate the large-magnitude eigenvalues of a Hessian (Yao et al., 2019), but they scale poorly as its dimensionality increases. A solution is to first use the Lanczos algorithm to estimate a small number P̃ ≪ P of eigenvalues of H_t and then smooth these estimated values into a continuous approximation to the discrete density ψ by convolving the values with a Gaussian kernel (Golub & Welsch, 1969; Ghorbani et al., 2019). This yields a continuous function ψ̃(t) that approximates ψ. See Appendix E for an example of estimating ψ̃ for a simplified loss function. Interpreting estimated spectral densities remains a challenging task: (Ghorbani et al., 2019) and (Yao et al., 2019) claim that a spectral density should be "smooth" (i.e., concentrated within a dense region) and consider the effect of different network architectures on density smoothness. (Alain et al., 2019) argues that having a large concentration of negative eigenvalues can lead to inefficient training, because most optimizers fail to leverage this local negative curvature. (Papyan, 2020) relates the outlier structure of a Hessian's eigenvalues to the number of classes in a dataset. Compared to CV problems, graph-structured datasets have a number of interesting attributes. For example, a graph input has a variable number of node and edge features, which complicates the GN learning problem. 
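As a concrete illustration of the matrix-free quantities above, a minimal JAX sketch of a Hessian-vector product (forward-over-reverse differentiation) and a Hutchinson-style trace estimate is given below. This is not the code used in our experiments: the loss is a toy quadratic, the parameters are a single flat vector (for a real GN the parameter pytree would first be flattened, e.g. with jax.flatten_util.ravel_pytree), and the sample count mirrors the roughly 500 samples reported below.

import jax
import jax.numpy as jnp

def hvp(loss_fn, params, v):
    # Hessian-vector product via forward-over-reverse differentiation:
    # differentiate grad(loss) along direction v without forming the Hessian.
    return jax.jvp(jax.grad(loss_fn), (params,), (v,))[1]

def hutchinson_trace(loss_fn, params, key, num_samples=500):
    # Estimate tr(H) = E_v[v^T H v] with iid Rademacher probe vectors v.
    dim = params.shape[0]
    def one_sample(k):
        v = jax.random.rademacher(k, (dim,), dtype=params.dtype)
        return jnp.vdot(v, hvp(loss_fn, params, v))
    keys = jax.random.split(key, num_samples)
    return jnp.mean(jax.vmap(one_sample)(keys))

# Toy usage on a quadratic loss whose Hessian trace is known exactly.
A = jnp.diag(jnp.arange(1.0, 11.0))           # Hessian of the loss below is A
loss = lambda x: 0.5 * x @ A @ x
x0 = jnp.ones(10)
key = jax.random.PRNGKey(0)
print(hutchinson_trace(loss, x0, key), jnp.trace(A))   # both approx 55.0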
Furthermore, in contrast to the use of very deep networks in CV problems, increasing depth in graph networks has often not been found to be effective (see, e.g., discussion in (Godwin et al., 2022)). Hessian-based information has the potential to inform some of these comparisons. Note that the estimations of both the trace and the spectral densities are based on random quantities. For trace, multiple vectors v must be sampled and the quantity v T H t (θ)v computed, and then convergence in mean can be assessed. In practice, we find that approximately 500 samples are sufficient. For spectral density, each Lanczos iterate is initialized as a random Gaussian vector. We adapt an implementation of the Lanczos algorithm and smoothing process developed by (Ghorbani et al., 2019), in which we reorthogonalize the Lanczos iterates during each step of estimation to promote numerical stability. We follow (Yao et al., 2019) and use P = 100, which is more than the P = 90 that (Ghorbani et al., 2019) validate by comparing to an exact calculation of every eigenvalue of a smaller network. Graph networks for property prediction Letŷ : G → Y be a neural network parameterized by θ. We follow (Sener & Koltun, 2018) in splitting θ into a set of shared weights θ sh , and a set of property-specific weights θ 1 , . . . , θ T . Then the predicted t th property iŝ where z : G → R D is a task-independent GN (Battaglia et al., 2018), and each f t : R D → R is a task-specific feedforward network. We briefly outline the functionality of our graph network z, which maps an input graph g to a graph-level representation vector u M . First, the multigraph g is given a global feature u 0 ∈ R U that is initialized as a vector of ones, and, similarly to (Chen et al., 2019), the node features x v and edge features e v,v ,k are linearly projected: where W V and W E are matrices learned during training. Then information is propagated across the multigraph during m = 1, . . . , M messagepassing steps. The m th message-passing step proceeds as follows: First, the edge features are updated: where φ E is a feed-forward neural network, and cat is the concatenation operator. Then node updates are collected. For every node v, first edge updates are calculated: where the first summation is over the tuples (v , k) such that (v, v , k) is an edge for g, and the second summation is defined similarly. With these updates, the new node representation for v is given by where φ V is a feed-forward neural network. Finally, the global feature is updated: where φ U is a feed-forward neural network, and each feature vector x m v , e m v,v ,k , u m is processed with a layer normalization (Ba et al., 2016) layer. After M steps, the final global state u M is fed into each property-specific network f t . Our graph networks are implemented in the jraph 5 framework that is built on jax 6 . We use flax 7 to implement component neural networks φ V , φ E , and φ U . Because we seek to evaluate second-order derivative information, we ensure that our neural networks φ V , φ E , φ U are smooth functions with respect to their weights by using tanh activation functions. Further details on specific hyperparameters used to instantiate models are given in Appendix A. We impose on each property a loss function L t (here assumed to be mean squared loss for all t), and we collect a training dataset of multigraph data points D = {(g n , y n )} N n=1 . 
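Before stating the full optimization problem, the interplay between the shared representation, the property-specific heads and the weighted multi-task loss can be sketched as follows. This is a simplified stand-in rather than our implementation: the graph network z is not reproduced (precomputed graph-level representations are assumed), the heads are single linear layers instead of the feed-forward networks f_t, and all shapes and data are illustrative.

import jax
import jax.numpy as jnp

# Hypothetical stand-in for the task-independent GN z(g; theta_sh): here the
# D-dimensional graph-level representations u^M are assumed to be precomputed
# for a batch of graphs and stored in `reps` (shape [N, D]).
def multitask_loss(head_params, reps, targets, mu=None):
    # head_params: list of (W_t, b_t) pairs, one linear head per property t
    # targets: array of shape [N, T] with normalized property values
    # mu: optional per-task weights (mu_t = 1 for all t in our setting)
    preds = jnp.stack([reps @ W + b for (W, b) in head_params], axis=-1)
    per_task = jnp.mean((preds - targets) ** 2, axis=0)      # MSE per property
    if mu is None:
        mu = jnp.ones_like(per_task)
    return jnp.sum(mu * per_task), per_task

# Toy usage with random data; in our models the heads f_t are small
# feed-forward networks (implemented in flax), not the linear layers above.
key = jax.random.PRNGKey(0)
N, D, T = 32, 64, 3
reps = jax.random.normal(key, (N, D))
targets = jax.random.normal(jax.random.fold_in(key, 1), (N, T))
heads = [(jnp.zeros((D,)), jnp.zeros(())) for _ in range(T)]
total, per_task = multitask_loss(heads, reps, targets)
grads = jax.grad(lambda h: multitask_loss(h, reps, targets)[0])(heads)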
Then we solve the minimization problem min_θ Σ_t μ_t L_t(θ_sh, θ_t), where L_t(θ_sh, θ_t) = Σ_n |ŷ_t(g_n; θ_sh, θ_t) − y_t^n|² are task-specific losses and θ = {θ_sh, θ_1, . . . , θ_T}. We solve this minimization with stochastic gradient descent, using the optax implementation of AdamW (Loshchilov & Hutter, 2019) to choose stepsizes; we handle the potential difference in scales across properties by normalizing all targets prior to training and set μ_t = 1 for all t. Further details on the specifics of model training are given in Appendix B. Data sources We evaluate our graph network curvature assessment techniques on datasets from two scientific domains: materials science (crystals) and chemistry (molecules). For crystals, our data featurization follows (Park & Wolverton, 2020); for molecules, it follows (Gilmer et al., 2017). To reduce the complexity of our considered problem space, we choose to use simple featurizations (one-hot encodings of atom types as node featurizations), but our methods are compatible with other featurizations (e.g., hand-crafted node descriptors like those provided by rdkit). Further details about the data used are available in Appendix C. For materials science, we collect data from the Materials Project (MP) (Jain et al., 2013), which contains results of density functional theory (DFT) calculations for different crystals. Crystal records contain compositional and structural information, as well as some related properties. Each crystal's structure information is captured in a Crystallographic Information File (CIF), which we convert into a multigraph using the VoronoiNN function from pymatgen, following the approach of (Park & Wolverton, 2020). As crystals are periodic structures, this process yields multiple edges between many given pairs of nodes (Xie & Grossman, 2018; Park & Wolverton, 2020; Sanyal et al., 2018) (e.g., Figure 1). As targets, we use several assessments of a crystal's elastic properties: the Voigt-Reuss-Hill (Hill, 1952) calculations for the shear (G_VRH) and bulk (K_VRH) moduli, both measured in units of GPa, as well as the isotropic Poisson ratio μ, a dimensionless quantity. These properties are inherently coupled - μ can be calculated as a function of G_VRH and K_VRH. Thus, an ML model that predicts all three serves as a test of how well it can learn underlying physical relationships. A one-hot encoding of atom type is used to obtain initial node features x_v, and four bond-related properties calculated by pymatgen are used as edge features, resulting in a dataset of 10,500 crystal structures with known elastic properties. For chemistry, we use the QM8 (Ramakrishnan et al., 2015) dataset of organic molecules. We use the rdkit package to convert SMILES strings (Weininger, 1988) into molecular structure graphs (see, e.g., Figure 2). As targets, we use the first and second transition energies, E_1 and E_2, and the first and second oscillator strengths, f_1 and f_2. QM8 contains several versions of transition energies and oscillator strengths that have been calculated via different levels of DFT. We use the predictions of the approximate coupled-cluster (CC2) (Hättig & Weigend, 2000) method as our targets, as these are treated as the "ground truth" in (Ramakrishnan et al., 2015). A one-hot encoding of atom type is used to obtain initial node features x_v, and a one-hot encoding of bond type is used to obtain initial edge features e_{v,v′,k}. Note that, in Section 2.2, we describe our input data points g as directed multigraphs. 
However, for our actual data points, the edges are not inherently directed - bonds are symmetric. Thus, we duplicate each edge feature (i.e., e_{v,v′,k} = e_{v′,v,k} for all edges) during data preprocessing. Results We demonstrate the application of our curvature-assessment techniques (Section 2.1) on trained GNs (Section 2.2) in two application domains: materials science and chemistry (Section 2.3). Although here we focus on GNs (Battaglia et al., 2018), our assessment techniques are applicable to the broad family of graph neural networks used in property prediction tasks. Figure 3 shows training and test set errors for each dataset. These results do not necessarily align with domain intuition, suggesting that the models do not leverage scientific knowledge in their learned representations. For example, for MP, the test set error for the Poisson ratio is higher than that of G_VRH or K_VRH, despite the fact that the Poisson ratio can be calculated as a function of G_VRH and K_VRH. This is interesting and suggests that the GNs are not fully leveraging known scientific knowledge. For QM8, the oscillator strengths f_1 and f_2 have the lowest and highest test set errors, respectively. Next we summarize the curvature of the loss functions using the trace of the Hessian of each task's loss. In Figure 4, we show that, despite the generally decreasing training error, the curvature of each property's loss surface, as measured by the trace of its Hessian, is highly variable. In particular, both datasets show high heterogeneity across properties and across training epochs in their estimated traces. Our observations here do not align with prior work that has analyzed Hessian traces for CV problems (Yao et al., 2019). In those works, traces increased monotonically during training and were consistently positive. Here, our estimated traces often flip between being positive and negative. The Hessian trace is a high-level summary of the curvature of a loss surface; for a more granular examination, we estimate the spectral densities of each property's Hessian. In Figure 6a in Appendix D, we show that the spectral densities are similarly variable and often feature the presence of outliers that briefly occur during training. These outliers vary across properties. For the QM8 dataset, we zoom in on the range of eigenvalues where most density is concentrated in Figure 5. Similar to (Yao et al., 2019; Ghorbani et al., 2019), we see that most density is concentrated near 0. This suggests that there is a great deal of redundancy in the latent spaces learned by these models, and their true dimensionality is likely significantly less than the full P parameters of the θ_sh weight vector. Unlike these works, the spectral densities are more symmetric around 0. This matches Figure 4, since the Hessian trace (the sum of all its eigenvalues) varies between being very positive and very negative. The exact spectral densities vary across properties, even at the end of training. Figure 4. We track property-specific Hessian traces tr H_t while training GNs for the MP and QM8 prediction problems. The loss surface of the MTL problem for GNs appears to be complicated and varying across properties. Trace signs (indicating the sign of the curvature) and trace magnitudes vary across training and across properties, the latter by orders of magnitude. In prior work that looked at traces of Hessians (Yao et al., 2019) in CV tasks, traces were found to be less variable. Figure 5. Snapshots of the spectral densities of property-specific Hessian losses for the QM8 dataset. 
All properties start with comparable densities closely concentrated near 0; but, as training finishes, the densities spread out as the loss surface increases in complexity. Each property has a different spread of eigenvalues, suggesting that the loss surfaces do vary in curvature by property. Note that this plot trims the range of the x-axis to remove outlier eigenvalues. The full spectra are displayed in Figure 6 in Appendix D. For these plots, the x-axis gives the range of eigenvalues for the loss function, and the y-axis gives the densityψ(t) of eigenvalues concentrated at that point t (as described in 2.1). Discussion In this work, we have posited that the performance of multitask GNs for property prediction may be stymied by underlying variation in property-specific loss-surface curvatures. Some existing work considers this question (Yu et al., 2020) in a simplified theoretical setting, but, to the best of our knowledge, no one has empirically investigated the Hessian properties of multi-task GNs. In two domains -chemistry and materials science -our results suggest that loss surface curvature varies across each modeled property. In order to assess curvature without calculating the full Hessian, we build on recent work that uses matrix-free methods to estimate Hessian properties (Alain et al., 2019;Yao et al., 2019;Ghorbani et al., 2019;Papyan, 2020). We extend their results by considering a novel type of learning problemmulti-output scalar regression -and a novel class of neural network -graph networks. Our results echo some previous results -the majority of a Hessian's spectral density is concentrated near 0 -and diverges in other respects -Hessian properties appear far noisier and more variable for GNs than for CV tasks. We leave further investigation of this question to future work. Potential explanations include the difference in loss functions for regression vs. classification tasks, a higher level of noise in the datasets we examined, and some special characteristic of GN vs. other neural network architectures. We have here focused on a specific but representative subset of GN prediction problems, but considerable potential variation exists. For example, many recent GN architectures incorporate a notion of equivariance (Satorras et al., 2021;Gasteiger et al., 2020) into their feature-extraction models, and this might impact their curvature in different ways. In addition, we have not evaluated how the choice of optimizer (e.g., stochastic gradient descent vs. Adam (Kingma & Ba, 2014) vs. AdamW) impacts the curvature properties of a learned loss surface. Our curvature assessment enable several future research directions. First, our analysis here was primarily empirical. The phenomena we identify here (a diversity of curvatures across multi-task loss functions) and the previouslyobserved degradation of performance for multi-task models (Gasteiger et al., 2020) could be connected by a theoretical justification. Similarly, we find that curvature properties of GNs appear to be much noisier and more variable than the properties of other network architectures, and we currently lack a theoretical justification of why. Intermediate steps might entail investigating the curvature properties of MTL methods that, in other domains, do out-perform single-task models (e.g., (Sener & Koltun, 2018)). Existing work in analyzing curvature for computational geometry (e.g. (Goldman, 2005)) might provide techniques to build upon. A. Model specifics We use M = 5 message-passing steps. 
Node, edge, and global features are projected into a 64-dimensional feature space. φ_V and φ_E have two layers with 256 units and a skip connection, followed by an output layer of 64 units. φ_U has two layers with 192 units and a skip connection, followed by an output layer of 64 units. All networks use tanh activations. B. Training specifics Models are trained for 512 epochs using the AdamW (Loshchilov & Hutter, 2019) optimizer and the default hyperparameters used in the optax implementation. The initial learning rate is set to 10^-3, with an exponential decay rate of 0.997 applied every epoch after the first 256 epochs. C. Data We summarize our datasets in Table 1. We scraped Materials Project for crystal records present in it as of October 2020 using the MPRester class from the pymatgen package. The Supplemental Information contains a table of MP IDs used in this study. The raw entries of G_VRH, K_VRH, and Poisson ratio µ contained several anomalously small and large values, so we removed entries with values less than the 5th percentile or greater than the 95th percentile of the obtained elastic properties. This left us with a total dataset of 10,500 crystals. Summary statistics for the targets used are given in Table 2. Furthermore, G_VRH and K_VRH were log-transformed to create a more normal distribution. Roughly 70% of the dataset (7,400 crystals) was used as training data, and all targets were standardized prior to training. As initial node features, we used a one-hot encoding of atom type. For initial edge features, we used four features calculated by pymatgen: area, face dist, solid angle, and volume. Following similar work in applying graph neural networks to crystals (Xie & Grossman, 2018), we discretize these four features based on deciles. We use the version of QM8 hosted by MoleculeNet (Wu et al., 2018), except that we drop molecules with negative oscillator energies. Summary statistics for the targets are given in Table 3. As initial node features, we use one-hot encodings of atom type. As initial edge features, we use one-hot encodings of bond type. Roughly 70% of the dataset (15,300 molecules) was used as training data, and all targets were standardized prior to training. Figure 6. We visualize snapshots of property-specific estimated spectral densities at selected training points. For both MP and QM8 models, most of the spectral density is concentrated near 0, especially after random initialization. Occasionally, very high-magnitude eigenvalues, both positive and negative, spike up for a short period. Our results here elaborate on Figure 4 - the high and varying curvature of property-specific losses is driven by singularly large eigenvalues that vary across properties. For these plots, the x-axis gives the range of eigenvalues for the loss function, and the y-axis gives the density ψ(t) of eigenvalues concentrated at that point t (as described in Section 2.1). E. Example spectrum estimation We consider a loss function defined by L(x) = (1/2) x^T A x, where A = (1/2)(B + B^T), for a matrix B with entries sampled iid from a standard normal distribution. In this case, the Hessian is given by A and so its estimated eigenvalues can be compared to its true eigenvalues. In Figure 7, we plot the results of a sample calculation for a 1,000 × 1,000 matrix. We show that the estimated density has reasonable similarity to the true distribution, and the estimated trace also matches the true trace. Figure 7. An example showing the use of the Lanczos algorithm to estimate the spectral density of a Hessian.
The left figure is the distribution of the true eigenvalues, and the right is the estimated spectral density. The estimated values accurately characterize the maximal and minimal eigenvalues, and the interior shows qualitative agreement with the known distribution.
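To make the Appendix E example concrete, the following is a minimal, self-contained sketch (plain Python/NumPy rather than the training code used for the GN experiments) of the two matrix-free estimates discussed above: a Hutchinson estimator for tr(H) and a short Lanczos iteration whose Ritz values approximate the extremal eigenvalues of H. The quadratic loss L(x) = (1/2) x^T A x is used so the estimates can be checked against the exact spectrum; the function names, probe counts, and iteration counts are illustrative choices, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Appendix-E-style test Hessian: A = (B + B^T) / 2 with i.i.d. standard-normal B.
n = 1000
B = rng.standard_normal((n, n))
A = 0.5 * (B + B.T)

def hvp(v):
    """Hessian-vector product for L(x) = 0.5 * x^T A x (the Hessian is A).
    For a real network this would be a Pearlmutter-style autodiff HVP."""
    return A @ v

def hutchinson_trace(hvp, dim, num_probes=64):
    """Matrix-free trace estimate: E[v^T H v] = tr(H) for Rademacher probes v."""
    total = 0.0
    for _ in range(num_probes):
        v = rng.choice([-1.0, 1.0], size=dim)
        total += v @ hvp(v)
    return total / num_probes

def lanczos_ritz_values(hvp, dim, num_steps=80):
    """Plain Lanczos with full reorthogonalization; the eigenvalues of the
    tridiagonal matrix T (Ritz values) approximate H's extremal eigenvalues."""
    Q = np.zeros((dim, num_steps))
    alpha = np.zeros(num_steps)
    beta = np.zeros(num_steps)
    q = rng.standard_normal(dim)
    q /= np.linalg.norm(q)
    k_used = num_steps
    for k in range(num_steps):
        Q[:, k] = q
        w = hvp(q)
        alpha[k] = q @ w
        w = w - alpha[k] * q
        if k > 0:
            w = w - beta[k - 1] * Q[:, k - 1]
        # Full reorthogonalization against all previous Lanczos vectors.
        w = w - Q[:, :k + 1] @ (Q[:, :k + 1].T @ w)
        beta[k] = np.linalg.norm(w)
        if beta[k] < 1e-10:
            k_used = k + 1
            break
        q = w / beta[k]
    T = (np.diag(alpha[:k_used])
         + np.diag(beta[:k_used - 1], 1)
         + np.diag(beta[:k_used - 1], -1))
    return np.linalg.eigvalsh(T)

true_eigs = np.linalg.eigvalsh(A)
ritz = lanczos_ritz_values(hvp, n)
print("true trace       :", np.trace(A))
print("Hutchinson trace :", hutchinson_trace(hvp, n))
print("true  min/max eig:", true_eigs[0], true_eigs[-1])
print("Ritz  min/max eig:", ritz[0], ritz[-1])
```

The full spectral-density estimate referenced in the figure (stochastic Lanczos quadrature) goes one step further: for each random starting vector it places Gaussian bumps at the Ritz values, weighted by the squared first components of T's eigenvectors, and averages the resulting densities over several starting vectors.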
2022-07-14T14:31:55.452Z
2022-08-02T00:00:00.000
{ "year": 2022, "sha1": "559dda94cce12e426cc5f28892936f35207537db", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "b737e1c05863454c1bc2536cdd550f365436e54a", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
259025756
pes2o/s2orc
v3-fos-license
Effective cell membrane tension protects red blood cells against malaria invasion A critical step in how malaria parasites invade red blood cells (RBCs) is the wrapping of the membrane around the egg-shaped merozoites. Recent experiments have revealed that RBCs can be protected from malaria invasion by high membrane tension. While cellular and biochemical aspects of parasite actomyosin motor forces during the malaria invasion have been well studied, the important role of the biophysical forces induced by the RBC membrane-cytoskeleton composite has not yet been fully understood. In this study, we use a theoretical model for lipid bilayer mechanics, cytoskeleton deformation, and membrane-merozoite interactions to systematically investigate the influence of effective RBC membrane tension, which includes contributions from the lipid bilayer tension, spontaneous tension, interfacial tension, and the resistance of cytoskeleton against shear deformation on the progression of membrane wrapping during the process of malaria invasion. Our model reveals that this effective membrane tension creates a wrapping energy barrier for a complete merozoite entry. We calculate the tension threshold required to impede the malaria invasion. We find that the tension threshold is a nonmonotonic function of spontaneous tension and undergoes a sharp transition from large to small values as the magnitude of interfacial tension increases. We also predict that the physical properties of the RBC cytoskeleton layer—particularly the resting length of the cytoskeleton—play key roles in specifying the degree of the membrane wrapping. We also found that the shear energy of cytoskeleton deformation diverges at the full wrapping state, suggesting the local disassembly of the cytoskeleton is required to complete the merozoite entry. Additionally, using our theoretical framework, we predict the landscape of myosin-mediated forces and the physical properties of the RBC membrane in regulating successful malaria invasion. Our findings on the crucial role of RBC membrane tension in inhibiting malaria invasion can have implications for developing novel antimalarial therapeutic or vaccine-based strategies. where s max is the maximum membrane arclength that adheres to the merozoite.Additionally, for axisymmetric coordinates, the principal extension ratios can then be written as where (.) ′ = d(.)ds and s 0 is the arclength along the undeformed shape of the axisymmetric skeleton mapping to an unknown position s on the deformed shape. The egg shape of an archetypal merozoite in an axisymmetric coordinate can be parametrized as [1] ( where where 0 < ϕ < 2π and 0 < θ < π.Using Eq.S4, the radius of merozoite (R) as a function of angle θ can be written as Having the radius of merozoite, we can find the radial distance from the axis of rotation (r) and the elevation from the reference plane (z) given by Using Eq.S6 and the definition of axisymmetric coordinates, we have where k 1 and k 2 are the surface principal curvatures and ds dθ = (dr/dθ) 2 + (dz/dθ) 2 .Eq. 
S7 allows us to find the mean curvature H along the merozoite surface as a function of θ, given as where (˙) = d(.)/dθ. The integral over the adhered area (Eq.S1) and the extension ratios (Eq.S2) can also be calculated as a function of θ, where θ_max is the maximum wrapping angle. Assuming that actomyosin motors apply forces tangentially along the membrane surface, in axisymmetric coordinates, the net radial force (f_r = f cos(θ)) is zero. Thus, only the axial component of the actomyosin forces (f_z = f sin(θ)) pushes the merozoite forward and the work on the membrane (Eq. 3) is simplified as Using Eqs.S1, S9a, and S10, the change in the energy of the bilayer/cytoskeleton due to the adhesion of the merozoite and the deformation of the bilayer/cytoskeleton (Eq.7) can be written as a function of θ, where Entropic energy of spectrin filaments orientations Here, in Eq.S11, we assumed that the force density applied by the actomyosin motor (f) is constant all along the area of the adhered merozoite. Incompressible bilayer and cytoskeleton Let us assume that a flat circular patch of a lipid bilayer and relaxed cytoskeleton with radius (s_0) is deformed to fit the merozoite contour in the adhesive region. Thus, for an incompressible bilayer/cytoskeleton, the area conservation can be written as Eq. S16 allows us to find s_0 and calculate the extension ratios using Eq.S9b, which simplifies as which is consistent with zero local area strain (α = λ_1 λ_2 − 1 = 0) for an incompressible bilayer/cytoskeleton [2]. Additionally, for an incompressible cytoskeleton, the shear modulus µ is simplified as [3] where Numerical implementation For an egg-shaped merozoite parametrized by Eq.S3, we numerically calculate the change in the energy of the bilayer/cytoskeleton as a function of the wrapping angle θ (Eq.S11). Then, for any given set of constant parameters, we find an angle θ* at which the invasion state becomes an energy minimum. Analytical approximations In this section, we explore the analytical solution for the minimum energy state, ignoring the effects of the membrane-cytoskeleton energy and modeling the merozoite as a spherical particle with radius a. In this condition, the change in the energy of the system (Eq.S11) can be written as where y = 1 − cos(θ). By taking ∂∆E/∂y = 0, we have Considering our definition for a completely wrapped state (θ* > π/2), we can find the transition condition to the completely wrapped state by setting y = 1 in Eq.S20. Below, we simplify Eq.S20 for different conditions.
• Case 1: Relationship between lipid bilayer tension and adhesion strength Considering the condition that γ = 0, f = 0, and H 0 = 0, Eq.S20 simplifies as Eq. S21 suggests that the particle can get fully wrapped with increasing adhesion strength. • Case 2: Relationship between lipid bilayer tension and spontaneous curvature Considering the condition that γ = 0 and f = 0, Eq.S20 gives Based on Eq.S22, when the induced spontaneous curvature is smaller than the curvature of the particle (H 0 < 1/a), the induced spontaneous curvature assists the progress of complete particle wrapping.However, larger spontaneous curvatures (H 0 > 1/a) impede the complete wrapping transition. • Case 3: Relationship between lipid bilayer tension and interfacial forces As can be seen, Eq.S20 has a symmetric barrier at θ = π/2 (the line tension term vanishes for y = 1).Thus, to find the analytical approximation, we set y = 1 ± ϵ, where ϵ is a small number, and expanded the Eq.S20 until the first order for the case that f = 0 and H 0 = 0 Based on Eqs.S23, in the first half of wrapping (θ < π/2), a line tension prevents the complete membrane wrapping process.However, once the equator is passed (θ > π/2), a line tension accommodates the particle encapsulation.It should be mentioned that with no line tension (γ = 0), a non-wrapped state (θ * = 0) is always a local minimum of ∆E(θ).However, the line tension and actomyosin force energy terms scale as √ y and their derivatives diverge at θ = 0.This means that a line tension can create an energy barrier with no minimum energy state between 0 ⩽ θ ⩽ π in which the particle even does not adhere to the membrane. • Case 4: Motor forces required for a complete wrapping as a function of membrane physical properties To calculate the minimum motor forces that are required for a complete particle wrapping (based on our definition θ * > π/2), we substitute y = 1 + ϵ in Eq.S20 and find the force density (f ) as The total force in the z direction (F z ) is obtained as Substituting Eq.S24 into Eq.S25 for a complete wrapping condition (y = 1 + ϵ), we have Based on Eq.S26, for a tensionless membrane (σ = 0), with no adhesion energy (ω = 0), no line tension (γ = 0), and no spontaneous curvature (H 0 = 0), a minimum force of F z = 3 pN is required for a complete wrapping of a spherical particle.This is consistent with the calculated magnitude of actomyosin forces required for a merozoite invasion by Dasgupta et al [1].It should be mentioned that Eq.S26 is derived for the minimum axial force needed for a complete invasion.This means if the right hand side of Eq.S26 becomes negative, the physical forces are enough to push the merozoite into the RBC and thus F z = 0.The effects of physical properties of the cytoskeleton on the efficiency of malaria invasion, σ bilayer = 0.63 pN/nm.(A) A discontinuous transition from a completely to a partially wrapped state followed by a continuous transition from a partially to a non-wrapped wrapped state with increasing L 0 from 25 nm to 85 nm.σ = 0.1 pN/nm, ω = 0.8 pN/nm, p = 25 nm, and L max = 200 nm.(B) A continuous transition from a partially to a completely wrapped state followed by a discontinuous transition from a partially to a completely wrapped state with increasing L max from 180 nm to 210 nm.σ = 0.1 pN/nm, ω = 0.8 pN/nm, p = 25 nm, and L 0 = 35 nm.(C) A discontinuous transition from a partially wrapped to a completely wrapped state with increasing the persistence length of spectrin p. σ = 0.1 pN/nm, ω = 0.8 pN/nm, L 0 = 35 nm, and L max = 200 nm. 
Figure A: The change in the energy of the RBC bilayer with no cytoskeleton layer as a function of wrapping angle (θ) for a fixed ω = 2.5 pN/nm and three different bilayer tensions. The change in the energy is minimized in a completely wrapped state (θ* = π) independent of the magnitude of the bilayer tension. Figure B: θ* as a function of interfacial tension for wrapping of an egg-shaped merozoite without the cytoskeleton layer. (A) A discontinuous transition from θ* ∼ 5π/6 to a fully wrapped state (θ* = π) with an increase in the magnitude of interfacial tension, ω = 1 pN/nm and σ_bilayer = 0.6 pN/nm. (B) A discontinuous transition from θ* ∼ 5π/9 to a non-adhered state with an increase in the magnitude of interfacial tension, ω = 0.4 pN/nm and σ_bilayer = 0.6 pN/nm. (C) A discontinuous transition from a partially wrapped state to a non-adhered state with an increase in the magnitude of interfacial tension, ω = 0.5 pN/nm and σ_bilayer = 1 pN/nm. Figure D: Minimum axial force (F_z) required for a complete merozoite entry as a function of (A) bilayer tension, (B) spontaneous tension, (C) adhesion strength, and (D) interfacial tension. p = 25 nm, L_0 = 35 nm, and L_max = 200 nm. The gray circles show the results that we obtained from the energy minimization (Eq.S11). The dotted line represents the fitted curves and the solid blue line indicates the analytical approximation for the motor-driven force (Eq.S26). The green arrow demonstrates the increase in the magnitude of the axial force compared to the analytical approximations because of the cytoskeleton resistance against deformation. (A) F_z increases as a linear function of bilayer tension. The dashed line shows the linear dependence on the bilayer tension by fitting to a line (Aσ_bilayer + B), where A = 2.48 and B = 4.73 with R² = 0.99. (B) F_z varies as a linear function of spontaneous tension. The dashed line shows a linear dependence on the spontaneous tension by fitting to a line (Aσ_spon + B), where A = 5.7, B = -0.09 with R² = 0.99. (C) F_z decreases as a linear function of adhesion strength. The dashed line shows the linear dependence on the adhesion strength by fitting to the line (Aω + B), where A = -2.93 and B = 7.4 with R² = 0.99. (D) Switch-like increase in axial force from F_z = 0.52 nN to F_z = 0.7 nN with increasing magnitude of interfacial tension.
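Because the explicit forms of Eqs. S11 and S19-S20 are not reproduced above, the following is a minimal numerical sketch of the "Numerical implementation" step in the simplified spherical-particle limit. It scans a wrapping energy ∆E(θ) assembled from the standard bending, adhesion, tension, and line-tension terms used in the particle-wrapping literature (these term-by-term expressions and all parameter values are assumptions of the sketch, not a transcription of the paper's equations), finds the minimizing angle θ*, and classifies the state using the convention above that θ* > π/2 counts as complete wrapping.

```python
import numpy as np

# Illustrative parameters (order-of-magnitude placeholders, not fitted values).
kappa = 8.2e-20    # bending rigidity, J (about 20 kBT)
a     = 1.0e-6     # particle radius, m
sigma = 1.0e-4     # membrane tension, N/m   (0.1 pN/nm)
omega = 2.0e-4     # adhesion strength, J/m^2 (0.2 pN/nm)
gamma = 5.0e-12    # line (interfacial) tension, N

def wrapping_energy(theta):
    """Assumed wrapping energy of a spherical particle vs. wrapping angle theta,
    written in terms of y = 1 - cos(theta):
      bending  ~  4*pi*kappa*y                 (spherical cap, mean curvature 1/a)
      adhesion ~ -2*pi*a**2*omega*y            (gain over the wrapped cap area)
      tension  ~   pi*a**2*sigma*y**2          (excess area pulled against tension)
      line     ~  2*pi*a*gamma*sqrt(y*(2-y))   (length of the contact line)
    """
    y = 1.0 - np.cos(theta)
    e_bend = 4.0 * np.pi * kappa * y
    e_adh = -2.0 * np.pi * a**2 * omega * y
    e_ten = np.pi * a**2 * sigma * y**2
    e_line = 2.0 * np.pi * a * gamma * np.sqrt(np.clip(y * (2.0 - y), 0.0, None))
    return e_bend + e_adh + e_ten + e_line

# Scan theta in [0, pi], locate the energy minimum, and classify the wrapping state.
thetas = np.linspace(0.0, np.pi, 2001)
energies = wrapping_energy(thetas)
theta_star = thetas[np.argmin(energies)]

if np.isclose(theta_star, 0.0):
    state = "non-wrapped"
elif theta_star > np.pi / 2.0:
    state = "completely wrapped (theta* > pi/2)"
else:
    state = "partially wrapped"

print(f"theta* = {theta_star:.3f} rad -> {state}")
```

The same scan-and-minimize loop applies unchanged once ∆E(θ) is replaced by the full egg-shaped merozoite energy of Eq. S11, including the cytoskeleton shear contribution.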
2023-06-03T13:11:48.118Z
2023-05-31T00:00:00.000
{ "year": 2023, "sha1": "1747292036d87d1db04650bfebbc60508ec71597", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/ploscompbiol/article/file?id=10.1371/journal.pcbi.1011694&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "849665097cc166ea8b389e7f5d0f6c7ca3f19343", "s2fieldsofstudy": [ "Biology", "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Biology" ] }
202566698
pes2o/s2orc
v3-fos-license
Healthy eating promoting in a Brazilian sports-oriented school: a pilot study Background Adolescents, particularly athletes, have high exposure to ultra-processed foods, which could be harmful to their health and physical performance. School environments are capable of improving eating patterns. Our study is aimed at capturing changes in students’ food consumption three years after they enrolled at an experimental school, considered a model of health promotion in Rio de Janeiro city. We also aimed to depict the promising nature of the healthy eating promotion program implemented in the school and share the learnings from its implementation. Methods Our pilot study was a follow-up on the implementation of a school garden, experimental kitchen activities, and health promotion classes. We evaluated 83 adolescent athletes’ food consumption twice during the study: at its beginning (2013) and end (2016), by administering a food frequency questionnaire (FFQ) that inquired about the frequency of foods consumed in the past week. To evaluate how effectively the activities were established, integrated, and sustained in schools, the Garden Resources, Education, and Environment Nexus (GREEN) tool was used, and the school’s adherence to the school garden program was classified as high (scored 47 points out of 57). Results In 2013, 89 adolescents (mean ± SD 11.9 ± 0.4 years, 54% male) participated in the study, of which 83 continued until 2016 (14.8 ± 0.5 years, 55% male). In 2013, the mean frequency of raw salad and fruits consumption was 1.4 (CI [1.0–1.9]) and 4.3 (CI [3.8–4.9]) days per week, respectively. Three years later, the frequency of raw salad and fruits consumption was 2.2 (CI [1.6–2.7]) and 5.0 (4.5–5.5), respectively. Considering that five meals were offered at school (five days/week), it may be possible to assume that the program raised awareness on the importance of healthy eating. Conclusion Our results suggest that such integrated healthy eating promotion programs may improve adolescent athletes’ eating habits, by increasing the frequency of their consumption of unprocessed foods. This pilot study’s results inspired us to implement an expanded project at the municipal level. Since 2018, teachers who participated in this program are working with Rio de Janeiro’s Municipal Secretary of Education for Coordination of Curricular Projects. Some learnings from this pilot study on implementing the garden/experimental kitchen project in this school are being applied in 65 schools of the municipal network: joint activities must be fostered among students, teachers, and parents; healthy eating needs to be a respected value among adolescent athletes and become an example for parents and teachers. INTRODUCTION Sports-related food marketing promotes the consumption of energy-dense, nutrient-poor products as ultra-processed foods (Powell, Harris & Fox, 2013;Bragg et al., 2018). Ultraprocessed foods are industrial formulations with high energy density, rich in fat, simple carbohydrates, and nutrients directly related to a higher incidence of chronic diseases, such as obesity (Monteiro et al., 2016;Fardet, 2018). Natural or minimally processed foods, on the other hand, are sources of micronutrients that are beneficial for health (Louzada et al., 2015;Monteiro et al., 2016). Adolescents, athletes in particular, are vulnerable to external factors such as marketing strategies for ultra-processed foods (Bragg et al., 2018). 
Adolescent athletes consume a high quantity of low-nutritious quality foods, particularly sugar sweetened drinks, and a low quantity of vegetables and water, leading to an insufficient intake of micronutrients and fiber, and an elevated quantity of refined carbohydrates (Burrows et al., 2016;Sousa et al., 2008). To improve sportspersons' diet quality, effective strategies need to be identified. Schools are recognized as supportive environments to promote a healthy diet (Briggs, Safaii & Beall, 2003;Scherr et al., 2017;Hoque et al., 2016). School interventions should adopt an approach that integrates parents and the whole school with the curriculum, leading to hands-on experience (Storz & Heymann, 2017). School garden programs (Ozer, 2007) and experimental kitchens emerge as strategies to achieve these goals (Robinson-O'Brien, Story & Heim, 2009;Wang et al., 2010;Scherr et al., 2017). Despite the studies on school gardens, little is known about how gardens can be effectively integrated and maintained in a school (Burt, Koch & Contento, 2017). According to Ozer's definition (Ozer, 2007), a wellintegrated school garden program includes three main components of implementation: a garden site and gardening activities, formal curriculum (including ''hands-on'' education), and involvement of parents and the community. To identify how to put school gardening components and the successful school garden integration into operation, the GREEN tool was developed to test the operational school gardening components proposed by Ozer (Burt, Koch & Contento, 2017). School garden programs could be considered as multi-component interventions to promote healthy eating in the school environment. Instead of evaluating the isolated effects of each component, it is crucial to consider the integrated effects of the actions that make up the intervention (Scherr et al., 2017;Burt, Koch & Contento, 2017). Considering that adolescent athletes are vulnerable to the consumption of unhealthy foods, nutritional education programs, such as school garden programs, may impact food choice and eating habits (Christian et al., 2014;Wells, Myers & Henderson, 2014). However, in general, studies combining nutritional education actions with experimental gardens and kitchens are of short duration and are conducted without proper integration with the school curriculum (Utter, Denny & Dyson, 2016), which can reduce their effects. So, the aim of this study was to explore changes in students' food consumption three years after their enrollment at an experimental school considered as a model of health promotion in Rio de Janeiro city, and to present lessons learnt from the school's implementation of the promising healthy eating promotion program. Study design and participants This pilot study was developed in an experimental full-time sports-oriented public school located in the central region of Rio de Janeiro. The school's pedagogical model includes three axes: academic excellence, support for the student's life project and education values. Healthy eating promotion activities are inserted in the context of these three axes. The students were enrolled at school in February 2013 and the present study began in August 2013. All students in the 6th grade (n = 102) were invited to participate in the study. However, only 89 students participated in the data collection (baseline) study in 2013, out of which 83 were followed up until 2016, when they were in the 9th grade. 
Data collection always occurred in the second semester (from August to December). The students, unlike those of other Brazilian public schools, undertook 100 min of sports daily. The modalities offered were swimming, judo, badminton, athletics, soccer, volleyball, and table tennis. In addition to sports, they also attended physical education classes of 50 min per week, as in other Brazilian schools. The adolescents in this study were classified as adolescent athletes, and they were enrolled at a school with specific sports purposes. They participated in training, skill development, and were engaged in competition, according to the definition found in Sports Dietitians Australia Position Statement: Sports Nutrition for the Adolescent Athletes (Desbrow et al., 2014). Healthy eating promotion actions Once a week, they had classes on Health Promotion (a mandatory subject) and were exposed to school gardening and experimental kitchen activities. ''Health Promotion'' aimed at raising awareness of the importance of cultivating healthy habits. Two elective subjects (''Gardening'' and ''Flavor and Art'') were implemented in the school's curriculum with the objective of attracting students to participate in gardening and cooking activities. The school garden and experimental kitchen were built at the school for the promotion of scientific research with a grant from a Brazilian agency. The GREEN tool (Burt, Koch & Contento, 2017) was used to assess the degree of school garden integration, and also as quality control on the actions implemented in the school. This integrated program's actions, description and categorization according to Ozer (2007) are summarized in Table 1. • Food environment and policies (Table 1 excerpt): The construction of the physical space of the school garden and the experimental kitchen was done with the direct involvement of teachers and parents through their participation in the planning, conception of ideas, choice and acquisition of materials. A group of parents, who were beneficiaries of a municipal fellowship program, helped to carry out the maintenance of the garden. Each semester, this group was partially renewed. The activities of the elective subjects were organized by the respective teachers of Arts (''Flavor and Art''), Mathematics or Physical Education (''Gardening''). The classes were supported by a group of researchers from Rio de Janeiro's Nutrition Institute of the State University. Every week, the art teacher selected some healthy preparations, often made with vegetables and spices harvested from the garden. Many times, the culinary preparations were chosen with the intention of diminishing rejection of some vegetables, such as eggplant (Solanum melongena L.) or bitter tomato (Solanum aethiopicum L.), both collected in the school garden. Students were also encouraged to suggest culinary preparations from their own homes. Food consumption Food consumption was evaluated twice during the study: at the beginning (2013) and at the end (2016), by administering an FFQ that inquired about the frequency of foods consumed in the past week (from ''never'' to ''everyday'' in the previous seven days) of 12 food items (beans, cooked vegetables, raw salad, fruits, milk - that were natural or minimally processed foods; French fries, fried snacks, processed meat, crackers, cookies, candies, soft drinks - that were processed and ultra-processed foods). This questionnaire had been previously validated (Tavares et al., 2014a).
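As a small illustration of how the weekly-frequency FFQ responses described above can be summarized (a generic sketch with made-up responses, not the study's data or analysis code), the snippet below computes, for a single food item, the mean days-per-week of consumption and a normal-approximation 95% confidence interval of the kind reported for raw salad and fruits in the Results.

```python
import numpy as np

def summarize_weekly_frequency(days_per_week):
    """Mean and normal-approximation 95% CI for FFQ responses coded 0-7 days/week."""
    x = np.asarray(days_per_week, dtype=float)
    mean = x.mean()
    se = x.std(ddof=1) / np.sqrt(len(x))
    return mean, (mean - 1.96 * se, mean + 1.96 * se)

# Hypothetical responses for one item ("raw salad"), coded as days per week (0-7).
raw_salad_2013 = np.array([0, 1, 2, 0, 3, 1, 0, 2, 1, 4, 0, 2, 1, 0, 3])
mean, (lo, hi) = summarize_weekly_frequency(raw_salad_2013)
print(f"raw salad, baseline: {mean:.1f} days/week (95% CI {lo:.1f}-{hi:.1f})")
```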
Data analysis Characteristics of students were described as frequency or mean and standard deviation (SD). Results from the GREEN tool were described as absolute values. Food consumption was determined by considering the frequency of consumption per week (prevalence, mean and 95% confidence interval). Ethical aspects This study was approved by the Ethics Committee of the Pedro Ernesto University Hospital (CEP/HUPE 1.020.909) and the Municipal Secretariat for Education (07/005.242/14). Parents and the student participants signed the informed consent form. The GREEN tool classified the sports-oriented school as a well-integrated school garden since out of a total of 57 points, it scored 47 (resources and support score = 12/15; physical garden score = 13/15; student experience score = 16/18; and school community score = 6/9 points). Furthermore, the consumption of some ultra-processed foods (French fries, fried snacks, candies, and soft drinks) did not seem to have increased during the three years of adolescence. On the other hand, when ultra-processed foods, such as crackers and cookies were served at school, the frequency of consumption seemed to increase. DISCUSSION This study found positive results in adolescent athletes' frequency of consumption of natural or minimally processed foods three years after they were enrolled in an experimental school in which the multi-component educational program was implemented. This result is important since ultra-processed foods have been pointed out as being unhealthy, rich in energy and poor in protective micronutrients, antioxidants and fiber (Monteiro et al., 2016;Fardet, 2018) and adolescent athletes need an adequate dietary intake due to growth, health maintenance, and optimal athletic performance (Croll et al., 2006). Since studies on school gardens and experimental kitchens use different methodologies and tools to evaluate their results, making direct comparisons is difficult (Robinson-O'Brien, Story & Heim, 2009;Davis, Spaniol & Somerset, 2015). Most authors rate as successful the increase in the consumption of fruits and vegetables by students who maintain direct contact with the cultivation, harvest, and preparation of the produce in the vegetable garden (Burt, Koch & Contento, 2017), as well as with the culinary preparations and tasting (Lakkakula et al., 2010;Chen et al., 2014). Some factors of the present program's modus operandi seem to have been decisive for its acceptance and approval by students, teachers, and parents throughout the years in which it was executed. The involvement and communication among us and all those involved was the focus of this pilot study. All implemented actions, such as the choice of crop diversity or how the kitchen should be designed, were based on the suggestions given by parents, students, and teachers. In general, studies combining nutritional education activities, experimental gardens and kitchens are of short duration and are conducted with elementary students (Davis, Spaniol & Somerset, 2015), without proper integration with the school curriculum (Somerset et al., 2005). One of the mechanisms deployed for students' adherence to the present program was the creation of elective subjects involving gardening and culinary activities. In addition, activities involving other disciplines such as Math, Arts and Biology were often carried out in an integrated way with the activities of this healthy eating program. 
To our knowledge, no study or long-term interventions have been performed with adolescent athletes enrolled in a sports-oriented school. Studies conducted in different countries with the introduction of an integrated garden in the school curriculum strengthen the actions of nutritional education (Morris & Zidenberg-Cherr, 2002;McAleese & Rankin, 2007) and resulted in an increased consumption of vegetables (McAleese & Rankin, 2007). The actions that focus on practical tasting or cooking activities, such as the use of experimental kitchens, have also proved to be effective in the preference and consumption of natural food by students in the United States (Lakkakula et al., 2010;Chen et al., 2014). By promoting interest in food preparation, this type of intervention stimulates students to make healthier food choices both at school, and at home with the family (Hyland et al., 2006;Lakkakula et al., 2010), and the preparation of vegetables and fruit juices starts getting more frequent (Wang et al., 2010;Chen et al., 2014). To our knowledge, none of these studies included culinary activities for adolescent students. In the present study, changes in the frequency of consumption of natural or minimally processed foods corroborate the previous findings and confirm the relevance of the school multi-component actions on the overall quality of student nutrition. However, despite the changes in the frequency of consumption of natural foods, the frequency of consumption of soft drinks, French fries, fried snacks, and candies did not vary appreciably during the study. Nonetheless, the percentage of regular consumption of processed and ultra-processed foods was also high among adolescents in recent national surveys (Tavares et al., 2014b;Borges et al., 2018). Aerenhouts et al. (2008) found that consumption of soft drinks contributed considerably to higher energy intake in adolescents practicing field training. Soft drink consumption might negatively affect physical and sprint performance capacity (Aerenhouts et al., 2008). Male adolescents who consumed soft-drinks tended to have an unbalanced high-fat and low-carbohydrate diet. Female adolescents who consumed soft-drinks had a higher body-fat percentage than those who did not consume (Sousa et al., 2008). Despite knowing the harmful health effects of the consumption of soft drinks, food companies' use of sports to promote unhealthy consumption of food/beverage by young athletes is associated with healthy products (Bragg et al., 2018). This fact intensifies the need for implementation of public health policies, such as school garden programs. Furthermore, it is known that the consumption of soft drinks among adolescents is greater when their parents are habituated to consuming it at home (Yee, Lwin & Ho, 2017). Therefore, in our study, parents' participation in programs to promote healthy eating should be expanded beyond their participation in the care and organization of school gardens and the semiannual meetings. Creating a context of respect with multiple adults, in which adults know students' core values and are empathic about underlying causes of behavior was one of the lessons learned from this pilot study and has been considered as an important step to influence adolescent behavior (Yeager, Dahl & Dweck, 2018). In contrast, when ultra-processed foods, such as crackers and cookies were served at school, the frequency of consumption seemed to increase. 
This result shows that, besides the actions carried out by this program, the menu offered by the municipal school feeding network should be based on ''real'' food because students acquire habits that are formed at school. It is especially important considering that 25% to 31% of the students were beneficiaries of the Bolsa Família program, which is a Brazilian cash transfer program. Some limitations were observed in the present study. Food consumption data was obtained by administering a questionnaire that only elicited details on the frequency of consumption of food markers of a healthy diet (based on natural or minimally processed foods) or unhealthy diet (ultra-processed foods). This questionnaire has been used in Brazil to monitor the health of children and adolescents by the Brazilian Ministry of Health (2009, 2012 and 2015). Furthermore, it is a simplified FFQ focusing on food markers related to risk and prevention of chronic diseases, but not covering the diversity of the diet. Additionally, considering that this was an experimental sports-oriented school, it was not possible to separate the effects of the healthy eating promotion program from the other actions that this type of school promotes. The lack of a comparison group was the major limitation of the study. Therefore, it will be important to carry out a controlled study involving full-time sports-oriented schools where this program has not yet been implemented. Nevertheless, the results of this pilot study inspired the implementation of an expanded project at the municipal level. Since 2018, teachers who participated in this program are working with Rio de Janeiro's Municipal Secretary of Education for the Coordination of Curricular Projects. Some lessons from this pilot study on implementation of this school's garden/experimental kitchen project are being applied in 65 schools of the municipal network: collaborative actions by students, teachers and parents; making healthy eating a respected value among adolescent athletes and setting an example for parents and teachers. To sum up the strengths of this pilot study: it helped us to understand how to achieve improvements in dietary behaviors and sustain the garden-based programs in schools; our school multi-component program was formulated considering the school garden domains proposed by Ozer (2007); integration between the school and the school garden program was tested using the GREEN tool (Burt, Koch & Contento, 2017); all participants were homogeneous regarding sports training in specific modalities offered by the school. Additionally, our study is in agreement with the Academy of Nutrition and Dietetics, School Nutrition Association, and Society for Nutrition Education and Behavior's position that recommends specific strategies for healthy food (Academy of Nutrition and Dietetics, 2018), as well as the International Olympic Committee consensus statement on youth athletic development that emphasizes dietary education for young athletes leading to optimal eating patterns to support health, normal growth and sport participation demands, with emphasis on a balanced diet (Bergeron et al., 2015). CONCLUSIONS In conclusion, an adequate school environment, made up of facilities that encourage health promotion actions, structured subjects, trained teachers, sports orientation, and the development of an integrated curriculum, may help adolescent athletes to improve their eating habits. 
The contact of the adolescent athletes with the school garden and the experimental kitchen, as well as their involvement with local activities, may contribute to increasing their consumption of healthy foods (natural or minimally processed foods) and decrease their consumption of unhealthy foods (processed and ultra-processed foods). This is of extreme relevance at this stage of life, especially considering the nutritional demands generated by sports. Finally, improving the dietary pattern and quality of food consumption of these athletes will help them to promote health by optimizing performance and providing positive benefits beyond the adolescence phase.
2019-09-17T01:04:16.085Z
2019-08-28T00:00:00.000
{ "year": 2019, "sha1": "58a1fbe1dca7bd977f1ae080f18df5adda2b0dd8", "oa_license": "CCBY", "oa_url": "https://doi.org/10.7717/peerj.7601", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d64b5f12f8d8d30f5f115afb26c25340540b65ba", "s2fieldsofstudy": [ "Education", "Medicine" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
213451661
pes2o/s2orc
v3-fos-license
Parametric study of 3D printed microneedle (MN) holders for interstitial fluid (ISF) extraction The need for novel, minimally invasive diagnostic, prognostic, and therapeutic biomedical devices has garnered increased interest in recent years. Microneedle (MN) technology has stood out as a promising new method for drug delivery, as well as extraction of interstitial fluid (ISF). ISF comprises a large portion of the extracellular fluid in living organisms yet remains inadequately characterized for clinical applications. Current MN research has focused on the fabrication of needles with different materials like silicone, carbon, and metals. However, little effort has been put forth into improving MN holders and patches that can be used with low cost MNs, which could effectively change how MNs are attached to the human body. Here, we describe different 3D-printed MN holders, printed using an MJP Pro 2500 3D printer, and compare the ISF extraction efficiencies in CD Hairless rats. We varied design parameters that may affect the skin-holder interface, such as throat thickness, tip curvature, and throat diameter. MN arrays, with insertion depths of 1500 μm, had extraction efficiencies of 0.44 ± 0.35, 0.85 ± 0.64, 0.32 ± 0.21, or 0.44 ± 0.46 µl/min when designed with flat, concave, convex, or bevel profile geometries, respectively. Our results suggest ISF extraction is influenced by MN holder design parameters and that a concave tip design is optimal for extracting ISF from animals. The future direction of this research aims to enable a paradigm in MN design that maximizes its efficiency and engineering performance in terms of volume, pressure, and wearability, thereby automatizing usage and reducing patient intervention to ultimately benefit remote telemedicine. Introduction Advancement in modern drug delivery methods have unlocked numerous opportunities in the treatment and diagnosis of diseases (Chandel et al. 2019;Jain 2014;Li et al. 2017;Lim et al. 2018;Pandey et al. 2019). Ingestion, inhalation, absorption, and intravenous injection are the most common methods of delivering drugs (Jain 2014). Transdermal drug delivery methods have also been used when non-invasive methods are preferred (Akhtar et al. 2020;Gonnelli and McAllister 2011). However, non-needle based methods may not be suitable for larger molecular weight compounds like vaccines (Gonnelli and McAllister 2011). On the other hand, typical injections with hypodermic needles can cause pain and discomfort for patients with possible damage to veins and bruising. In this context, microneedles (MN) have gained significant research interest for innovative drug delivery and monitoring methods (Akhtar et al. 2020;Kiang et al. 2017;Rzhevskiy et al. 2018). MN technology has also recently been utilized for interstitial fluid (ISF) extraction and analysis. ISF comprises a large portion of the extracellular volume in living organisms (Cengiz and Tamborlane 2009). However, until recently, there has been a lack of adequate technology for the extraction of sufficient volumes of ISF for downstream analysis. Thus, ISF has been inadequately characterized for biomedical or clinical applications. We recently devised MN arrays (MAs, which includes microneedles extending from a 3D-printed holder) that are able to extract upwards of 20 lL and 60 lL of ISF in 1 h from both humans and rats, respectively Taylor et al. 2018;Tran et al. 2018). 
These volumes of ISF are adequate for downstream analysis such as transcriptomics , proteomics , and metabolomics Taylor et al. 2018). MAs are designed to penetrate up to the intracutaneous layer to avoid blood extraction (Gartstein et al. 2002;Kiang et al. 2017) and pain. With this technique, drug delivery can be achieved with minimal pain and minimal damage to skin and veins. This technique has also been experimentally verified to assist in extraction of ISF from the dermis layer Taylor et al. 2018;Tran et al. 2018;Wang et al. 2005). MNs have also been used for drug and vaccine delivery (He et al. 2019;Kim et al. 2012). Since MNs are minimally-invasive, compared to conventional injecting devices, they are generally easy to use and less painful for human use (Gill et al. 2008;Haq et al. 2009;Wang et al. 2005). This demonstrates the great market potential for MNs as prognostic, diagnostic, and drug delivery methods (Lee et al. 2020). Extensive research has been conducted into the structural design, materials selection, and fabrication of MNs/ MAs (Kim et al. 2012;Chaudhri et al. 2010). Research has also shown that the tip design of a MN may affect the efficiency of fluid extraction (Ma and Wu 2017). ISF flow and skin biomechanics have also been studied in relation to various MN extraction techniques such as hydrogel and hollow microneedles (Samant and Prausnitz 2018). Other parameters to consider for the MN design are size, diameter, insertion depths, and types of surface coatings used on the MN (Kim et al. 2012). However, a limited amount of literature can be found for the design of the MN holder itself, which houses the MN and interfaces with the skin region in proximity to the extraction site. Table 1 shows the different MN holder design types, available to date, used to extract ISF or to apply a drug. It is important to note that fluid extraction from the deep dermis layer not only depends on the MNs but also on the MN holders used to house the MNs and to apply them to the tissue. One of the important observations when using different types of MAs is that extraction may also be influenced by the pressure exerted by the holder during the fluid extraction process Samant and Prausnitz 2018). It would be of interest to users to enable extraction of ISF with a reduced amount of pressure. One advantage of designing an MA for ISF extraction without pressure includes the ease of patient use, so that the individual patient can apply the MA without a prescribed pressure input. Another benefit is that the system becomes automatic. Previous research has shown that 3D printed MN holders are able to aid in the collection of the ISF using capillary tubes without applying any mechanical suction to the skin . Results demonstrated that volume can be obtained without pressure under certain design types. However, to date, there is little knowledge about the design of MNs/MAs and the volume of the ISF extracted. Here, we present the design, fabrication, and testing of different models of MN holders in order to improve the maximum amount of ISF extracted. More specifically, we conducted a parametric study of 3D printed MN holders for ISF extraction. Such holders can be used in conjunction with several MNs, thus forming the MA. Overall, we developed 8 different 3D printed MAs with design parameters that were altered to achieve optimum ISF extraction rates. The following sections of this paper outline the various methods and materials. 
Fabrication design, methods, and testing procedures This section covers the conceptual development of the different topologies proposed. The design optimizations explore various interfaces that will enhance or augment the performance of the MN, under assumptions of similar pressure with limited supervision. The fabrication of such devices is also described in order to emphasize the quick manufacturing aspect of the proposed prototype. Finally, the authors outline the basic testing protocols informing the results, and their relevance towards the validation of the new MN. MN holder designs We primarily focused on the throat/tip region of the MN holders. Four different designs were developed, each having a unique interface with the skin. Such interfaces differed due to the geometric profile of the throat/tip and the contact surface area of the tip. Figure 1 illustrates two of the MN holder types: flat and concave when in contact with skin. Given that each holder is applied to the skin in the same manner, the contact surface of the profile geometry determines the pressure applied to the skin. The flat tip has more contact surface hence, less net pressure compared to a concave tip. The holders were designed in such a manner as to allow 1000 lm or 1500 lm of the MN to be inserted into the skin. However, the change in skin surface geometry due to applied pressure may vary this depth, which may cause variations in the ISF extraction volume. The MN holders used are the result of rapid prototyping without complex manufacturing processes. Hence, these designs are meant to be portable and recreated without much technical expertise. The MN holders utilize commercially available MNs, such as BD Ultra-Fine Pen Needles that are pre-sterilized and can be installed in the 3D-printed MN holders without requirement of direct supervision from medical practitioners. We believe the simple design for producing these holders would allow for the democratization for off-site diagnosis using MAs. 3D printing of MN holders SketchUp computer aided design (CAD) software was used for designing the MN holders. The software was used to specify dimensions up to sub millimeter precision and designs were exported to an object file (.stl) that could be read in the 3D printer's user interface, called 3D Sprint (3D Systems, Inc., Rock Hill, SC, USA). The resolution of printer varies depending on the orientation of the specimen placed on the build plate. The print resolution is highest in the z direction which is 0.02 mm. MN holders were 3D printed using a commercially available ProJet MJP 2500 printer (3D Systems, Inc.) using a VisiJet Ò M2R-GRY build material and a VisiJet Ò M2 SUP support material, both from 3D Systems. The 3D printer uses support material to provide adhesion to the main print material and makes the printing mechanism more agile, hence able to print more complex shapes. After a layer of material is deposited on the printing bed, it is exposed to a flash of ultraviolet rays to cure the material. After printing, the MN holders were placed at -20°C for 5 min to release the printed holders from the base plate. MN holders were then placed in a steam bath for 15 min to remove the wax support material and subsequently placed in a hot oil bath for another 15 min to remove all traces of the wax support. MN holders were then cleaned using hot tap water and soap and left at room temperature to dry. Figure 2 details the design, printing, and post-processing steps. 
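To illustrate the earlier point that, for the same applied force, the tip's contact area sets the nominal pressure at the skin-holder interface, the following is a rough back-of-the-envelope sketch. The force value and the annular tip dimensions are hypothetical placeholders (the actual holder dimensions correspond to Table 2 and are not reproduced here); the only point is that a smaller contact area, as for a concave rim, yields a higher nominal pressure P = F/A.

```python
import math

def annulus_area_mm2(outer_d_mm, inner_d_mm):
    """Contact area of an annular tip (ring between throat and outer wall)."""
    return math.pi / 4.0 * (outer_d_mm**2 - inner_d_mm**2)

applied_force_N = 5.0  # hypothetical application force

# Hypothetical tip geometries (NOT the Table 2 values):
# a flat tip contacts the full annulus; a concave tip contacts only a thin outer rim.
flat_area = annulus_area_mm2(outer_d_mm=8.0, inner_d_mm=4.0)      # mm^2
concave_area = annulus_area_mm2(outer_d_mm=8.0, inner_d_mm=7.0)   # mm^2

for name, area_mm2 in [("flat", flat_area), ("concave", concave_area)]:
    pressure_kpa = applied_force_N / (area_mm2 * 1e-6) / 1e3  # N/m^2 -> kPa
    print(f"{name:8s} tip: contact area = {area_mm2:5.1f} mm^2, "
          f"nominal pressure = {pressure_kpa:7.1f} kPa")
```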
MA assembly and animal testing The animal care and use program of the University Of New Mexico (UNM) is accredited by AAALAC International and the UNM's animal care and use committee approved all experiments. CD hairless, Crl:CD-Prss8hr, rats (Charles River Laboratories, Wilmington, MA) were used for the studies. ISF was extracted using our previously published methods . Briefly, animals were anesthetized using 2% isoflurane, and MAs were applied to extract ISF. Ultra-fine Nano PEN needles (BD, Franklin Lakes, NJ) were placed into the 3D-printed MN holders to form the MA. Each needle was attached to a 1-5 ll calibrated pipet capillary tube (Drummond Scientific Co., Broomall, PA). The array assembly was then pressed into the dermal tissue of the rats and held in place for exactly 2.0 min per extraction. The volume of ISF extracted in each needle and the total ISF extracted per MA was recorded for each extraction. All animals had a terminal cardiac puncture performed at the conclusion of the experiments. MA design and printing We investigated the effect of different throat/tip geometry profiles for the holders. Figure 3 shows the four MA (a) (b) Throat/tip t d Fig. 1 Flat (a) and Concave (b) tips of MN holders interfacing with human skin. Parameters include throat thickness t and throat diameter d designs that were tested: flat, convex, concave, and beveled. The flat prototype (Fig. 3a) is considered the base design, and all other designs include a modification of the throat/tip of this base design. Table 2 shows the tip parameters that were modified to create different versions of the design. Additionally, each MA prototype was designed and tested with both a 1000 lm and 1500 lm needle insertion length. MA prototype testing in CD hairless rats We tested all 8 MA prototypes in CD Hairless rats and found that only the concave (CVE 1500) prototype had significantly better extraction rates, compared to the flat (FLT 1500) base prototype (p = 0.03). Extraction rates for each of the prototypes are detailed in Table 3 and Fig. 4. We did not measure any significant differences in extraction rates for the holders with 1000 lm versus 1500 lm needle lengths (Fig. 5). However, the different tip geometries were found to have varying effects on ISF extraction. When a sharp curvature like concave (Fig. 3c) is introduced in the design, it can accelerate fluid extraction measured in extraction volume per unit time (ll/min). This could be due to localized pressure differences around the needle. Alternately, the faster extraction rates measured in the concave models could be due to a compression effect, Technologies (2020Technologies ( ) 26:2067Technologies ( -2073Technologies ( 2071 where the sides of the concave holder tip essentially act to push the skin down at the holder interface while lifting the skin in between the throat of the MN holder and, thus, slightly increasing needle penetration depth and/or localized pressure. Moreover, the concave (CVE) model, as shown in Figs. 2, 3a, pushes the skin inwards toward the needle, whereas the convex (CVX) model pushes the skin away from the needle. This could also account for both the increased extraction rate measured using the concave prototypes, as well as the decreased percent of total needles that also extracted blood in the concave models (Table 3). Conclusions Four different types of MAs were designed and 3D-printed, each with both 1000 lm and 1500 lm needle insertion depths. 
The four MN holder designs (flat, concave, convex, and beveled) all had different extraction rates, however, only the concave design had significantly increased ISF extraction rates, compared with the flat base model (p = 0.03). We did not measure any significant differences in extraction rates using 1000 lm versus 1500 lm needle insertion depths. Future studies measuring the local pressure differences between MA prototypes could further aid in the development of MA assemblies and patches for the successful extraction and analysis of ISF for biomedical and clinical applications. These results suggest that the specific geometry of the microneedle holder throat may be a critical factor in further optimizing interstitial fluid collection. Compliance with ethical standards Conflict of interest The authors declare no conflict of interest. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons. org/licenses/by/4.0/.
2020-02-06T09:09:22.419Z
2020-02-01T00:00:00.000
{ "year": 2020, "sha1": "8bd9899cdc8fd60dc1de0d439f50397e8b91c19f", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00542-020-04758-0.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "2c98872b51a05db4becc0184df5978aa13bb3aca", "s2fieldsofstudy": [ "Engineering", "Medicine", "Materials Science" ], "extfieldsofstudy": [ "Medicine", "Materials Science" ] }
15045618
pes2o/s2orc
v3-fos-license
Multi-Shot Person Re-Identification via Relational Stein Divergence Person re-identification is particularly challenging due to significant appearance changes across separate camera views. In order to re-identify people, a representative human signature should effectively handle differences in illumination, pose and camera parameters. While general appearance-based methods are modelled in Euclidean spaces, it has been argued that some applications in image and video analysis are better modelled via non-Euclidean manifold geometry. To this end, recent approaches represent images as covariance matrices, and interpret such matrices as points on Riemannian manifolds. As direct classification on such manifolds can be difficult, in this paper we propose to represent each manifold point as a vector of similarities to class representers, via a recently introduced form of Bregman matrix divergence known as the Stein divergence. This is followed by using a discriminative mapping of similarity vectors for final classification. The use of similarity vectors is in contrast to the traditional approach of embedding manifolds into tangent spaces, which can suffer from representing the manifold structure inaccurately. Comparative evaluations on benchmark ETHZ and iLIDS datasets for the person re-identification task show that the proposed approach obtains better performance than recent techniques such as Histogram Plus Epitome, Partial Least Squares, and Symmetry-Driven Accumulation of Local Features. INTRODUCTION Person re-identification is the process of matching persons across non-overlapping camera views in diverse locations. Within the context of surveillance, re-identification needs to function with a large set of candidates and be robust to pose changes, occlusions of body parts, low resolution and illumination variations. The issues can be compounded, making a person difficult to recognise even by human observers (see Fig. 1 for examples). Compared to classical biometric cues (eg. face, gait) which may not be reliable due to non-frontality, low resolution and/or low frame-rate, person re-identification approaches typically use the entire body. While appearance based person re-identification methods are generally modelled in Euclidean spaces [8,11,24], it has been argued that some applications in image and video analysis are better modelled on non-Euclidean manifold geometry [28]. To this end, recent approaches represent images as covariance matrices [3], and interpret such matrices as points on Riemannian manifolds [12,28]. A popular way of analysing manifolds is to embed them into tangent spaces, which are Euclidean spaces. This process which can be interpreted as warping the feature space [27]. Embedding manifolds is not without problems, as pairwise distances between arbitrary points on a tangent space may not represent the structure of the manifold accurately [12,13]. In this paper we present a multi-shot appearance based person re-identification method on Riemannian manifolds, where embedding the manifolds into tangent spaces is not required. We adapt a recently proposed technique for analysing Riemannian manifolds, where points on the manifolds are represented through their similarity vectors [2]. The similarity vectors contain similarities to class representers. We obtain each similarity with the aid of a recently introduced form of Bregman matrix divergence known as the Stein divergence [13,25]. 
The classification task on manifolds is hence converted into a task in the space of similarity vectors, which can be tackled using learning methods devised for Euclidean spaces, such as Linear Discriminant Analysis [5]. Unlike previous person re-identification methods, the proposed method does not require separate settings for new datasets. We continue the paper as follows. In Section 2 several recent methods for person re-identification are briefly described. The proposed approach is detailed in Section 3. A comparative performance evaluation on two public datasets is given in Section 4. The main findings are summarised in Section 5. PREVIOUS WORK Given an image of an individual to be re-identified, the task of person re-identification can be categorised into two main classes. (i) Single-vs-Single (SvS), where there is only one image of each person in the gallery and one in the probe; this can be seen as a one-to-one comparison. (ii) Multiple-vs-Single (MvS), or multi-shot, where there are multiple images of each person available in the gallery and one image in the probe. Below we summarise several person re-identification methods: Partial Least Squares (PLS) [24], the Context-based method [31], Histogram Plus Epitome (HPE) [4], and Symmetry-Driven Accumulation of Local Features (SDALF) [8]. The PLS method [24] first decomposes a given image into overlapping blocks, and extracts a rich set of features from each block. Three types of features are considered: textures, edges, and colours. The dimensionality of the feature space is then reduced by employing Partial Least Squares regression (PLSR) [30], which models relations between sets of observed variables by means of latent variables. To learn a PLSR discriminatory model for each person, a one-against-all scheme is used [9]. Nearest neighbour is then employed for classification. The Context-based method [31] enriches the description of a person by contextual visual knowledge from surrounding people. The method represents a group by considering two descriptors: (a) 'center rectangular ring ratio-occurrence' descriptor, which describes the information ratio of visual words between and within various rectangular ring regions, and (b) 'block based ratio-occurrence' descriptor, which describes local spatial information between visual words that could be stable. For group image representation only features extracted from foreground pixels are used to construct visual words. HPE [4] considers multiple instances of each person to create a person signature. The structural element (STEL) generative model approach [16] is employed for foreground detection. The combination of a global (person level) HSV histogram and epitome regions of foreground pixels is then calculated, where an image epitome [15] is computed by collapsing the given image into a small collage of overlapped patches. The patches contain the essence of textural, shape and appearance properties of the image. Both the generic epitome (epitome mean) and local epitome (probability that a patch is in an epitome) are computed. SDALF [8] considers multiple instances of each person. Foreground features are used to model three complementary aspects of human appearance extracted from various body parts. First, for each pedestrian image, axes of asymmetry and symmetry are found. Then, complementary aspects of the person appearance are detected on each part, and their features are extracted. 
To select salient parts of a given pedestrian image, the features are then weighted by exploiting perceptual principles of symmetry and asymmetry. The above methods assume that classical Euclidean geometry is capable of providing meaningful solutions (distances and statistics) for modelling and analysing images and videos, which might not always be correct [27]. Furthermore, they require separate parameter tuning for each dataset. PROPOSED APPROACH Our goal is to automatically re-identify a given person among a large set of candidates in diverse locations over various non-overlapping camera views. The proposed method is comprised of three main stages: (i) feature extraction and generation of covariance descriptors, (ii) measurement of similarities on Riemannian manifolds via the Stein divergence, and (iii) creation of similarity vectors and discriminative mapping for final classification. Each of the stages is elucidated in more detail in the following subsections. Feature Extraction and Covariance Descriptors As per [4,8], to reduce the effect of varying background, foreground pixels are extracted from each given image of a person via the STEL generative model approach [16]. We note that it is also possible to use more advanced approaches, such as [21]. Based on preliminary experiments, for each foreground pixel located at (x, y), a feature vector is calculated whose components include the pixel location together with gradient magnitudes and orientations for each channel in RGB colour space. We note that we have selected this relatively straightforward set of features as a starting point, and that it is certainly possible to use other features. However, a thorough evaluation of possible features is beyond the scope of this paper. Given a set F = {f_i}, i = 1, ..., N, of extracted features, with its mean represented by µ, each image is represented as a covariance matrix: C = 1/(N−1) Σ_{i=1}^{N} (f_i − µ)(f_i − µ)^T. Representing an image with a covariance matrix has several advantages [3]: (i) it is a low-dimensional (compact) representation that is independent of image size, (ii) the impact of noisy samples is reduced via the averaging during covariance computation, and (iii) it is a straightforward method of fusing correlated features. Riemannian Manifolds and Stein Divergence Covariance matrices belong to the group of symmetric positive definite (SPD) matrices, which can be interpreted as points on Riemannian manifolds. As such, the underlying distance and similarity functions might not be accurately defined in Euclidean spaces [23]. Efficiently handling Riemannian manifolds is non-trivial, due largely to two main challenges [26]: (i) as manifold curvature needs to be taken into account, defining divergence or distance functions on SPD matrices is not straightforward; (ii) high computational requirements, even for basic operations such as distances. For example, the Riemannian structure induced by considering the Affine Invariant Riemannian Metric (AIRM) has been shown somewhat useful for analysing SPD matrices [14,20]. For A, B ∈ S^d_++, where S^d_++ is the space of positive definite matrices of size d × d, AIRM is defined as d(A, B) = ||log(A^{−1/2} B A^{−1/2})||_F, where log(·) is the principal matrix logarithm [25]. However, AIRM is computationally demanding as it essentially needs eigendecomposition of A and B. Furthermore, the resulting structure has negative curvature which prevents the use of conventional learning algorithms for classification purposes. 
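For readers wanting to experiment with the descriptor and metric just described, the Python sketch below builds a region covariance descriptor from per-pixel feature vectors and compares two descriptors with the AIRM. It is an illustrative sketch, not the authors' code; the feature dimensionality and random inputs are assumptions for demonstration only.

```python
# Illustrative sketch: region covariance descriptor + AIRM distance.
import numpy as np
from scipy.linalg import sqrtm, logm, inv

def covariance_descriptor(features):
    """features: (N, d) array of per-pixel feature vectors f_i."""
    mu = features.mean(axis=0)
    centred = features - mu
    # Sample covariance: C = 1/(N-1) * sum_i (f_i - mu)(f_i - mu)^T
    return centred.T @ centred / (features.shape[0] - 1)

def airm_distance(A, B):
    """d(A, B) = ||log(A^{-1/2} B A^{-1/2})||_F for SPD matrices A, B."""
    A_inv_sqrt = inv(sqrtm(A))
    M = A_inv_sqrt @ B @ A_inv_sqrt
    return np.linalg.norm(logm(M), 'fro')

# Toy usage: random features stand in for extracted foreground-pixel features
rng = np.random.default_rng(0)
C1 = covariance_descriptor(rng.normal(size=(500, 7)))
C2 = covariance_descriptor(rng.normal(size=(500, 7)))
print(airm_distance(C1, C2))
```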
To simplify the handling of Riemannian manifolds, they are often first embedded into higher dimensional Euclidean spaces, such as tangent spaces [18,19,22,29]. However, only distances between points and the tangent pole are equal to true geodesic distances, meaning that distances between arbitrary points on tangent spaces may not represent the manifold accurately. As an alternative to measuring distances on tangent spaces, in this work we use the recently introduced Stein divergence, which is a version of the Bregman matrix divergence for SPD matrices [25]. To measure dissimilarity between two SPD matrices A and B, the Bregman divergence is defined as [17]: D_φ(A, B) = φ(A) − φ(B) − ⟨∇φ(B), A − B⟩, where ⟨A, B⟩ = tr(A^T B) and φ : S^d_++ → R is a real-valued, strictly convex and differentiable function. The divergence in (4) is asymmetric, which is often undesirable. The Jensen-Shannon symmetrisation of the Bregman divergence is defined as [17]: J_φ(A, B) = (1/2) D_φ(A, (A + B)/2) + (1/2) D_φ(B, (A + B)/2). By selecting φ in (5) to be − log (det (A)), which is the barrier function of the semi-definite cone [25], we obtain the symmetric Stein divergence, also known as the Jensen Bregman Log-Det divergence [6]: S(A, B) = log det((A + B)/2) − (1/2) log det(AB). The symmetric Stein divergence is invariant under congruence transformations and inversion [6]. It is computationally less expensive than AIRM, and is related to AIRM in several aspects which establish a bound between the divergence and AIRM [6]. Similarity Vectors and Discriminative Mapping For each query point (an SPD matrix) to be classified, a similarity to each training class is obtained, forming a similarity vector. We obtain each similarity with the aid of the Stein divergence described in the preceding section. The classification task on manifolds is hence converted into a task in the space of similarity vectors, which can be tackled using learning methods devised for Euclidean spaces. Given a training set of points on a Riemannian manifold, X = {(X_1, y_1), (X_2, y_2), ..., (X_n, y_n)}, where y_i ∈ {1, 2, ..., m} is a class label, and m is the number of classes, we define the similarity between matrix X_i and class l via Eqn. (7), where δ(·) is the discrete Dirac function (used to select the training matrices belonging to class l) and n_l is the number of training matrices in class l. Using Eqn. (7), the similarity between X_i and all classes is obtained, where i ∈ {1, 2, ..., n}. Each matrix X_i is hence represented by a similarity vector: s_i = [s_i,1, s_i,2, ..., s_i,m]^T. Classification on Riemannian manifolds can now be reinterpreted as a learning task in R^m. Given the similarity vectors of training data, S = {(s_1, y_1), (s_2, y_2), ..., (s_n, y_n)}, we seek a way to label a query matrix X_q, represented by a similarity vector s_q = [s_q,1, s_q,2, ..., s_q,m]^T. As a starting point, we have chosen linear discriminant analysis [5], where we find a mapping W* that minimises the intra-class distances while simultaneously maximising inter-class distances: W* = argmax_W |W^T S_B W| / |W^T S_W W|, where S_B and S_W are the between-class and within-class scatter matrices [5]. The query similarity vector s_q can then be mapped into the new space via x_q = (W*)^T s_q. We can now use a straightforward nearest neighbour classifier [5] to assign a class label to x_q. We shall refer to this approach as Relational Divergence Classification (RDC). EXPERIMENTS AND DISCUSSION In this section we evaluate the proposed RDC approach by providing comparisons against several methods on two person re-identification datasets: iLIDS [31] and ETHZ [7,24]. The VIPeR dataset [10] was not used as it only has one image from each person in the gallery, and is hence not suitable for testing MvS approaches. 
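A minimal Python sketch of the pipeline described in this section is given below: the Stein divergence between SPD matrices, similarity vectors to class representers, and a discriminative LDA mapping. It is not the paper's implementation; the use of exp(−divergence) as the per-class affinity and all helper names are assumptions made for illustration.

```python
# Illustrative sketch of the RDC idea, not the authors' implementation.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def stein_divergence(A, B):
    """S(A, B) = log det((A+B)/2) - 0.5 * log det(A B)."""
    _, ld_mid = np.linalg.slogdet((A + B) / 2.0)
    _, ld_a = np.linalg.slogdet(A)
    _, ld_b = np.linalg.slogdet(B)
    return ld_mid - 0.5 * (ld_a + ld_b)

def similarity_vector(X, train_mats, train_labels, classes):
    """Average Stein-divergence-based affinity of X to each class (assumed form)."""
    s = []
    for c in classes:
        members = [M for M, y in zip(train_mats, train_labels) if y == c]
        s.append(np.mean([np.exp(-stein_divergence(X, M)) for M in members]))
    return np.asarray(s)

# Toy usage: random SPD matrices stand in for covariance descriptors
rng = np.random.default_rng(1)
def rand_spd(d=5):
    A = rng.normal(size=(d, d))
    return A @ A.T + d * np.eye(d)

train_mats = [rand_spd() for _ in range(20)]
train_labels = np.repeat(np.arange(4), 5)          # 4 classes, 5 samples each
classes = np.unique(train_labels)
S_train = np.array([similarity_vector(X, train_mats, train_labels, classes)
                    for X in train_mats])
lda = LinearDiscriminantAnalysis().fit(S_train, train_labels)
query = rand_spd()
s_q = similarity_vector(query, train_mats, train_labels, classes)
print(lda.predict(s_q.reshape(1, -1)))
```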
Each dataset covers various aspects and challenges of the person re-identification task. The results are shown in terms of the Cumulative Matching Characteristic (CMC) curves, where each CMC curve represents the expectation of finding the correct match in the top n matches. In order to show the improvement caused by using similarity vectors in conjunction with linear discriminant analysis, we also evaluate the performance of directly using the Stein divergence in conjunction with a nearest neighbour classifier (ie. direct classification on manifolds, without creating similarity vectors). We refer to this approach as the direct Stein method. iLIDS Dataset The iLIDS dataset is a publicly available video dataset capturing real scenarios at an airport arrival hall under a multi-camera CCTV network. From these videos a dataset of 479 images of 119 pedestrians was extracted and the images were normalised to 128 × 64 pixels (height × width) [31]. The extracted images were chosen from nonoverlapping cameras, and are subject to illumination changes and occlusions [31]. We randomly selected N images for each person to build the gallery set, while the remaining images form the probe set. The whole procedure is repeated 10 times in order to estimate an average CMC curve. We compared the performance of the proposed RDC approach against the direct Stein method, as well as the algorithms described in Section 2 (SDALF and Context based) for a commonly used setting of N = 3. The results, shown in Fig. 2, indicate that the proposed method generally outperforms the other techniques. The results also show that the use of similarity vectors in conjunction with linear discriminant analysis is preferable to directly using the Stein divergence. Fig. 2. Performance on the iLIDS dataset [31] for N =3, using the proposed RDC method, the direct Stein method, SDALF [8], context based method [31]. HPE results for N =3 were not provided in [4]. ETHZ Dataset The ETHZ dataset [7,24] was captured from a moving camera, with the images of pedestrians containing occlusions and wide variations in appearance. Sequence 1 contains 83 pedestrians (4857 images), Sequence 2 contains 35 pedestrians (1936 images), and Sequence 3 contains 28 pedestrians (1762 images). We downsampled all the images to 64 × 32 (height × width). For each subject, the training set consisted of N randomly selected images, with the rest used for the test set. The random selection of the training and testing data was repeated 10 times. Results were obtained for the commonly used setting of N = 10 and are shown in Fig. 3. On sequences 1 and 2, the proposed RDC method considerably outperforms PLS, SDALF, HPE and the direct Stein method. On sequence 3, RDC obtains performance on par with SDALF. Note that the random selection used by the RDC approach to create the gallery is more challenging and more realistic than the data selection strategy employed by SDALF and HPE on the same dataset [4,8]. SDALF and HPE both apply clustering beforehand on the original frames, and then select randomly one frame for each cluster to build their gallery set. In this way they can ensure that their gallery set includes the keyframes to use for the multi-shot signature calculation. In contrast, we haven't applied any clustering for the proposed RDC method in order to be closer to real life scenarios. 
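The CMC curves reported above can be computed from a probe-gallery distance matrix in a few lines; the helper below is an illustrative sketch rather than the evaluation code used in the paper, and the toy inputs are invented.

```python
# Minimal sketch of a Cumulative Matching Characteristic (CMC) computation.
import numpy as np

def cmc_curve(dist, probe_ids, gallery_ids):
    """dist: (n_probe, n_gallery) distances; returns CMC over all ranks."""
    n_probe, n_gallery = dist.shape
    hits = np.zeros(n_gallery)
    for i in range(n_probe):
        order = np.argsort(dist[i])                   # best match first
        rank = np.where(gallery_ids[order] == probe_ids[i])[0][0]
        hits[rank:] += 1                              # correct match within top-k
    return hits / n_probe

# Toy usage
rng = np.random.default_rng(2)
gallery_ids = np.arange(10)
probe_ids = np.arange(10)
dist = rng.random((10, 10)) - 0.5 * np.eye(10)        # make true matches closer
print(cmc_curve(dist, probe_ids, gallery_ids)[:5])
```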
CONCLUSION We have proposed a novel appearance based person re-identification method comprised of: (i) representing each image as a compact covariance matrix constructed from feature vectors extracted from foreground pixels, (ii) treating covariance matrices as points on Riemannian manifolds, (iii) representing each manifold point as a vector of similarities to class representers with the aid of the recently introduced Stein divergence, and (iv) using a discriminative mapping of similarity vectors for final classification. The use of similarity vectors is in contrast to the traditional approach of analysing manifolds via embedding them into tangent spaces. The latter might result in inaccurate modelling, as the structure of the manifolds is only partially taken into account [12,13]. Person re-identification experiments on the iLIDS [31] and ETHZ [7,24] datasets show that the proposed approach outperforms several recent methods, such as Histogram Plus Epitome [4], Partial Least Squares [24], and Symmetry-Driven Accumulation of Local Features [8]. ACKNOWLEDGEMENTS NICTA is funded by the Australian Government as represented by the Department of Broadband, Communications and the Digital Economy, as well as the Australian Research Council through the ICT Centre of Excellence program. Fig. 3. Performance on the ETHZ dataset [24] for N = 10, using Sequences 1 to 3 (top to bottom). Results are shown for the proposed RDC method, direct Stein method, HPE [4], PLS [24] and SDALF [8].
2014-03-03T22:44:17.000Z
2013-09-15T00:00:00.000
{ "year": 2014, "sha1": "1fed22edbd350a99842930e5877caaec0641f4a5", "oa_license": null, "oa_url": "https://research-repository.griffith.edu.au/bitstream/10072/400950/2/Sanderson457581-Accepted.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "1540480958cfd9d3aaa6e1345601c851f1d16023", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
245550893
pes2o/s2orc
v3-fos-license
The lncRNA H19/miR-766-3p/S1PR3 Axis Contributes to the Hyperproliferation of Keratinocytes and Skin Inflammation in Psoriasis via the AKT/mTOR Pathway Background The roles of long noncoding RNAs (lncRNAs) and microRNAs (miRNAs) in the pathogenesis of psoriasis are well studied. However, little is known about how specific lncRNAs and miRNAs affect the mechanism of psoriasis development and which pathways are involved. Objectives To explore the role of the lncRNA H19/miR-766-3p/S1PR3 axis in psoriasis. Methods miRNA and lncRNA microarrays were performed using IL-22-induced HaCaT cells and psoriatic lesions, respectively. Fluorescence in situ hybridization and quantitative reverse-transcriptase polymerase chain reaction were used to detect the expression of miR-766-3p and lncRNA H19. Luciferase reporter assays were used to identify miR-766-3p/lncRNA H19 and miR-766-3p/S1PR3 combinations. CCK-8 and ELISA were performed to evaluate the proliferation of keratinocytes and the secretion of pro-inflammatory cytokines. Western blot analysis was used to detect the expression of S1PR3 and its downstream effector proteins. Results MiR-766-3p was upregulated in both HaCaT cells treated with the psoriasis-related cytokine pool (IL-17A, IL-22, IL-1 alpha, oncostatin M, and TNF-alpha) and tissues. Overexpression of miR-766-3p promoted keratinocyte proliferation and IL-17A and IL-22 secretion. LncRNA H19 and S1PR3 were demonstrably combined with miR-766-3p by luciferase reporter assay. LncRNA H19 repressed proliferation and inflammation, and this repression was counteracted by miR-766-3p. The AKT/mTOR pathway mediated the effects of the lncRNA H19/miR-766-3p/S1PR3 axis on proliferation and inflammation. Conclusions We established that downregulation of lncRNA H19 promoted the proliferation of keratinocytes and skin inflammation by up-regulating miR-766-3p expression levels and inhibiting activation of S1PR3 through the AKT/mTOR pathway in psoriasis. Introduction Psoriasis is a chronic inflammatory, immune-mediated disease that manifests in the skin, joints or other systems, especially the cardiovascular system. It is associated with both physical and psychological burdens [1]. It is characterized by erythaematous scaly patches or plaques [2]. Psoriatic skin lesions are the result of an intricate interplay between the innate and adaptive components of the immune system [1,3]. Recent studies have shown that epigenetics is an important component of psoriasis aetiology in general [2][3][4]. However, the exact underlying mechanisms regulating immunological dysfunction have not been completely elucidated. Long noncoding RNAs (lncRNAs) are a group of RNAs that are longer than 200 nucleotides and have no ability to encode proteins [5]. However, lncRNAs play an important role in the control of cell fates during development and cause some human disorders by facilitating chromosomal deletions and translocations [6]. LncRNA H19 has been widely investigated in many diverse disorders. For instance, lncRNA H19 plays crucial roles in several inflammatory diseases, such as cardiovascular disease, atherosclerosis, osteoarthritis, and collagen-induced arthritis [7][8][9][10]. Additionally, specific lncRNAs, such as MSX2P1, MIR31HG, and PRINS, have been shown to participate in the regulation of psoriasis by influencing the hyperproliferation of keratinocytes and their inflammatory capabilities [11][12][13]. Our previous study reported the differential expression of lncRNAs in biopsies obtained from psoriasis patients and healthy volunteers using a microarray [14]. 
Interestingly, lncRNA H19 was found to be downregulated (0.248121-fold) in that study. However, whether lncRNA H19 influences psoriasis remains unknown. MicroRNAs (miRNAs), noncoding small RNAs (~22 nucleotides), were discovered to play an important role in many disorders, regulating diverse biological processes such as development and cell apoptosis, proliferation and differentiation [15]. Many miRNAs were found to have effects on psoriasis. For example, the downregulation of miR-145-5p expression contributes to hyperproliferation and inflammation in psoriasis [16]. In contrast, miR-744-3p promoted keratinocyte proliferation while inhibiting their differentiation [17]. Recently, convincing evidence has indicated that miR-766-3p can induce or inhibit multiple human cancers and suppress inflammatory responses [18][19][20]. The role of miR-766-3p in psoriasis, however, remains elusive. In this study, we demonstrated that lncRNA H19 was markedly down-regulated in tissue samples and in HaCaT cells treated with IL-17A, IL-22, IL-1 alpha, oncostatin M, and TNF-alpha. LncRNA H19 might serve as a sponge for miR-766-3p to up-regulate S1PR3 levels and regulate the proliferation of keratinocytes and skin inflammation via the AKT/mTOR pathway in psoriasis. Therefore, our findings may provide novel evidence for clinical therapeutic strategies for psoriasis treatment. Patients and Sample Collection. Six specimens were taken from patients diagnosed with psoriasis vulgaris at the Qilu Hospital of Shandong University who had received no systemic treatments, phototherapy or topical drugs for at least 3 months before the skin biopsies. Healthy skin from surgical operations was used as a control. The study was approved by the ethics committee of Shandong University, China, and all patients provided written informed consent. 2.6. Luciferase Reporter Assay. The recombinant pmirGLO-H19-MUT (mutant) and pmirGLO-H19-WT (wild-type) plasmids and the pmirGLO-S1PR3-MUT (mutant) and pmirGLO-S1PR3-WT (wild-type) plasmids were purchased from RiboBio (Guangzhou, China). Cells were cotransfected with a miR-766-3p mimic or negative control (50 nM) and pmirGLO-H19-WT or pmirGLO-H19-MUT, or pmirGLO-S1PR3-WT or pmirGLO-S1PR3-MUT (250 ng per well), with Lipofectamine 2000. The luciferase level was detected using the Dual-Luciferase Reporter Assay System (Promega, U.S.A.) after 48 h. Quantitative Reverse Transcriptase Polymerase Chain Reaction (qRT-PCR). Total RNA was extracted from the samples described above using TRIzol reagent (Invitrogen, U.S.A.). The miRNA and mRNA expression were detected according to the manuals of the All-in-One miRNA qRT-PCR Detection System (GeneCopoeia, U.S.A.) and the PrimeScript RT reagent kit with gDNA Eraser and TB Green Premix Ex Taq II (Takara, Japan). The expression levels were normalized to U6 or GAPDH. The sequences of the miR-766-3p and U6 primers were designed by GeneCopoeia (HmiRP0794 and HmiRQP9001). LncRNA and mRNA sequences used for qPCR are shown in Table 1. We conducted independently repeated experiments at least three times and determined expression by the 2^(−ΔΔCt) method. 2.11. Statistical Analysis. Data from at least three independent experiments were presented as the mean ± standard deviation (SD). Comparisons were performed using Student's t-test, and p < 0.05 was considered statistically significant. microRNA Microarray Validation and Target Prediction. 
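The relative-expression calculation mentioned in the statistical analysis follows the standard 2^(−ΔΔCt) method; a minimal sketch is shown below, with invented Ct values used purely for illustration.

```python
# Hedged sketch of the 2^(-ΔΔCt) relative-expression calculation.
def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_ref_ctrl):
    delta_ct_sample = ct_target - ct_reference        # e.g. miR-766-3p vs U6
    delta_ct_control = ct_target_ctrl - ct_ref_ctrl
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2.0 ** (-delta_delta_ct)

# Example (invented Ct values): target amplifies ~1.35 cycles earlier in the
# treated sample relative to control, giving roughly a 2.5-fold increase.
print(relative_expression(24.1, 18.0, 25.45, 18.0))
```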
According to the results of the microarrays in our previous studies [21,22], the expression of miR-766-3p was increased 2.548-fold, and we verified its expression in psoriasis tissues and M5-stimulated HaCaT cells by qRT-PCR (Figures 1(a) and 1(b)). We found that the expression of miR-766-3p was significantly upregulated in psoriasis tissue samples versus normal control tissues. Similarly, miR-766-3p expression in HaCaT cells stimulated with M5 was significantly higher than that in controls. We performed fluorescence in situ hybridization (FISH) to detect the expression of miR-766-3p in the epidermis of psoriasis tissues, and found that miR-766-3p expression was upregulated in psoriatic tissues versus control tissues (Figure 1(c)). 3.3. MiR-766-3p Negatively Regulates lncRNA H19. To investigate the mechanism of miR-766-3p in psoriasis, we predicted its potential target lncRNAs using LncBase v.2 and the microarray we analysed previously (Figure 3(a)) [14]. MiRNAs usually work through a ceRNA network, which functions as a sponge [23]. There were eight lncRNAs identified as potential binding targets of miR-766-3p that are downregulated in psoriasis. We then detected the expression of lncRNA H19, which demonstrated a 0.248121-fold decrease in expression in psoriasis. We found that lncRNA H19 expression was decreased in M5-induced HaCaT cells by qRT-PCR (Figure 3(b)). The FISH results showed that the expression of lncRNA H19 was downregulated in psoriatic skin (Figure 3(c)). The qRT-PCR results indicated that miR-766-3p overexpression considerably reduced lncRNA H19 expression, and downregulation of miR-766-3p expression increased the expression of lncRNA H19 (Figure 3(d)). The overexpression of lncRNA H19 via the PGMLV-H19 plasmid was successful (Figure 3(e)) and correspondingly reduced the expression of miR-766-3p (Figure 3(f)). The potential binding sites between lncRNA H19 and miR-766-3p are shown in Figure 3(g), and a luciferase reporter assay was performed to explore the relationship between miR-766-3p and lncRNA H19. Our results showed that the relative luciferase activity was decreased by the miR-766-3p mimic in the wt lncRNA H19 group and showed no significant difference in the mut lncRNA H19 group (Figure 3(h)). These results demonstrated that lncRNA H19 can directly combine with miR-766-3p. Discussion Psoriasis is a chronic inflammatory, immune-mediated disease [1] which is induced by many pro-inflammatory cytokines, such as IL-17A, TNF-alpha, IL-22 and IL-23 [27]. The IL-23/IL-17A axis plays a central role in the development of psoriasis. The combination of IL-17A, IL-22, IL-1 alpha, oncostatin M, and TNF-alpha (M5) inhibits the differentiation of keratinocytes and prolongs keratinocyte life [28]. Several studies have suggested that miRNAs might play key roles in psoriasis, including in proliferation regulation and cytokine secretion. In the future, miRNAs may be identified as biomarkers and treatment targets for psoriasis [29]. The AKT/mTOR pathway plays an important role in epidermal homeostasis control. AKT promotes cell proliferation and inhibits apoptosis. mTOR, downstream of AKT, facilitates cell proliferation, inhibits maturation [30] and regulates the release of pro-inflammatory mediators of keratinocytes [31]. In this study, we found that elevated miR-766-3p expression was associated with psoriasis. Overexpression of miR-766-3p was identified to facilitate the proliferation of keratinocytes and their inflammatory capabilities. 
LncRNA H19 and S1PR3 were the upstream and downstream targets of the miR-766-3p sponge. MiR-766-3p antagonized the negative regulation of proliferation and inflammation induced by lncRNA H19. The lncRNA H19/miR-766-3p/S1PR3 axis might act via the AKT/mTOR pathway. Our miRNA microarray showed that the expression of 20 different miRNAs was altered >2-fold, including 15 upregulated and 5 downregulated miRNAs [22]. MiR-766-3p expression increased 2.548-fold in the microarray and is involved in many proliferative cancers and inflammatory disorders [18][19][20]. We conducted experiments to verify the expression and function of miR-766-3p in a psoriatic cell model. We found that miR-766-3p expression increased in paraffin-embedded psoriasis tissue by FISH and that miR-766-3p RNA levels were increased by qRT-PCR, which verified the result of the miRNA microarray. Next, we tested miR-766-3p expression in M5-induced HaCaT cells and found the same result. By a CCK-8 proliferation assay, we corroborated that miR-766-3p positively regulates keratinocyte proliferation. The lncRNA-miRNA-mRNA network may play an important role in psoriasis, and some such interactions have been predicted or proven [32]. Furthermore, we wanted to discuss the ceRNA-mediated network of miR-766-3p, which acts as a molecular sponge. We predicted the potential lncRNAs that combine with miR-766-3p and searched for lncRNAs that were also identified in the microarray on psoriasis tissue. Downregulation of lncRNA H19 in psoriasis tissues compared with normal tissues was observed in profiling studies [33]. We confirmed that the expression of lncRNA H19 was downregulated in psoriatic tissue and cells. Moreover, the positive and negative regulatory effects of miR-766-3p and lncRNA H19 were reciprocal. LncRNA H19 is one of the best-understood lncRNAs and plays a vital role in inflammatory diseases such as cardiovascular disease, atherosclerosis, osteoarthritis and collagen-induced arthritis [7][8][9][10]. LncRNA H19 promotes cell proliferation in several disorders, such as pancreatic cancer, hepatocellular carcinoma and bladder cancer [34][35][36]. In contrast, lncRNA H19 inhibits cell proliferation in pituitary tumours [37]. LncRNA H19 competes with coding Dsg1 for miR-130b-3p, thereby leading to increased Dsg1 expression, which promotes keratinocyte differentiation [38]. We found that overexpression of lncRNA H19 impeded cell proliferation and IL-17A and IL-22 secretion. We predicted that lncRNA H19 affects psoriasis by sponging miR-766-3p, thereby influencing proliferation and inflammation. S1PR3 was predicted to be the target gene of miR-766-3p; it is associated with the psoriasis-related Ras/pERK and PI3K/AKT pathways [24,25] and plays a pro-inflammatory role [26]. We confirmed that S1PR3 was downregulated in psoriasis tissues and M5-induced HaCaT cells. MiR-766-3p negatively regulated S1PR3. Subsequent functional studies disclosed the function of the lncRNA H19/miR-766-3p/S1PR3 axis. We found that miR-766-3p reversed the effect of overexpression of lncRNA H19 on S1PR3 expression. Meanwhile, the upregulation of miR-766-3p could rescue the proliferative and inflammatory effects exerted on keratinocytes by lncRNA H19 overexpression. The lncRNA H19/miR-766-3p/S1PR3 axis affected the activation of the AKT/mTOR pathway, the proliferation of keratinocytes and skin inflammation. 
In conclusion, we have established that downregulation of lncRNA H19 promoted the proliferation of keratinocytes and skin inflammation by up-regulating miR-766-3p expression levels and inhibiting activation of S1PR3 through the AKT/mTOR pathway in psoriasis. Our findings help to better understand the pathogenesis of psoriasis and may provide molecular bases for the treatment of psoriasis. Data Availability The data used to support the findings of this study are available from the corresponding author upon request.
2021-12-30T16:04:32.359Z
2021-12-28T00:00:00.000
{ "year": 2021, "sha1": "7d9cac28624d69dabd1742d2c3430ede32fc0fca", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1155/2021/9991175", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0ae6caae663f811d834d99349b84fbc1c65ec3a5", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [] }
76203009
pes2o/s2orc
v3-fos-license
TO STUDY THE RELATION BETWEEN DIABETICS AND OBESITY AMONG PATIENTS WITH UNCONTROLLED HYPERTENSION BACKGROUND: To study the relation between diabetes and obesity among patients with uncontrolled hypertension. METHODS: The study was conducted in the department of General Medicine of Yenepoya Medical College, during May 2013 to August 2013. All the patients with hypertension who provided informed written consent were recruited to the study (n = 300). A pretested interviewer-administered questionnaire was used for data collection from all the subjects. RESULTS: The mean age of the study population was 52 ± 11.2 years. Among the study population 60% (180) were males and 40% (120) were females. The means of the average systolic and diastolic blood pressures (BP) were 130.42 ± 13.81 mmHg and 85.03 ± 7.22 mmHg respectively. Uncontrolled BP was present in 45.2% (n = 136) of patients, of which resistant hypertension was present in 24% (n = 72). Uncontrolled BP was due to therapeutic inertia in 25.7% of the study population. Those with diabetes mellitus, obesity (BMI > 27.5 kg/m2) and those who were older than 55 years were significantly more common in the resistant hypertension group than in the non-resistant hypertension group. CONCLUSION: A significant proportion of the hypertensive patients had uncontrolled hypertension. Nearly 24% of the population was suffering from resistant hypertension, which was significantly associated with the presence of obesity and diabetes mellitus. INTRODUCTION: Hypertension is a common non-communicable disease that is prevalent worldwide; it leads to numerous disabling complications such as stroke, atherosclerosis, retinopathy, chronic kidney disease and cardiac failure. The majority of patients (>90%) with hypertension suffer from essential or primary hypertension, while the remaining minority have secondary hypertension. {1} Long-term optimization and control of blood pressure is essential to avoid morbidity and mortality in these patients. Resistant hypertension is defined as "suboptimal control of blood pressure despite using three antihypertensive agents inclusive of a diuretic, and patients who need 4 or more drugs to control blood pressure". {2} Studies have shown that older age, obesity, excessive use of alcohol, and high sodium intake are strongly correlated with poor control of hypertension. {3} Furthermore, patients with uncontrolled blood pressure are more likely to have target organ damage and have higher cardiovascular risks than patients with well controlled blood pressure. {4} Uncontrolled blood pressure affects patients' mental, physical and social well-being, while also increasing the health care expenditure of a country. Cardiovascular and cerebrovascular diseases, for which hypertension is an important risk factor, are the leading causes of hospital deaths in India. {5} The present study aims to identify the relation between diabetes and obesity among patients with uncontrolled hypertension. METHODOLOGY: This descriptive cross-sectional study was conducted over a period of 3 months from May 2013 to August 2013. All the patients with hypertension who provided informed written consent were recruited for the study (n = 300). A pre-tested interviewer-administered questionnaire was used for data collection from all the subjects. STUDY POPULATION AND SAMPLING: The study population was selected from the cardiology outpatient clinic. 
300 subjects were selected from the hypertensive patients visiting the cardiology clinic for follow-up. Random selection was done from the list of patients who visited the clinic. Patients who gave informed written consent were included in the study. A few patients were excluded from the study due to poor follow-up. STUDY INSTRUMENT AND DATA COLLECTION: A pre-tested, expert-validated, interviewer-administered questionnaire was used for data collection from all the patients. The following data were collected: socio-demographic details, duration of disease, medication history, risk factors, complications and other co-morbidities. The following risk factors were evaluated: history of smoking, alcohol consumption, drugs (Non-Steroidal Anti-Inflammatory Drugs, Steroids and Oral Contraceptive Pills), family history, high salt intake and presence of obesity and diabetes mellitus. The antihypertensive drugs currently used by the patients were recorded according to their classes, and drugs used for other co-morbidities were also documented. Patients' compliance with treatment was also evaluated. Hypertension treatment targets were < 140/90 mmHg for patients without any comorbidities and < 130/90 mmHg for patients with diabetes mellitus and renal disease. {6} Obesity was defined as BMI ≥ 27.5 kg/m2, based on WHO criteria for Asian populations. {7} High salt intake was defined as an intake of sodium > 3 mg/day based on Food Frequency Questionnaires. Current cigarette smokers were defined as adults aged ≥18 years who reported having smoked ≥100 cigarettes during their lifetime and who now smoke every day or some days. Current alcohol consumption was defined as ≥ 1 alcoholic drink per month. Presence of diabetes mellitus, ischemic heart disease, chronic kidney disease and hyperlipidaemia were confirmed by perusal of previous clinic records of the patients. RESULTS: Three hundred and forty adults with hypertension were invited for the study, of whom 300 consented to participate in the study and completed the questionnaires. Mean age was 52 ± 11.2 years (range 35-80), and 60% (180) were males. The majority of the study population (70.5%, n = 210) were below the age of 55 years. The majority of the study population (n = 216, 72%) had one or more co-morbidities, and ischemic heart disease (n = 180, 60.3%), hyperlipidaemia (n = 174, 58.0%) and diabetes mellitus (n = 153, 51.3%) were the commonest co-morbidities. The study results show that either systolic (≥140 mmHg, or >130 mmHg in diabetics) or diastolic (≥90 mmHg, or >80 mmHg in diabetics) blood pressure values, measured during two recent clinic visits one month apart, were high at both visits in 39.1% (n = 117) of patients. Among these 117 patients, 62 (52.4%) were using 3 antihypertensive drugs including a diuretic. Another 11% (n = 33) of patients, who had normal blood pressures, were using 4 or more antihypertensive drugs. {8} Among the study population the most commonly used drugs were anti-platelets (72.6%). The most commonly used anti-hypertensive drugs were ACE inhibitors (54.5%) followed by β-blockers (51.6%) and Calcium Channel Blockers (CCBs) (47.3%). In the resistant hypertension group, the most commonly used anti-hypertensive drugs were β-blockers (71.7%) followed by ACE inhibitors (69.8%) and CCBs (54.7%). {9} The usage of ACE inhibitors, α-blockers, β-blockers, furosemide, spironolactone and thiazide diuretics was significantly greater in the resistant hypertension group than in the non-resistant hypertension group. 
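To make the definitions above concrete, the following Python sketch applies the stated BP targets and the resistant-hypertension criteria to a single patient record. It is an illustrative helper under the stated assumptions, not the study's analysis code, and the parameter names are invented.

```python
# Sketch: uncontrolled BP relative to the comorbidity-specific target, and
# resistant hypertension (uncontrolled on >=3 drugs including a diuretic,
# or needing >=4 drugs).
def bp_target(has_dm_or_renal_disease):
    return (130, 90) if has_dm_or_renal_disease else (140, 90)

def is_uncontrolled(sbp, dbp, has_dm_or_renal_disease):
    tgt_s, tgt_d = bp_target(has_dm_or_renal_disease)
    return sbp >= tgt_s or dbp >= tgt_d

def is_resistant(sbp, dbp, n_drugs, on_diuretic, has_dm_or_renal_disease):
    uncontrolled = is_uncontrolled(sbp, dbp, has_dm_or_renal_disease)
    return (uncontrolled and n_drugs >= 3 and on_diuretic) or (n_drugs >= 4)

# Example patient (invented): diabetic, 142/86 mmHg on 3 drugs with a diuretic
print(is_resistant(142, 86, n_drugs=3, on_diuretic=True,
                   has_dm_or_renal_disease=True))   # True
```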
DISCUSSION: The proportion of poorly controlled hypertensive patients with suboptimal drug management was 27.8%. This reflects the physicians' failure to increase the intensity of treatment among patients with uncontrolled hypertension, a phenomenon known as therapeutic inertia. Distinguishing therapeutic inertia from other causes of uncontrolled hypertension is an important initial step to identify strategies to improve the care offered to these patients. The majority of patients in both the resistant (79.2%) and non-resistant (63.8%) hypertension groups were obese. Our results also demonstrate that obesity was a significant factor associated with resistant hypertension in the logistic regression analysis. Obesity is associated with more severe hypertension, a need for an increased number of antihypertensive medications, and an increased likelihood of never achieving blood pressure control. This epidemic of obesity and obesity-related hypertension is paralleled by an alarming increase in the incidence of diabetes mellitus and chronic kidney disease. We observed a statistically significant relationship between diabetes mellitus and resistant hypertension in the logistic regression analysis. A wide range of antihypertensive agents is available for the treatment of hypertension. Among them, diuretics play a major role in blood pressure control. However, most of the patients (63.5%) in our study sample were not on any diuretic, including furosemide, spironolactone and thiazide diuretics. It has been said that combinations of the thiazide-type and potassium-sparing subclasses may be highly effective, providing nearly optimal therapy for some, and might be considered more often in the treatment of hypertension. ACE inhibitors are seen as more appropriate for first-line use when other high-risk conditions are present, such as diabetes. It is clear that they play an important role in the treatment of hypertension. In our study sample ACE inhibitors were the most commonly used anti-hypertensive drug. There were 26 patients on sole ACE inhibitor therapy, of whom 12 had diabetes mellitus and 8 had uncontrolled blood pressure. STUDY LIMITATION: This study has limitations: investigations of resistant hypertension are limited by the high cardiovascular risk of patients within this subgroup, which generally precludes safe withdrawal of medications; the presence of multiple disease processes (e.g., sleep apnea, diabetes, chronic kidney disease, atherosclerotic disease) and their associated medical therapies, which confound interpretation of study results; and the difficulty in enrolling large numbers of study participants. Expanding our understanding of the causes of resistant hypertension, and thereby potentially allowing for more effective prevention and/or treatment, will be essential to improve the long-term clinical management of this disorder. CONCLUSIONS: In our study a significant number of hypertensive patients were identified as having uncontrolled hypertension. This study identified increasing age, diabetes and obesity as three of the strongest risk factors for uncontrolled hypertension; the incidence of resistant hypertension will likely increase as the population becomes older and heavier. Knowing the prevalence of these co-morbidities is important for determining the size of the population that may benefit from strategies to reduce blood pressure. 
{2-3} Therapeutic inertia seems to contribute significantly to the presence of uncontrolled blood pressure, and its role and causative factors need further evaluation.
2019-03-13T13:30:39.956Z
2015-02-17T00:00:00.000
{ "year": 2015, "sha1": "fad4df0070b999495eb596ce14d25c64da3f272a", "oa_license": null, "oa_url": "https://doi.org/10.14260/jemds/2015/356", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "bd05e25e78a23711d8137ba0dbdddd15b9a9157e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
7134964
pes2o/s2orc
v3-fos-license
Functional electrical stimulation of gluteus medius reduces the medial joint reaction force of the knee during level walking Background By altering muscular activation patterns, internal forces acting on the human body during dynamic activity may be manipulated. The magnitude of one of these forces, the medial knee joint reaction force (JRF), is associated with disease progression in patients with early osteoarthritis (OA), suggesting utility in its targeted reduction. Increased activation of gluteus medius has been suggested as a means to achieve this. Methods Motion capture equipment and force plate transducers were used to obtain kinematic and kinetic data for 15 healthy subjects during level walking, with and without the application of functional electrical stimulation (FES) to gluteus medius. Musculoskeletal modelling was employed to determine the medial knee JRF during stance phase for each trial. A further computer simulation of increased gluteus medius activation was performed using data from normal walking trials by a manipulation of modelling parameters. Relationships between changes in the medial knee JRF, kinematics and ground reaction force were evaluated. Results In simulations of increased gluteus medius activity, the total impulse of the medial knee JRF was reduced by 4.2 % (p = 0.003) compared to control. With real-world application of FES to the muscle, the magnitude of this reduction increased to 12.5 % (p < 0.001), with significant inter-subject variation. Across subjects, the magnitude of reduction correlated strongly with kinematic (p < 0.001) and kinetic (p < 0.001) correlates of gluteus medius activity. Conclusions The results support a major role for gluteus medius in the protection of the knee for patients with OA, establishing the muscle’s central importance to effective therapeutic regimes. FES may be used to achieve increased activation in order to mitigate distal internal loads, and much of the benefit of this increase can be attributed to resulting changes in kinematic parameters and the ground reaction force. The utility of interventions targeting gluteus medius can be assessed in a relatively straightforward way by determination of the magnitude of reduction in pelvic drop, an easily accessed marker of aberrant loading at the knee. Background There has been increasing recognition of a biomechanical basis for joint pathology in osteoarthritis (OA), and with this hope that a new generation of disease-modifying therapies might follow. Of particular significance is the emergence of aberrant joint loading as driver of disease. In healthy individuals, the medial compartment of the tibiofemoral joint bears 2.5 times the load borne by the lateral compartment [1]; in patients, it is the usual site of manifestation of OA of the knee [2]. Once OA is established, the external adduction moment of the knee (EAM), a more readily determined correlate of the internally acting medial knee joint reaction force (JRF), has been shown to predict disease severity [3] and risk of progression [4], suggesting its utility as a clinical biomarker targeted for reduction. When working in the clinical domain there is a clear need for accurate measures of internally acting forces, but studies have shown significant inter-individual variation in the relationship between the EAM and the medial knee JRF [5]. 
Previously, measurement of internal forces had been possible only through the use of instrumented internal prostheses [6,7], but advancements in computational musculoskeletal modelling now enable their reliable determination non-invasively, facilitating wide-scale data collection. Musculoskeletal models perform inverse dynamics analysis within the context of a rigid body framework provided by the skeleton, where muscles act as force generators able to cause accelerations. Kinematic (joint angles and segment positions) and kinetic (ground reaction force (GRF)) data are used to formulate the equations of motion at each time step; solving these yields muscle, joint and ligament forces. The system is indeterminate -there are more solutions than there are equations -reflecting the large number of possible combinations of muscular activations that may result in a given movement. This necessitates the application of certain assumptions regarding muscular activations in a process known as optimisation, which models empirical ideas about how force generation might be shared optimally amongst muscles. In mathematical terms, optimisation generally involves finding solutions that minimise an objective function reflecting the total sum of some function of muscle stresses [8]. The mathematics has basis in physiology; the incorporation of muscle stress into the objective function results in force generation scaling roughly with muscle size, with the biggest muscles contributing most to the movement task. This means that individual muscle stresses are kept low and so their capacity for prolonged or repeated contraction high. Thus, as pointed out by Crowninshield and Brand [8], minimising the objective function is equivalent to maximising muscular endurance, and this feature of the model makes it particularly suited to the description of slow, repetitive activities such as walking. Accordingly, the force predictions obtained during walking have been well validated by comparison with values recorded from instrumented prostheses [9]. Whilst reliability and non-invasiveness are the principal advantages of the modelling technique, it also enables prediction of the effects of biomechanical manipulation for other system variables. For example, taking a subject's pre-existing kinematic and kinetic dataset of level walking, it is possible to modify the objective function to reflect a greater activation of selected muscle groups and observe the consequences for the medial knee JRF over the course of the dynamic activity. Because input kinematic and kinetic data are constant between control and test conditions, this affords an opportunity to consider the effects of changes to muscular activation patterns in isolation. It has been hypothesised that muscular factors drive joint pathology through the initiation and perpetuation of aberrant loading [6]. OA patients tend to suffer a number of muscular deficiencies, with loss of muscle strength out of proportion to loss of muscle cross-sectional area [10], a finding that illustrates the importance of factors beyond gross muscle structure in determining force generation. Muscle activation, a term that describes the coordinated neuromuscular process of aggregated myofibril recruitment leading to contraction, is one of these factors. 
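As a rough illustration of the optimisation step described above, the sketch below shares a required joint moment among a handful of muscle elements by minimising a sum of cubed muscle stresses subject to moment equilibrium. The cubic exponent is a commonly used choice rather than something specified here, and all numbers (moment arms, maximum forces, target moment) are invented for demonstration.

```python
# Sketch of static optimisation: minimise sum of cubed muscle "stresses"
# (force / max force) subject to reproducing a required joint moment.
import numpy as np
from scipy.optimize import minimize

r = np.array([0.04, 0.05, 0.03])            # moment arms (m), invented
f_max = np.array([1500.0, 2500.0, 900.0])   # maximum muscle forces (N), invented
target_moment = 60.0                        # net joint moment to reproduce (N m)

objective = lambda f: np.sum((f / f_max) ** 3)
constraints = {'type': 'eq', 'fun': lambda f: r @ f - target_moment}
bounds = [(0.0, fm) for fm in f_max]

res = minimize(objective, x0=f_max * 0.1, bounds=bounds, constraints=constraints)
print(np.round(res.x, 1))   # larger muscles take proportionally more of the load
```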
Asked to contract the quadriceps maximally, patients with knee OA fail to achieve the same proportion of maximum force generation as do controls [11], suggesting that muscle force production in patients with OA might be increased by strategies targeting increased activation alone, without the need for increased muscle size. Decreased activation in patients has been attributed variously to pain inhibition, lack of motivation and neuromuscular dysfunction. With significant pathology of the central nervous system, as arises from stroke for example, muscular activation can be reduced to an extent that gives rise to frank weakness. Here, treatments that aim to increase activation, and thus power, must bypass the damaged central nervous system, and one way in which this can be achieved is by direct activation of muscle and nerve with electrical current, a technique that has widespread clinical use under the term functional electrical stimulation (FES) for the correction of foot drop following stroke [12]. Of course, the same technique may be used to increase activation in those without defined neurological dysfunction, allowing direct control of muscular power and timing, although this effect remains to be exploited clinically. In the early stages of knee OA physiotherapy makes up a large part of treatment, comprising gait retraining regimes or strategies to increase muscular force generation, both of which may lead to alterations in muscular activation patterns [13]. These changes are made in the hope of favourable alterations to joint kinetics and resulting mitigation of pathological processes in articular cartilage. However, the optimal choice of muscle targets for inclusion in physiotherapy routines remains contentious. Traditionally, routines have focussed on those muscles at close proximity to the knee joint, but this is an approach that lacks a firm biomechanical basis; rather, a biomechanically sound analysis leads one to consider the hip musculature. As the controllers of frontal-plane pelvic motion the muscles around the hip, particularly gluteus medius, play a major role in stabilisation of the pelvis during gait [14]. Contraction of gluteus medius during stance phase limits contralateral pelvic drop; weakness of the muscle manifests as medial excursion of the bodily centre of mass as the pelvis drops towards the swing leg. To prevent instability this must be compensated, and this is achieved by shifting the torso in the opposite direction, towards the stance side, with each step, giving rise to the distinctive waddling motion known as the Trendelenburg gait, a clinical sign of gluteus medius weakness [15]. Replication of this increased trunk sway during gait in healthy individuals has been shown to reduce the EAM measured at the knee, confirming the importance of frontal plane motions of the central and upper body for the loads experienced more distally [16]. In patients with OA of the knee, pelvic kinetics have been shown to be of specific relevance to outcomes, with greater internal hip abduction moments during gait protecting against progression of disease from baseline to 18 months [17]. The results of clinical interventions specifically targeting the hip musculature have been mixed. In a large randomised controlled trial of hip muscle strengthening Bennell and colleagues [18] found no change in the EAM of patients with medial knee OA. 
Evidence for benefit is obtained from a more recent uncontrolled study by Thorp and colleagues investigating the use of intensive therapy directed towards gluteus medius in conjunction with more traditional quadriceps and hamstrings training [14]. Such a regime enabled subjects with OA of the knee to reduce the magnitude of the EAM by an average of 9 %. The discrepancy in outcomes might be accounted for by a greater emphasis on muscle activation in the latter study, where subjects were encouraged to learn the perceptions associated with contraction of gluteus medius. In Bennell's study, hip abductor strength was indeed improved with training, but the authors acknowledged that the increased muscular capacity may not have been activated effectively during walking, an analysis supported by the observation that subjects showed increased pelvic drop following training. Pelvic drop is increasingly recognised as an important kinematic variable affecting loading at the knee. Thorp's group posited a feasible biomechanical basis for the observed reduction in the EAM following training, hypothesising that decreased pelvic drop shifted the ground reaction force vector towards the stance leg (lateralising it), reducing the varus torque and thus the medial load acting at the knee. The present study aimed to test this hypothesis. In the first instance, a virtual simulation of increased gluteus medius activation was performed in a normal walking dataset, through a manipulation of model optimisation parameters as described above. Such simulation allowed an analysis of the effects of increased activation in isolation without kinematic or ground kinetic change. FES was then used in a novel application to experimentally augment muscular activation, allowing an analysis of the full effects of changes to muscular activation in a real-world setting. Throughout, the tools of musculoskeletal modelling were used to determine internally acting forces. Motion capture Healthy subjects were sought for participation in the experimental protocol. Emails were sent to various mailing lists of Imperial College London, and posters were placed around the college campus. The experimental setup for gait analysis comprised a set of ten Vicon optoelectronic cameras (Vicon MX system, Vicon Motion Systems Ltd, Oxford, UK) trained on a level walkway with a force plate (Kistler Type 9286AA, Kistler Instrumente AG, Winterthur, Switzerland) at its centre. Twelve single infrared reflective markers and two three-marker clusters were used to form a model of lower limb mechanics [8]. For each subject recorded data included a single static trial, where the subject stood motionless in neutral posture, and multiple dynamic trials. During the latter trials subjects walked normally across the walkway, taking several steps prior to landing with the right foot entirely on the force plate, and continuing for several steps thereafter; no instruction was given regarding walking speed. Between ten and 15 dynamic trials were recorded. Following completion of motion capture of normal walking trials, the skin of the right gluteal region was prepared with 70 % isopropyl alcohol skin wipes and FES gel electrodes (PALS® Platinum, Axelgaard Manufacturing Co., Ltd, Fallbrook, CA, USA) were placed on the area overlying the right gluteus medius, along its line of action. The muscle was located by palpation, within the triangle formed by the right anterior superior iliac spine, right posterior superior iliac spine and the greater trochanter of the right femur. 
Gluteus maximus was avoided, as was the area superior to the iliac crest. The electrodes were connected to a two-channel electrode stimulator (OCHS II, Odstock Medical Limited, Salisbury, UK) limited to a maximum current of 80 mA, with asymmetrical biphasic current waveforms of frequency 45 Hz. In order to check for effective electrode positioning the subject was asked to maintain left-legged stance while the stimulator was activated, with observation for ensuing abduction of the right leg to indicate contraction of gluteus medius. After confirming acceptability with the subject, the applied current was increased stepwise with actuation after each increase. The final stimulation current was chosen as that producing an abduction angle of 30-45°of the right leg whilst being tolerable. Subjects walked down the gangway with electrodes in situ, and FES was activated prior to right foot strike such that stimulation was maximal for the period of right stance. After several trial runs during which the subject became accustomed to the required timing, motion capture commenced. Again, between ten and 15 dynamic trials were recorded. Data processing Data quality was contingent upon adequate marker visualization to allow reconstruction of relatively uninterrupted three-dimensional marker trajectories. Three control and three FES trials were taken for analysis from the lattermost recorded trials that fulfilled these requirements, to allow for adaptation to the imposed patterns of muscular activation. Initial processing was performed using Vicon Nexus® (1.85) and Matlab® (2015a; The MathWorks Inc., Natick, MA, USA). Data filtering was performed in Matlab using a low-pass fourth order Butterworth filter [19]. A cutoff frequency of 4 Hz was used on the basis of previous work showing that most of the frequency spectrum of the angular signals during walking lies below this threshold [20] and because the impact phase, in which the majority of the highfrequency information is contained, was not of primary interest. Filtering was uniformly applied to kinematic and kinetic data to prevent the introduction of artifacts resulting from incongruences between ground reaction force data and segment accelerations [21]. An opensource musculoskeletal model, Freebody (v1.1) [22], was used for subsequent data processing to determine internal forces. The model's predictions of tibiofemoral JRF during gait have been validated using data from instrumented prostheses [23], and predicted muscle force waveforms have been shown to demonstrate high levels of concordance with known electromyography envelopes [22,24]. The first part of the operation of Freebody involved the determination of coordinates of internal points (for example, bony landmarks and musculotendinous intersections) in a subject-specific frame of reference. This was achieved by scaling using the measurements of gender-matched subjects for whom three-dimensional position data of internal points were available, obtained using magnetic resonance imaging (method described in [23]). Processed data were then taken as input by a Matlab® implementation of optimisation using static trial data for model calibration, to determine muscle, joint and ligamentous forces for each sampled frame. Simulation of increased gluteus medius activation Only normal walking trials were analysed for the purposes of simulation. 
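A minimal sketch of the zero-phase, low-pass Butterworth filtering step described above is given below. The 100 Hz sampling rate and the toy signal are assumptions for illustration, and whether the stated fourth order refers to the designed filter or the effective order after the forward-backward pass is not specified in the text.

```python
# Sketch of zero-phase low-pass Butterworth filtering (4th order, 4 Hz cutoff).
import numpy as np
from scipy.signal import butter, filtfilt

fs = 100.0                        # assumed capture rate (Hz)
cutoff = 4.0                      # cutoff frequency (Hz)
b, a = butter(N=4, Wn=cutoff / (fs / 2.0), btype='low')

t = np.arange(0, 2, 1 / fs)
raw = np.sin(2 * np.pi * 1.0 * t) + 0.2 * np.sin(2 * np.pi * 25.0 * t)
smooth = filtfilt(b, a, raw)      # forward-backward pass: zero phase shift
print(smooth[:5])
```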
Simulation of increased gluteus medius activation
Only normal walking trials were analysed for the purposes of simulation. Two separate optimisation routines were carried out for each trial: the first to reflect normal walking as the control condition, the second to simulate increased activation of gluteus medius through a manipulation of model parameters. For the former, the objective function used is given by equation (1); for the latter, by equation (2) (the weighted form is sketched at the end of this section). Here F_i is the force output of the i-th muscle element, F_i,max defines the i-th muscle element's force at maximum contraction and n is the total number of muscle elements (163) [25]. F_i,max is calculated for each element from peak cross-sectional area, which is determined using subject-specific measurements and anatomical dataset values. The effect of discriminating between muscles by use of the variable weighting, c, is to alter the relative contributions of those muscles to the force-generating task defined by the input kinematic and kinetic data. Those muscles to which a lower value is attributed are 'favoured' by the model in driving a given movement, because increasing their activation (increasing F_i in the above equations) contributes less to the objective function to be minimised, compared to increased activation of other muscles. With regard to the above equations, given the same kinematic and kinetic data, use of equation (2) results in relatively greater activity apportioned to gluteus medius, with corresponding changes in other model variables, including other muscular forces and joint reaction forces, such that overall model constraints are satisfied. The value of 0.25 for c was chosen with the aim of inducing an increase in muscular force production corresponding roughly to the increase in maximum gluteus medius strength observed following training regimes in patients with knee OA, reported variously at 13, 19 and 50 % [14,18], using data from a recent modelling study (unpublished observations, Xu R and Bull A). Application of the two different optimisation routines to each trial consecutively produced paired model outputs for each normal walking trial of each subject, representing the control and FES-simulated conditions. This distinction provided the basis for comparison during subsequent data analysis.
Experimental implementation of FES to gluteus medius
All trials (normal walking and FES) were taken for analysis. Freebody was used to calculate muscle and joint reaction forces by systematic application to data from each trial. For normal walking trials, the optimisation protocol employed equation (1) as the objective function. For FES trials, a modification was performed to account for the increased activation of gluteus medius, in a manner identical to that implemented in the preceding computer simulation, using equation (2) as the objective function. The distinction between normal walking trials and FES trials provided the basis for comparison during subsequent data analysis.
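The bodies of equations (1) and (2) did not survive extraction, but the surrounding text fixes their structure: a sum over normalised muscle-element forces, with the weighting c = 0.25 applied to the gluteus medius elements in the modified routine. The sketch below is a hedged reconstruction, assuming a squared cost (the actual exponent is not recoverable here) and an invented three-muscle system, purely to illustrate why a lower c 'favours' a muscle.

```python
# Hedged sketch of weighted static optimisation. The squared exponent,
# the 3-element system, the maximum forces and the moment arms are all
# illustrative assumptions, not values from the paper.
import numpy as np
from scipy.optimize import minimize

F_max = np.array([1500.0, 900.0, 1200.0])  # element 0 = gluteus medius
arms = np.array([0.05, 0.03, 0.04])        # moment arms (m), invented
demand = 60.0                              # required joint moment (N*m)

def solve(weights):
    # Objective: weighted sum of squared normalised muscle forces.
    cost = lambda F: np.sum(weights * (F / F_max) ** 2)
    # Constraint: muscle forces must reproduce the joint demand.
    cons = [{"type": "eq", "fun": lambda F: arms @ F - demand}]
    res = minimize(cost, x0=F_max / 2.0,
                   bounds=[(0.0, f) for f in F_max], constraints=cons)
    return res.x

control = solve(np.array([1.0, 1.0, 1.0]))   # equation (1): all c = 1
fes_sim = solve(np.array([0.25, 1.0, 1.0]))  # equation (2): c = 0.25 for GMed
print(control.round(1), fes_sim.round(1))
```

Run with these placeholder numbers, the weighted routine shifts force towards the first element, mirroring the "relatively greater activity apportioned to gluteus medius" described above.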
Statistical methods
Statistical analyses were performed using Matlab® and applied consistently to data obtained from both the modelling and experimental studies. Time-integrated measures (impulses) were determined using the trapezoidal method of numerical integration [26] (illustrated below), and all impulses were normalised to bodyweight to facilitate inter-subject comparison. Analysis of the medial knee JRF impulse was performed separately for mid-stance (17–50 % of stance) and terminal stance (51–83 %), and for the whole of stance phase. Early stance phase and the pre-swing phase were omitted in order to reduce the number of statistical tests performed and because these phases do not normally contain either of the two peaks of the stereotypical JRF-time curve within them. Components of the GRF were transformed into a local coordinate frame of reference defined by the evolving lower limb geometry in each frame, and impulses were calculated for these transformed forces. Thus the vertical component was defined collinear with the long axis of the shank, from the mid-point of the ankle to the mid-point of the tibial plateau. An intermediary plane containing this vector and that passing through the long axis of the foot, from mid-ankle to the head of the second metatarsal, was determined and used to calculate the anteroposterior component, defined as a vector orthogonal to the vertical component and lying within this plane. Finally, the mediolateral component was defined by a vector orthogonal to the vertical and anteroposterior components. In determining overall differences between conditions for joint reaction and muscular forces, GRF components and kinematic parameters, normality of the underlying distributions was assumed and two-way analyses of variance (ANOVAs) with repeated measures were performed. These tests took all individual un-averaged trial data into account (two conditions with three replications for each, per subject), with nominal variables given by subject and condition (condition defined as normal walking, FES-simulated or FES). Three different comparisons were made: normal walking versus FES-simulated, normal walking versus FES, and FES-simulated versus FES. A significance threshold of p = 0.05 was applied throughout. For intervariable correlations, coefficients of determination were calculated (presented as adjusted R² values) along with estimates and 95 % confidence intervals for beta coefficients, with p values determined by ANOVA.
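The impulse metric defined above is straightforward to compute. The sketch below uses an invented single-trial force trace and an assumed stance duration to show the trapezoidal integration over the stated phase windows.

```python
# Sketch of the impulse calculation: trapezoidal integration [26] of an
# invented medial knee JRF trace, normalised to bodyweight (BW).
import numpy as np

BW = 70.0 * 9.81                       # assumed bodyweight (N)
stance_s = 0.65                        # assumed stance duration (s)
pct = np.linspace(0.0, 100.0, 101)     # % of stance
t = pct / 100.0 * stance_s             # time (s)
jrf = BW * (2.0 * np.sin(np.pi * pct / 100.0) + 0.3)  # placeholder trace

def impulse(lo, hi):
    """Impulse over a stance sub-phase, in bodyweight-seconds."""
    m = (pct >= lo) & (pct <= hi)
    return np.trapz(jrf[m], t[m]) / BW

print("mid-stance (17-50%):      %.3f BW*s" % impulse(17, 50))
print("terminal stance (51-83%): %.3f BW*s" % impulse(51, 83))
print("whole stance:             %.3f BW*s" % impulse(0, 100))
```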
Results
Of the 16 healthy subjects who agreed to participate in the experimental protocol, one male subject was excluded on the basis of significant recent lower limb injury, leaving 15 who underwent testing (13 male, two female, age range 21–28, body mass index 21.6 ± 2.5 kg m−2). The range of stimulation currents administered to subjects varied from 31 mA to 80 mA (mean 52 mA). All participants tolerated FES well, and optimisation of both types was successful for all trials of all 15 subjects.
Simulation of increased gluteus medius activation
In simulations of increased gluteus medius activation, the total impulse of the medial knee JRF was reduced by 4.2 % on average (p = 0.003) compared to normal walking, with a 6.5 % decrease in the magnitude of the mid-stance impulse (p = 0.001) and a 3.9 % decrease in the terminal-stance impulse (p = 0.070). Gluteus medius impulse was 33 % greater with FES simulation (p < 0.001) compared to normal walking (Fig. 1).
Experimental implementation of FES to gluteus medius
Medial knee JRF
Thirteen of 15 subjects showed reductions in the medial knee JRF impulse in FES trials compared to normal walking. There were statistically significant decreases at mid-stance (p < 0.001), terminal stance (p < 0.001) and for the whole of stance (p < 0.001) when these phases were analysed independently. The average reduction in the total impulse across all subjects was 0.15 bodyweight-seconds, equivalent to a 12.5 % decrease from control; see Fig. 2. Mean reductions in peak force with FES were 13.8 % for the first peak and 18.4 % for the second peak of the medial knee JRF (p < 0.001 in each case) (Table 1).
Muscular forces
Mean gluteus medius impulse was 15 % greater in FES trials compared to control (p < 0.001).
Kinematics
Peak excursions were compared for selected joint angles, revealing widespread kinematic change in FES trials compared to normal walking. Of particular significance were changes to the extent of pelvic drop in the frontal plane, towards the swing leg, during stance phase. In normal walking trials subjects tended to show some drop of the pelvis below the horizontal, peaking around 25 % of stance. With FES, the joint angle-time curve was up-shifted, with a 46 % reduction in peak pelvic drop (p < 0.001); see Fig. 3. Times to peak remained relatively unchanged. Areas under the curve (AUCs) were computed for each joint angle-time curve by integrating across time, and averaging within condition across all subjects and all trials.
Comparison of simulated and real increases in gluteus medius activation
FES-simulated trials were compared with real FES trials. While both resulted in reductions in the medial knee JRF impulse compared to control, reductions were significantly greater in real FES trials, where a further 8.6 % decrease was observed (p < 0.001).
Discussion
The effects of specific muscular augmentation of gluteus medius on the medial knee JRF during level walking were investigated using motion capture and musculoskeletal modelling. Application of FES to gluteus medius during walking facilitated an average reduction in the medial knee JRF impulse of 12.5 % compared to non-stimulated trials. In a previous uncontrolled study it was hypothesised that reduced pelvic drop in the frontal plane might lead to a reduction in medial knee loading [14]. A novel analysis of kinematics and kinetics was performed to test this hypothesis, ultimately lending weight to it. Crucially, across subjects, there were positive correlations between kinematic changes and changes to both the GRF and the medial knee JRF, with reductions in the medial knee JRF scaling with both the extent of reduction of pelvic drop and the degree of lateralisation of the GRF. Simulations of FES using a single dataset (thus neglecting kinematic effects) resulted in reductions in the medial knee JRF, but these were significantly lower than those obtained with real-world implementation of FES (where kinematic changes were often profound). The following explanation is proposed: FES activates gluteus medius during stance, which through increased contraction reduces the extent to which the pelvis drops towards the swing leg. This effect lateralises the bodily centre of mass (shifting it towards the stance leg) and in doing so lateralises the GRF vector. Lateralisation of the GRF reduces its moment arm about the knee and thus the resultant varus torque, in turn reducing the medial compressive force and so the medial knee JRF (Fig. 6). The study's significance lies in its confirmation of the central importance of proper gluteus medius function for the protection of the knee in established OA. This is something that remains to be fully reflected in physiotherapy regimes designed for patients, which often focus on those muscle groups in closer proximity to the joint. The results presented here support a primary role for gluteus medius rehabilitation in all such regimes.
In addition, the study raises the possibility of intervention with FES to reduce the medial knee JRF in early OA. The mean reductions in peak medial knee JRF found here, of 13.8 % and 18.4 % for the first and second peak respectively, compare favourably with published reductions in the peak EAM following physiotherapy, where a mean reduction of 9 % was observed and patients reported large reductions in pain scores following intervention [14]. Whilst much of the previous work suggesting benefit from reduction of the medial knee JRF has focussed on peak values, many of the analyses of the results of the present study were performed using impulse. In the case of joint reaction force, this was partly to better approximate a marker of the disease-causing process. Studies in bovine cartilage support the existence of stress thresholds above which significant cellular damage begins to accumulate [27], but given the difficulty of applying a threshold in any meaningful manner in healthy subjects, whose knee loads, as measured by the EAM, tend to be much smaller in magnitude than those typically observed in OA patients [28], it was decided to use total area under the curve, or impulse, as the comparative metric. By taking account of time, this provided the added benefit of facilitating the intervariable correlations that were central to informing the study's conclusions. In patients with knee OA, the method has been validated to some extent by a study which showed that the external knee adduction impulse was more discriminative than peak knee adduction moment in predicting radiographic grade of knee OA [29]. The size of the proportional difference between the median values of the adduction impulse for patients in the moderate and mild OA groups was approximately 20 %, a value easily exceeded by the proportional reduction in the medial knee JRF impulse of five of the subjects tested in the present study.
Validity of model predictions
Predicted model outputs were of an appropriate scale of magnitude, and JRF waveforms showed typical double-peaked shapes. The mean peak value for the medial knee JRF in control trials found in the present study, of 2.6 bodyweights, fits well within the scale of values found in other modelling studies and indeed shows no major discrepancy from those recorded in instrumented prostheses [30-33]. Significant inter-individual variation in the medial knee JRF was observed, as seen in patients; interestingly, those subjects who showed the highest loads at baseline were among those who showed the greatest load reductions with FES. Modification of the objective function was necessary to accurately model observed increases in the activation of gluteus medius in FES trials. By stimulating externally, the biological pathways through which muscle activation is regulated are bypassed, thus invalidating the use of optimisation in its standard form. The question was thus raised as to how best to reflect the increased contraction in the elements of gluteus medius and the interaction of this augmentation with the force outputs of other muscles. This is an open question. Whilst static optimisation has been verified to produce reasonable muscle force estimates in normal walking, its use in modelling FES-induced muscular contraction is a novel application. Ultimately, the value of 0.25 used for the constant, c, applied to gluteus medius activation was based on empirical observations regarding the extent of increased force produced using a range of different values.
Temporal characteristics of muscle activation matched well with published data previously obtained during level walking [34], and the magnitudes of gluteus medius force obtained (around 1.5 to 2 bodyweights at peak) correspond roughly to those obtained elsewhere using musculoskeletal modelling (1 to 1.5 bodyweights), allowing for the fact that the latter figures are taken from a study into muscular forces in OA patients, which are expected to be somewhat smaller [35]. Additional analysis of FES trials (not documented here) employing a standard objective function as per equation (1) demonstrated that the effect of altering gluteus medius force outputs on the medial knee JRF was relatively small compared to the overall effect, as apparent from the closeness of the respective JRF-time curves derived using each objective function.
[Displaced figure caption: b. Reduction in pelvic drop AUC versus reduction in mediolateral GRF impulse (R = 0.75, R² = 0.54, β = 0.0053, 95 % CI [0.0028, 0.0079], p < 0.001). c. Reduction in mediolateral GRF impulse versus reduction in medial knee JRF impulse (R = 0.88, R² = 0.75, β = 11.64, 95 % CI [8.11, 15.17], p < 0.001). AUC, area under the curve; GRF, ground reaction force; JRF, joint reaction force.]
In future, advances in electromyography and signal processing might provide a means by which to obtain quantitative data regarding the dynamics of muscle activation with FES. For now, taking all of the evidence together, the existence of a plausible biomechanical explanation for the obtained results, backed up strongly by the scaling of effects seen with the degree of change in pelvic drop and with the degree of lateralisation of the GRF, permits a high level of confidence in the accuracy of the model-predicted reductions in the medial knee JRF.
Limitations
It remains to be seen whether the effects generated here in young, healthy subjects can be replicated in patients with OA, who differ from the test cohort in their morphology, bodily composition and kinematics [36,37]. OA patients are predisposed to a number of muscular imbalances, including gluteus medius weakness [10]. A common problem with the use of FES is fatigue resulting from the supraphysiological stimulation frequencies required to cause muscle activation, and weakness can exacerbate this, limiting the potential for long-term stimulation. On the other hand, pre-existing muscular insufficiency might potentiate the clinical effects achievable with FES. The widespread use of FES technology in foot drop shows that it can be successfully applied to elderly patients with comorbidities [12]. Use of FES may be limited by discomfort. Stimulation was on the whole well tolerated, with none of the tested subjects complaining of more than moderate discomfort and all completing the entire experimental protocol. At 20–30 minutes, however, testing with FES was brief and may not be indicative of long-term tolerability. Moreover, there was significant variation in tolerated currents and in the size of effects induced by stimulation at a given current. The latter might reflect differences in bodily composition at the stimulation site. Subjects with more subcutaneous tissue between skin and muscle are likely to require higher stimulating currents to obtain the same effect. OA patients are likely to carry more subcutaneous fat than subjects from the test cohort, potentially damping muscle activation. Joint reaction forces at the lateral compartment of the knee, and at the hip, were analysed and demonstrated to be unchanged with FES.
Though this is reassuring, the effects of the kinematic change induced by FES on the integrity of other load-bearing structures need to be investigated further. Ultimately, the success of applying FES in patients will depend upon adequate beneficial effects being obtainable within thresholds of discomfort and fatigue, and without excessive unwanted kinematic effects elsewhere. The finding of non-negligible effects on the medial knee JRF with simulation (and, therefore, in the absence of any kinematic change) may indicate that benefit can be obtained at low stimulation levels, with all the advantages that this entails in terms of tolerability and safety.
Mapping the phase diagram of the quantum anomalous Hall and topological Hall effects in a dual-gated magnetic topological insulator heterostructure
We use magnetotransport in dual-gated magnetic topological insulator heterostructures to map out a phase diagram of the topological Hall and quantum anomalous Hall effects as a function of the chemical potential (primarily determined by the back gate voltage) and the asymmetric potential (primarily determined by the top gate voltage). A theoretical model that includes both surface states and valence band quantum well states allows the evaluation of the variation of the Dzyaloshinskii-Moriya interaction and carrier density with gate voltages. The qualitative agreement between experiment and theory provides strong evidence for the existence of a topological Hall effect in the system studied, opening up a new route for understanding and manipulating chiral magnetic spin textures in real space.
In recent years, condensed matter physics has seen a growing interest in studying the interplay between topology in momentum space and topology in real space. The former often manifests in nontrivial band structures in momentum space arising from the combined effects of some fundamental symmetry and strong spin-orbit coupling, while the latter (also a product of spin-orbit coupling) is associated with chiral magnetic spin textures in real space [1-3]. The quantum anomalous Hall (QAH) effect [4-12], induced by a non-trivial Berry curvature in a topological system with broken time-reversal symmetry, provides convincing evidence of topology in momentum space. It is characterized by a quantized Hall resistance and a vanishing longitudinal resistance at zero magnetic field and has been realized in magnetically doped topological insulators (TIs) [8-12]. The topological Hall effect (THE), induced by the interaction of itinerant charge carriers with chiral spin textures such as magnetic skyrmions or chiral domain walls, is regarded as a signature of topology in real space [3]. The THE manifests as an excess Hall voltage superimposed on the usual hysteretic anomalous Hall voltage that arises in magnetic conductors. Such a signature has been observed and interpreted as evidence for the THE in many systems, including MnSi [13,14], MnGe [15], FeGe [16], the SrIrO3/SrRuO3 interface [17,18], magnetically doped TI heterostructures [19-21], and TI/BaFe12O19 heterostructures [22]. Given this context, it is valuable to identify model systems wherein the QAH effect and the THE can be systematically studied as a function of some easily tuned system parameters. We recently studied the THE in one such model system, TI sandwich heterostructures of Cr0.15(Bi,Sb)1.85Te3 / (Bi,Sb)2Te3 / Cr0.15(Bi,Sb)1.85Te3, referred to as CBST-BST-CBST below [23]. By using a bottom gate to tune the chemical potential, we showed how a single sample could be continuously tuned from the QAH effect regime to the THE regime. In this Letter, we further extend the tunability of this model system by adding a top gate and demonstrate how a dual gating scheme enables the mapping of a phase diagram of the concurrence of the QAH effect and the THE as a function of the chemical potential and the asymmetry in the potential between the top and bottom surfaces. In particular, we find that the THE is enhanced when the top and bottom gate voltages have different signs and that it is quenched when these gate voltages have the same sign.
We also demonstrate that the THE arises because the asymmetric potential induced by the dual gates leads to a Dzyaloshinskii-Moriya (DM) interaction. We used a VEECO 620 molecular beam epitaxy system to grow 3 quintuple layer (QL) CBST / 5 QL BST / 3 QL CBST heterostructures on SrTiO3 (111) substrates (MTI Corporation). The SrTiO3 substrates were first soaked in deionized water at 90 °C for 1.5 hours and thermally annealed at 985 °C for 3 hours in a tube furnace with flowing oxygen gas. The heat-treated SrTiO3 substrates were then outgassed under vacuum at 630 °C (thermocouple temperature) for 1 hour. After outgassing, the substrates were cooled down to 340 °C (thermocouple temperature) for the heterostructure growth. High purity Cr (5N), Bi (5N), Sb (6N), and Te (6N) were evaporated from Knudsen effusion cells. The cell temperatures were precisely controlled to obtain the desired beam equivalent pressure (BEP) fluxes of each element. The BEP flux ratio of Te/(Bi+Sb) was higher than 10 to prevent Te deficiency. The BEP flux ratio of Sb/Bi was around 2 to tune the heterostructure's chemical potential close to the charge neutral point. The heterostructure growth rate was ~0.25 QL/min, and the pressure of the MBE chamber was maintained at 2 × 10^-10 mbar during the growth. After the growth, the heterostructures were capped with 10 nm Te in situ at 20 °C for protection during the fabrication process. Heterostructures were then fabricated into Hall bar devices using photolithography and Ar+ plasma dry etching. The top gate was defined by a 40 nm Al2O3 dielectric layer and 5 nm Ti/50 nm Au contacts deposited by atomic layer deposition and electron beam evaporation, respectively. We carried out magnetotransport measurements in a commercial He-3 fridge (Oxford Heliox) for temperatures higher than 0.4 K and in a commercial Leiden Cryogenics dilution refrigerator at T = 60 mK. Bottom and top gate voltages were applied using the SrTiO3 substrate and the Al2O3 layer as the respective gate dielectrics (Fig. 1(d)). At the base temperature T = 60 mK, we mapped out the top and bottom gate voltage dependence of the QAH effect and the THE. As shown in Fig. 2(a), if the bottom gate voltage V_b is changed while the top gate voltage V_t is fixed at 0 V, the heterostructure can be tuned away from the QAH regime, particularly at negative V_b. Away from the QAH regime, at magnetic fields larger than the coercive field, a careful examination of the Hall resistance shows that it is slightly different during the upward and downward magnetic field sweeps. The excess Hall resistance after crossing the magnetic reversal transition is a signature of the THE. Therefore, by tuning V_b, we observe a crossover from the QAH effect to the THE. This is due to the formation of chiral domain walls in the presence of a strong DM interaction [23]. The signature of the THE becomes more obvious if we apply a positive V_t while keeping V_b fixed at a negative value (Fig. 2(b)). If we fix V_b and change V_t, the THE behaves differently: unlike the case of the V_b-modulated THE, it becomes more pronounced at positive V_t but vanishes under negative V_t; this gate-voltage dependence is summarized in Fig. 2(c). In order to understand the behavior of the THE, we propose a simple capacitance model (Fig. 3(a)). The top and bottom gates act as capacitors that inject or repel electrons in the top and bottom surfaces, respectively. Therefore, the top gate chiefly tunes the chemical potential of the top surface, while the bottom gate principally affects the chemical potential of the bottom surface. As a result, an asymmetric potential between the top and bottom surfaces is induced, leading to the breaking of inversion symmetry in the sandwich heterostructure. Due to the broken inversion symmetry, the overall DM interaction from the two surfaces is nonzero, giving rise to the THE.
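To make the capacitance picture concrete, a toy linear model can be written down: the symmetric combination of the gate-induced surface potentials shifts the overall chemical potential, while the antisymmetric combination sets the asymmetric potential U. The lever arms below are invented placeholders, used only to show why opposite-sign gate voltages maximise U while same-sign voltages suppress it; this is an illustrative sketch, not a model fitted to the device in the paper.

```python
# Toy dual-gate capacitance model; lever arms are illustrative
# assumptions, not values extracted from the heterostructure.
alpha_t, alpha_b = 0.8, 1.2  # meV per volt, top and bottom gates

def gate_map(V_t, V_b):
    mu = 0.5 * (alpha_t * V_t + alpha_b * V_b)  # chemical potential shift
    U = 0.5 * (alpha_t * V_t - alpha_b * V_b)   # inversion-breaking part
    return mu, U

for V_t, V_b in [(10, -10), (10, 10), (-10, -10)]:
    mu, U = gate_map(V_t, V_b)
    print(f"V_t={V_t:+d} V, V_b={V_b:+d} V -> mu={mu:+.1f} meV, U={U:+.1f} meV")
```

With opposite-sign voltages the asymmetric potential U is large while the net chemical-potential shift is small, matching the observation that the THE is enhanced when V_t and V_b have different signs.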
As shown in Fig. 3(b), the sandwich heterostructure enters the QAH effect regime in the dark blue area. In this region of the phase diagram, conduction from surface and bulk carriers is minimized; thus, the sample shows a perfect QAH effect with no THE signal. Away from the QAH regime, where surface carriers start to contribute, a finite DM interaction is induced and the THE appears. Furthermore, by creating an asymmetric potential with V_b and V_t, ρ_yx^THE reaches its maximum, shown in the red area. We now theoretically evaluate the DM interaction and the bulk carrier concentrations in magnetic TI heterostructures based on the model developed in Ref. [23]. This model involves both the topological surface states and the bulk valence band quantum well states, both of which have been shown to be crucial in understanding the temperature and chemical potential dependence in the previous report [23]. The topological surface states can be described by the effective Hamiltonian H_SS = v_F (k_y σ_x − k_x σ_y) τ_z + U τ_z + m_0 τ_x + H_SS,ex, where σ denotes the Pauli matrices for the spin while τ stands for the two surfaces. Here v_F is the Fermi velocity, U is the asymmetric potential and m_0 describes the hybridization between the two surface states at the top and bottom surfaces. The exchange coupling between surface states and magnetic moments can be described by the Hamiltonian H_SS,ex = −M_t · σ (1 + τ_z)/2 − M_b · σ (1 − τ_z)/2, where M_t and M_b label the magnetization at the top and bottom surfaces, respectively. The valence band quantum well states are described by the effective Hamiltonian H_QW = ε_0(k) + N(k) τ_z + A (k_y σ_x − k_x σ_y) τ_x + U τ_x + H_QW,ex, where σ still labels spin and τ stands for two orbitals instead. Here ε_0(k) = C_0 + C_1 k², N(k) = N_0 + N_1 k², and C_0, C_1, N_0, N_1, A are material dependent parameters, while U is still the asymmetric potential. The exchange coupling between quantum well states and magnetic moments is given by the analogous uniform coupling H_QW,ex = −M · σ, where M is the magnetization. We assume the magnetization is uniform throughout the whole system and thus M_t = M_b = M.
[Displaced fragment of the Fig. 4 caption: the TH resistance is expected to be proportional to |χ_xz|² ρ, whose dependence on µ and U is shown in Fig. 4(e); here q_x = 0.005 Å⁻¹ and q_y = 0.]
As the strength of the DM interaction is directly determined by the off-diagonal components of the spin susceptibility [23], particularly the components χ_xz and χ_yz, we next discuss the behavior of χ_xz in our model (χ_yz can be directly related to χ_xz due to the rotation symmetry of our model). The zero-frequency spin susceptibility is computed from the standard linear-response expression, in which the temperature T enters through the Fermi-Dirac occupation factors.
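The explicit susceptibility formula was lost in extraction; for orientation, the standard zero-frequency linear-response expression, consistent with the description above (a sum over bands, with the temperature entering through occupation factors), reads as follows. This is the textbook form, not necessarily the authors' exact convention or normalisation.

```latex
% Standard static spin susceptibility (textbook linear-response form,
% given here because the paper's own equation did not survive extraction)
\chi_{ab}(\mathbf{q}) = \frac{1}{V}\sum_{\mathbf{k},\,n,\,m}
  \frac{f(E_{n\mathbf{k}}) - f(E_{m,\mathbf{k}+\mathbf{q}})}
       {E_{m,\mathbf{k}+\mathbf{q}} - E_{n\mathbf{k}} + i0^{+}}\,
  \langle n\mathbf{k}|\,\sigma_a\,|m,\mathbf{k}+\mathbf{q}\rangle
  \langle m,\mathbf{k}+\mathbf{q}|\,\sigma_b\,|n\mathbf{k}\rangle,
\qquad
f(E) = \frac{1}{e^{(E-\mu)/k_B T} + 1}.
```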
Fig. 4(b) shows χ_xz as a function of the asymmetric potential and the chemical potential. The behavior of χ_xz can be understood from the energy dispersion in Fig. 4(a), which shows that the surface Dirac cones are close to the top of the valence band quantum well states [24,25]. The asymmetric potential can split the two spin states of the valence band and the surface states at the opposite surfaces. These splittings generally give rise to a non-zero χ_xz, as shown in Fig. 4(c) for two different chemical potentials. We also notice that when the Fermi energy lies between the two spin bands of the valence quantum well states, a larger χ_xz can be induced, as clearly shown in Fig. 4(c), in which the red line is for µ = −0.02 eV, which crosses the valence band top, and the blue line is for µ = 0.02 eV, which crosses only the surface states. This is consistent with the experimental observation that the observed TH resistance is enhanced for negative back gate voltage V_b, which is expected to mainly tune the chemical potential into the valence bands. The top gate voltage V_t is expected to mainly tune the asymmetric potential U in our model, and one can see from Fig. 4(c) that a large U can give rise to a strong enhancement of χ_xz. This is consistent with the observation in Fig. 3(b). We note that when the chemical potential crosses only the surface states, we still see a finite χ_xz, but in experiments the TH resistance almost vanishes. This is because the TH resistance can only come from the bulk carriers, and it vanishes when the system is in the QAH regime with almost vanishing bulk carrier density. Therefore, we also show the bulk carrier concentration ρ in Fig. 4(d); if we assume the TH resistance is proportional to both the spin susceptibility |χ_xz|² and the bulk carrier concentration, the behavior of the TH resistance is as shown in Fig. 4(e). To summarize, we have studied the gate dependence of the THE in dual-gated magnetic TI sandwich heterostructures. We observed a crossover from the QAH effect to the THE by tuning the bottom gate, particularly under a negative bottom gate voltage. The magnitude of the THE increases with V_b because this induces an asymmetric potential between the two opposite surfaces. By applying V_t and V_b, this asymmetric potential, and hence the magnitude of the THE, can be enlarged or quenched by changing the relative sign of V_t and V_b. Since the THE in magnetic TI sandwich heterostructures provides evidence of chiral magnetic domain walls, the manipulation of the THE by dual gates provides a simple way to investigate and understand chiral magnetic spin textures in real space. We note that the good correspondence between our experimental observations and theory rules out simpler explanations [26] for the THE signals in our transport measurements. Our study will also motivate more studies on nontrivial quantum phenomena in magnetic TI multilayer heterostructures and facilitate the development of proof-of-concept TI-based spintronic devices.
Safety and Efficacy of a Novel Intubating Laryngeal Mask during the Recovery Period following Supratentorial Tumour Surgery
Objective: To assess the safety and efficacy of a novel intubating laryngeal mask airway (ILMA) during the recovery period following supratentorial tumour surgery.
Methods: Patients who underwent supratentorial tumour surgery at our centre from January 2012 to December 2016 were eligible for this prospective, randomised, parallel-group study. We developed a novel ILMA using closely fitting laryngeal masks (No. 4/5) with 7.0/7.5 mm endotracheal tubes (ETT) plus screw fixators and anti-pollution sleeves.
Results: In total, 100 patients were intubated with the novel ILMA and 100 with the ETT. There were no differences between groups in haemodynamic variables, oxygen saturation, exhaled CO2, or bispectral index, all recorded during the 72-hour recovery period. However, there were significantly fewer incidences of coughing, less fluid drainage and lower haemoglobin levels in surgical fluid in the ILMA group compared with the ETT group.
Conclusion: Our novel ILMA device was associated with reduced coughing, fluid drainage and blood in the surgical drain during the recovery period following supratentorial tumour surgery.
Introduction
The standard laryngeal mask airway (sLMA) was clinically approved in the UK in 1988 and opened a new chapter in anaesthesia, mainly for the management of difficult airways. [1-4] Unlike endotracheal tubes (ETTs), which can stimulate the trachea and cause haemoptysis, unstable circulatory dynamics and reduced comfort levels, sLMAs are associated with easy, non-traumatic intubation and minimal somatic and autonomic responses. [5-7] However, sLMAs have some disadvantages, including limited protection against aspiration of gastric contents and lower seal pressures. [8,9] A modified version of the sLMA, the intubating laryngeal mask airway (ILMA), which combines a mask with a modified tracheal tube, has been designed for guided tracheal intubations. [10] Compared with the sLMA, the ILMA is shorter and wider and allows for large ETTs up to 8 mm in diameter. The ILMA has been shown to be a useful device in the management of patients with difficult airways. [11] For this study, we developed an ILMA using second-generation closely fitting laryngeal masks (No. 4/No. 5), ETTs (sizes 7.0/7.5/8.0 mm), screw fixators and anti-pollution sleeves. Following compliance tests to determine the best combination of closely fitting mask and ETT, we compared the ILMA with standard ETTs in the anaesthesia of patients undergoing supratentorial tumour surgery at our hospital and assessed efficacy and safety in the recovery period.
Methods
Patients who underwent supratentorial tumour surgery at Nanfang Hospital, Southern Medical University from January 2012 to December 2016 were eligible for this prospective, randomized, parallel-group study. To be included in the study, patients were ≥18 years of age and of American Society of Anesthesiologists (ASA) physical status grade I-II.
Patients excluded from the study had any of the following: difficult airways (i.e., history of snoring, mouth opening <3 cm, Mallampati class III or IV, limited mandibular advancement and/or thyromental distance <6 cm); lung disease (including chronic obstructive airway disease); liver and/or kidney dysfunction; severe heart disease (including pre-operative ejection fraction <45%, sick sinus syndrome, or second- or third-degree atrioventricular block with no pacemaker); language/hearing or communication difficulties; abnormal blood coagulation; hypertension; diabetes; or history of brain surgery. The ILMAs used in this study consisted of second-generation closely fitting laryngeal masks (No. 4 or No. 5) paired with ETTs (7.0, 7.5 or 8.0 mm) in five combinations, the largest being (v) a No. 5 mask plus an 8.0 mm ETT. The end of each mask was fitted with a screw fixation system and an anti-pollution sleeve was placed over the end of the ETT. For each of the five groups of ILMAs, the following were determined: holding force; locking/unlocking time of the screw fixation; conversion time between mask and ETT mode; airway leakage; airway resistance; and folding sequence of the anti-pollution sleeve. The results of these compliance studies indicated that the No. 4 laryngeal mask matched with a 7.0 mm ETT and the No. 5 mask matched with a 7.5 mm ETT were the optimal combinations. The final folded length of the ILMAs ranged from 2-3 cm (Figure 1). Following routine induction of anaesthesia using midazolam, propofol, sufentanil and cisatracurium, patients were randomly allocated to an ILMA or an ETT device for intubation. The randomization process was performed using an automatic assignment system that concealed allocation. Anaesthesia was maintained with sevoflurane 2.5-3.5%, and the tidal volume was set at 600 ml and the respiratory rate at 12 breaths/min. Patients were transferred to the surgical ICU following the operation. Blood pressure (BP), heart rate (HR), incidence of coughing, drainage of the surgical wound, haemoglobin levels in the surgical drain, the amount of carbon dioxide in exhaled air (ETCO2), oxygen saturation (SpO2) and bispectral index (BIS) were monitored from 5 min before extubation and thereafter for 72 hours. The BIS monitor ranges from 0 (non-responsive) to 100 (fully awake). Patients provided written informed consent and the study was approved by the Ethics Committee of Nanfang Hospital, Southern Medical University.
Statistical analyses
Data were analysed using the Statistical Package for Social Sciences (SPSS®) for Windows®, release 2019 (IBM Corp., Armonk, NY, USA). All tests were two-sided and a P-value <0.05 was considered to indicate statistical significance. Data were expressed as the mean ± standard deviation (SD). The Kolmogorov-Smirnov/Shapiro-Wilk test was used to determine whether variables were normally distributed. Student's t-test was used to compare normally distributed continuous variables and the Mann-Whitney U test was used for non-normally distributed variables.
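The test-selection logic in the paragraph above (normality check, then Student's t-test or the Mann-Whitney U test) maps directly onto standard library calls; the sketch below uses simulated placeholder data, not the study's measurements.

```python
# Sketch of the two-group comparison described above: check normality,
# then choose Student's t-test or the Mann-Whitney U test.
# The simulated samples are placeholders, not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ilma = rng.normal(6.8, 1.7, size=100)   # e.g. coughing incidence (%)
ett = rng.normal(52.3, 12.3, size=100)

def compare(a, b, alpha=0.05):
    normal = (stats.shapiro(a).pvalue > alpha
              and stats.shapiro(b).pvalue > alpha)
    if normal:
        return "Student t-test", stats.ttest_ind(a, b).pvalue
    return "Mann-Whitney U", stats.mannwhitneyu(a, b).pvalue

name, p = compare(ilma, ett)
print(f"{name}: two-sided p = {p:.2e}")
```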
Results
In total, 200 patients (108 men/92 women) who underwent supratentorial tumour surgery at our hospital between 2012 and 2016 were eligible for the study. Sizes of the supracerebral brain tumours ranged from 2.1 × 1.1 cm to 8.1 × 11.2 cm. One hundred patients were allocated an ILMA and 100 patients an ETT. The ILMA group consisted of 58 men/42 women and the ETT group 50 men/50 women. There were no differences between groups in average age or weight. The ILMA group was 59 ± 19 years of age and weighed 60 ± 18 kg, whereas the ETT group was 64 ± 21 years of age and weighed 56 ± 21 kg. The incidence of coughing during the extubation period was statistically significantly (P < 0.01) lower in the ILMA group (6.8 ± 1.7%) compared with the ETT group (52.3 ± 12.3%). The values for cumulative drainage of surgical wounds in the ILMA group at 1, 2, 3, 12, 24, 48 and 72 hours after extubation were statistically significantly (P < 0.01) lower compared with those in the ETT group (data not shown). Following extubation, haemoglobin levels in the surgical drain at 1, 2, 3, 12, 24, 48 and 72 hours were statistically significantly (P < 0.01) lower compared with those in the ETT group (data not shown).
Discussion
Although tracheal intubation is advantageous in ensuring that the airway is safe during anaesthesia, the procedure is associated with several complications, such as tooth and oropharyngeal injuries, laryngeal spasm, laryngeal oedema, dislocation of the arytenoid cartilage, arrhythmias and increased blood pressure. [12-14] To overcome these problems, the LMA was developed and has been in clinical use since 1988. [1] The LMA is a non-invasive, easy-to-operate ventilation tool that can be rapidly placed in the throat without touching the glottis and trachea; it causes little stimulation to the respiratory tract and has no effect on HR and BP. [15]
[Table abbreviations: HR, heart rate; sBP, systolic blood pressure; SpO2, oxygen saturation; ETCO2, carbon dioxide in exhaled air; BIS, bispectral index (0-100 [fully awake]); ILMA, intubating laryngeal mask; ETT, endotracheal tube.]
However, the mouth of the classic LMA has a double-sided barrier that blocks the passage of the ETT, and the ventilation tube is slender, so even the largest LMA can only guide the insertion of a 6.0 mm ETT. In addition, it may be difficult to withdraw the laryngeal mask after a successful tracheal intubation. [11] Also, laryngeal masks have limited protection against aspiration of gastric contents and lower seal pressures. [8,9,16] The intubating laryngeal mask was developed to correct the aforementioned shortcomings. [10,17] Currently, three types of intubating laryngeal masks are used in clinical practice (Fastrach™, CTrach™ and Cookgas™). [17-19] The Fastrach LMA consists of a rigid airway tube, an integrated guiding handle, an epiglottic elevator bar and a cuff. [17] The tube has a 30° angle with the mask, which is consistent with the natural bending of the oropharynx and is convenient for both placement and guiding. The CTrach system has a built-in optical fibre bundle and its proximal end is connected to a display via a magnetic connector; the distal end is under the epiglottic elevating bar. [18] The optical display allows clinicians to look directly at the throat while completing tracheal intubation. The Cookgas LMA has a short transparent ventilation tube with a large lumen and a large bending angle; a 15 mm connector can be removed prior to intubation, which is an effective method for increasing the inner diameter of the ventilation tube and allowing cuffed ETTs to pass in a smooth manner. [19] In the present study, we developed an ILMA and found the optimal designs were a No. 4 mask with a 7.0 mm ETT and a No. 5 mask with a 7.5 mm ETT, plus screw fixators and anti-pollution sleeves. We used the novel ILMA in the general anaesthesia of patients undergoing supratentorial tumour surgery. Patients with brain tumours may show fluctuating cardiovascular haemodynamic dysfunction, which can bring about unwanted complications. [20]
Therefore, the smooth induction of anaesthesia and its maintenance could lessen any untoward haemodynamic fluctuations in these patients and reduce the occurrence of stress reactions. Indeed, haemodynamic changes such as tachycardia, hypertension and arrhythmias during laryngoscopy and intubation can cause serious complications in patients with coexisting cardio- or cerebrovascular diseases. [5] Importantly, the cough reflex, which can occur with extubation of traditional ETTs, can cause activation of the sympathetic-adrenal system, leading to further increases in the incidence of cardiovascular events in these patients. [21,22] We found our device was safe and could be smoothly transitioned between mask and ETT modes. We observed significantly fewer incidences of coughing, less fluid drainage and lower haemoglobin levels in the surgical drain in the group intubated with the ILMAs compared with those intubated with ETTs. Furthermore, there were no differences between groups in BP, HR, SpO2, ETCO2, or BIS during the recovery period. This study had some limitations. For example, only 200 patients were involved and the study was single-blind. In addition, it was not always possible to identify the tumour borders precisely on imaging, and so the defect may have been estimated to be larger than it was. In conclusion, our novel ILMA was an effective airway device with a similar safety profile to the traditional ETT in the recovery period, but was associated with less coughing, less fluid drainage and less blood in the surgical drain.
Declaration of conflicting interests
The authors declare that there are no conflicts of interest.
Scaling up care for perinatal depression for improved maternal and infant health (SPECTRA): protocol of a hybrid implementation study of the impact of a cascade training of primary maternal care providers in Nigeria
Background: The large treatment gap for mental disorders in low- and middle-income countries (LMIC) necessitates task-sharing approaches to scaling up care for mental disorders. Previous work has shown that primary health care workers (PHCW) can be trained to recognize and respond to common mental disorders, but there are lingering questions around sustainable implementation and scale-up in real-world settings.
Method: This project is a hybrid implementation-effectiveness study guided by the Replicating Effective Programmes framework. It will be conducted in four overlapping phases in maternal care clinics (MCC) in 11 local government areas in and around the Ibadan metropolis, Nigeria. In Phase I, engagement meetings with relevant stakeholders will be held. In Phase II, the organizational and clinical profiles of MCC to deliver chronic depression care will be assessed, using interviews and a standardized assessment tool administered to staff and managers of the clinics. To ascertain the current level of care, 167 consecutive women presenting for antenatal care for the first time and who screen positive for depression will be recruited and followed up till 12 months postpartum. In Phase III, we will design and implement a cascade training programme for PHCW, to equip them to identify and treat perinatal depression. In Phase IV, a second cohort of 334 antenatal women will be recruited and followed up as in Phase II, to ascertain the post-training level of care. The primary implementation outcome is the change in the identification and treatment of perinatal depression by the PHCW, while the primary effectiveness outcome is recovery from depression among the women at 6 months postpartum. A range of mixed-method approaches will be used to explore secondary implementation outcomes, including fidelity and acceptability. Secondary effectiveness outcomes are measures of disability and of infant outcomes.
Discussion: This study represents an attempt to systematically assess and document an implementation strategy that could inform the scaling up of evidence-based interventions for perinatal depression using the WHO mhGAP-IG in LMIC.
Trial registration: This study was registered on 03 December 2019. https://doi.org/10.1186/ISRCTN94230307
Background
It is now commonly accepted that the mental health treatment gap in low- and middle-income countries (LMIC) requires a shift in policy and health planning in which focused attention is given to the horizontal integration of mental health into primary and maternal health care. This is because not only is mental health care an integral component of holistic, person-centred maternal care, there are also inherent benefits to perinatal women when mental health is embedded within routine maternal care. Some of these benefits include early and increased detection of mental health conditions, improved accessibility to mental health care and reduced stigma, as well as an intimate link of mental health to the maternal care needs of perinatal women [1]. The case for the integration of mental health into routine maternal care is further strengthened by the fact that the material and human resources necessary to respond to the burden of mental disorders, including perinatal depression, are grossly inadequate in most LMIC.
For example, Nigeria has about 250 practicing psychiatrists for a population of over 200 million people [2]. The situation is worse for other mental health specialists such as clinical psychologists and social workers. The few available specialists are mostly based in urban areas and are therefore inaccessible to the majority of the population, who reside in rural settings. The integration of mental health into primary care requires that the providers at that level of care are empowered with the skills necessary for them to offer basic but essential services for common mental health problems. A recent situation analysis of maternal mental health in primary care in five LMIC (India, Nepal, Uganda, South Africa and Ethiopia) [3] found that while most of the countries had a national mental health policy that included maternal mental health, almost all of them did not have a national plan that included dedicated maternal mental health services. In all of the LMIC, perinatal women could only access mental health services through referral to mental health specialists at district or specialist centres, some of which were several kilometres away. Perinatal depression occurs in up to 10% of women prenatally and 13% postnatally in high-income countries [4]. There is evidence suggesting higher rates in LMIC. In a recent systematic review, the weighted mean prevalence for common perinatal mental disorders was 16% in the antenatal period and 20% postnatally in LMIC [5]. Perinatal depression is associated with long-term adverse consequences for maternal wellbeing and infant development. Perinatal depression is associated with suffering and loss of productivity [6], and is an important risk factor for maternal suicide [7]. Adverse child consequences of perinatal depression include pre-term birth and low birth weight, poor mother-child interactions, infant under-nutrition and stunted growth, elevated rates of diarrhoeal diseases, poor infant development, insecure attachment and higher rates of emotional and behavioural problems in infants of depressed mothers [8]. There are indications that these adverse child outcomes are worse in LMIC [9]. A large cohort study of more than 20,000 perinatal women in Ghana found that antenatal depression was associated with prolonged labour, peripartum and postpartum complications, non-vaginal delivery and newborn illness [10]. There is evidence that effective and cost-effective treatments exist for this condition [11,12] and that frontline primary care workers, including non-physician primary care providers and midwives, can deliver evidence-based interventions to affected mothers [13-15]. However, even in high-income countries (HIC), only a minority of depressed persons get the care they need, with estimates suggesting that less than 50% of cases of postnatal depression are detected by primary health care professionals in routine clinical practice [16]. As part of efforts to facilitate task sharing and enable frontline providers to deliver evidence-based care for common mental health conditions, the World Health Organization, through a systematic, consultative and participatory process, developed the Mental Health Gap Action Programme Intervention Guide (mhGAP-IG) for use by non-specialists [17]. The mhGAP-IG presents an evidence-based framework for the management of priority disorders using protocols for clinical decision-making within routine clinical service in non-specialist settings.
Depression, including that occurring in women in the perinatal period, is one of the priority disorders included in the mhGAP-IG. In pilot exploratory studies, we have shown that it is feasible to train non-physician primary care workers in the use of the mhGAP-IG to deliver care for a range of mental health conditions, including depression [18]. In a subsequent fully-powered randomized controlled trial, women with perinatal depression, other than those with the severe form of the disorder, treated by frontline providers using the basic psychological approaches described in the mhGAP-IG had a similar rate of remission as those treated with more intensive psychological interventions [19]. In that trial, treatment of depression according to the mhGAP intervention guidelines constituted the "enhanced care as usual" (the comparison group). There was no "care as usual" group as a second comparison group. At 6-month follow-up, two thirds of the women in the enhanced care as usual group had attained remission from depression, suggesting that the mhGAP-IG might be a useful tool for scaling up care for perinatal depression by non-specialist frontline providers working in low- and middle-income countries.
Keywords: Perinatal depression, Primary care, Implementation study, mhGAP-IG
The availability of evidence-based guidelines and approaches for managing perinatal depression [11,12], with proven efficacy and cost-effectiveness, has opened up the possibility of scaling up services for this disabling condition. However, information is lacking as to how these guidelines and approaches can be delivered within integrated routine perinatal care. There is a need to demonstrate the goodness of fit of these approaches to the extant health systems of LMIC, and that these approaches can be implemented in the non-specialist, indeed non-physician, health care settings that characterize primary maternal care in most of sub-Saharan Africa. It will be important to empower community midwives and primary care providers with the skills to detect and respond to mental health conditions at the maternal primary care level. While the WHO has prepared, and made available, several modules of training packages, there remains the need to find skilled trainers. In LMIC, where specialists are few and often overwhelmed by the demands of providing care for the teeming numbers of people in need, such specialists lack the time to provide the necessary training for the many end users (primary care workers) to improve service delivery. In an earlier study, we demonstrated the impact of a cascade training format for the mhGAP-IG (in which mental health specialists, designated as Master Trainers, train non-specialist physicians and senior nurses, designated as Trainers, who in turn deliver training to other primary care providers) in increasing the recognition of, and care for, mental disorders in primary care [20]. Over the course of 12 months post-training, there was an average 400% increase in the proportion of patients attending the clinics who received a mental, neurological or substance abuse (MNS) diagnosis [20]. There is a need to provide more robust evidence for the utility of this approach for scaling up care for mental health in routine primary care in this and other resource-constrained settings. Specifically, there are outstanding issues that require attention. First, there is a need to more closely study factors that may facilitate or impede a sustainable approach to training primary care providers to use the mhGAP-IG as a clinical support tool for the delivery of effective intervention for mental health conditions in routine practice at the primary care level. Second, it is important to demonstrate whether the skill acquired following training is retained beyond the immediate post-training period and what factors affect skills retention. Complementary to this is the need to evaluate what level of refresher training might be required to sustain the clinical competence of trained primary care workers. Third, the fidelity of use of mhGAP-IG specifications needs to be determined, as well as the factors that might affect fidelity. Fourth, the effectiveness and cost-effectiveness of recognition and treatment of mental disorders by the trained providers need to be demonstrated.
First, there is a need to more closely study factors that may facilitate or impede a sustainable approach to training primary care providers to use mhGAP-IG as a clinical support tool for the delivery of effective intervention for mental health conditions in routine practice at the primary care level. Second, it is important to demonstrate whether the skill acquired following training is retained beyond the immediate post-training period and what factors affect skills retention. Complementary to this is the need to evaluate what level of refresher training might be required to sustain the clinical competence of trained primary care workers. Third, the fidelity of use of mhGAP-IG specifications need to be determined as well as factors that might affect fidelity. Fourth, the effectiveness and cost-effectiveness of recognition and treatment of mental disorders by the trained providers need to be demonstrated. Objectives The overall aim of the programme of work is to study factors that may impede or facilitate the delivery of evidence-based intervention for perinatal depression by front-line clinicians using the mhGAP-IG in routine practice. The knowledge so gained, including that gained in the process of responding to barriers that may be encountered, will provide necessary information to facilitate the scaling up of the intervention in resourceconstrained settings. The specific objectives are to: (1) Identify optimal organizational and health systemlevel processes for the effective implementation of mhGAP-IG in routine maternal care at the primary care level. (2) Implement a training design that prepares primary care providers to deliver effective and evidencebased intervention for perinatal depression in a sustainable way and that builds a pool of trainers within the health system; (3) Assess the effectiveness of the intervention on maternal and infant outcomes; (4) Determine the barriers and facilitators of scaling up the intervention into routine perinatal care; and (5) Provide population-level estimates of the cost and impact of scaling-up care for perinatal depression. Design This hybrid (implementation-effectiveness) study will use a mixed-methods design and adopt a participatory research approach in all the stages of its implementation. We shall be guided by the Replicating Effective Programmes (REP) framework as it provides a roadmap for maintaining treatment fidelity while providing opportunities to tailor intervention to fit local needs, as well as specifying training and technical assistance strategies to maximize the chances for sustaining the intervention [21]. Evidence based interventions for perinatal depression and programmes for implementation have been tested in a variety of settings. This study will explore the factors affecting the use of a cascade training format to build the capacity of frontline primary maternal care providers in the use of mhGAP-IG to deliver evidence-based intervention for perinatal depression. We will assess the effectiveness of the intervention delivered by the providers following training in the use of mhGAP-IG by comparing the outcomes of women with perinatal depression who present for care before with those who present after the training of the providers. In line with the REP model, the study will be carried out in four phases: 1. Pre-condition, 2. Pre-implementation, 3. Implementation and 4. Maintenance and evolution (Fig. 1). 
This study will be implemented in the following four overlapping phases: Phase 1 (Pre-Condition) (months 1-6): formative study and concept development. Phase 2 (Pre-Implementation) (months 7-18): assessment of the organizational and clinical profile of maternal clinics.

Study setting

The study will be carried out in all the 11 local government areas (LGAs) in and around the city of Ibadan. Patient recruitment will be from randomly selected primary health care clinics across the 11 LGAs. The selection will take into consideration the location of the clinics in either a rural or an urban setting. Each of the LGAs has an average of 10-14 primary health care clinics (PHCs). From the average patient load of the clinics, we estimated that 20 clinics would be required to meet the calculated sample size within the study time frame (see below). Based on current estimated patient flow, the clinics were selected in such a manner that approximately half of the participants are recruited from rural and the other half from urban centres.

Methods phase one

Engagement activities
We shall conduct consultative engagement meetings with stakeholder groups that include community leaders, policy makers, primary care providers and women, to secure cooperation and obtain views relevant to the success of project implementation. Specifically, we shall conduct key informant interviews with selected facility managers involved in our previous trial to obtain information about how perinatal depression is viewed, understood and currently managed. Also included in these engagement activities will be women who had perinatal depression at the time of our previous trial, had participated in it, and had given permission to be contacted for future studies.

Planning workshop
A planning workshop will be organized, consisting of key players from our earlier mhGAP demonstration project, and will include women who had previously received care for perinatal depression, primary health care workers (PHCWs), nurses, primary care physicians, policy makers from the Ministry of Health, as well as members of the research team. The focus of the workshop will be to: (1) review the programme of work relating to the project; (2) review the experience of previous training of primary care workers using the mhGAP-IG and the mhGAP demonstration project, and draw the relevant lessons from it; and (3) design the public and policy engagement activities that will ensure project sustainability. A major outcome of the workshop will be the development of a Theory of Change map that highlights the assumptions, the barriers and the facilitators for successful project implementation and future scale-up [22].

Methods phase two

Assessment of the organizational and clinical profile of maternal clinics
This will be conducted in two parts: (a) review of the facility care profile, and (b) recruitment and assessment of Cohort One perinatal women.

Review of facility care profile
Depression is commonly a chronic disorder and should ideally be treated within a chronic care model. We therefore seek to address the question: what is the current care arrangement in each facility for chronic conditions?
To address this question, we will conduct key informant interviews with the facility managers of the 20 clinics selected for the current study to enquire about the process of care for perinatal women, the structural features of the clinics that may have a bearing on the care of women with perinatal depression, and the administrative environment for training and supervision of the PHCWs. These interviews will be supplemented by information collected using the Assessment of Chronic Illness Care (ACIC) [23], a tool specifically designed to obtain information about the organizational structure of a health facility with respect to delivering effective intervention for chronic conditions. The tool has been previously adapted for LMIC settings and will be further adapted for the specific focus of this project.

Recruitment and assessment of Cohort One
We will recruit a cohort of women to address two questions during this phase of the study: What is the current rate of detection and treatment of perinatal depression? What is the experience of women receiving care in the clinics with regard to the attention given to their psychological health?

Participants
Consecutively registered women presenting for antenatal care in the selected clinics will be screened using the Edinburgh Postnatal Depression Scale (EPDS) to identify women with depression [24]. Women with depression (scoring 10 or more on the EPDS) who consent to further participation in the study will be recruited.

Procedure
Consecutively registered women presenting for antenatal care will be screened with the EPDS after being attended to by the PHCW (Fig. 2 and SPIRIT guideline). All those who screen positive for depression, irrespective of whether they have been identified as having depression by the PHCW or not, and who consent to participate in the study will constitute Cohort One. At an estimated prevalence of at least 10% and about 5% refusals, we expect to screen about 1800 women to recruit 167 women with scores at or above the cut-off level who consent to participate in the study (see below for sample size estimation). This cohort will enable us to determine how many of those who screen positive are identified by the PHCWs and what treatment is offered to them. Even though the purpose of this cohort is to document current treatment practices in these clinics for women with depression, any woman considered to be at high risk will be referred for treatment. This will include women with severe depression, as indicated by an EPDS score of 18 or greater, and those with suicidal ideation, as indicated by a score of 3 or more on the tenth item of the EPDS. Severely depressed or suicidal participants will be tracked to see whether they took up the referral and additional care, as part of the process evaluation. Women who are too ill to cope with the interview or who require urgent medical attention will be excluded from recruitment.
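The screening and referral rules above reduce to a few fixed thresholds. The short Python sketch below makes this triage logic explicit; the function name and input format are our own illustrative choices, not part of the protocol.

```python
def epds_triage(item_scores):
    """Classify a completed EPDS (10 items, each scored 0-3) per the
    study's screening rules: total >= 10 -> eligible for recruitment;
    total >= 18 or item 10 (suicidality) >= 3 -> immediate referral."""
    assert len(item_scores) == 10 and all(0 <= s <= 3 for s in item_scores)
    total = sum(item_scores)
    suicidal = item_scores[9] >= 3           # tenth item, zero-indexed
    if total >= 18 or suicidal:
        return "refer"                       # severe depression or suicidal ideation
    if total >= 10:
        return "recruit"                     # screen-positive: invite into cohort
    return "screen-negative"

# Example: a total of 12 with item 10 scored 0 is eligible for recruitment.
print(epds_triage([2, 1, 1, 2, 1, 1, 2, 1, 1, 0]))  # -> "recruit"
```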
Outcome assessments
Baseline assessments will be conducted by research assistants in the respondent's home (or any other place of her choice) within 72 h of the woman screening positive and consenting to enter the study (Table 1). The primary outcome assessment will be at 6 months postpartum, with secondary assessment points at 3 and 12 months postpartum, all at the participant's home. The outcome assessments from this cohort will be compared with those of the participants in Cohort Two (see below) to determine the effectiveness of the interventions delivered following the training of the providers.

Respondents will be administered a battery of questionnaires including: a short questionnaire enquiring whether they have been asked questions about their psychological health by the maternal care providers, and about their experience of care in the clinic; the Client Service Receipt Inventory-Postnatal version (CSRI-PND), to evaluate their use of services in the previous month [25]; the World Health Organization Disability Assessment Schedule (WHO-DAS), to determine their level of disability [26]; and the Patient Assessment of Chronic Illness Care questionnaire [23], to further examine their assessment of the profile of care in the facility. The postpartum assessments will also collect information about infant feeding, and about the child's vaccination history, experience of common illnesses (fever, diarrhoea) and developmental milestones.

Development of a training manual
We will develop a training manual for use by the Trainers, with a guide that describes the methods of training primary care workers, including the organization of didactic lectures, the use of PowerPoint slides, and the conduct of role plays.

Cascade training

Participants
The supervisory physician and three of the most senior cadres of primary health care workers (this category includes nurses and senior community health officers [CHOs]) from each of the 11 local government areas in which the study is to be implemented will be recruited as Trainers (N = 44). These Trainers will subsequently provide training for the end users: the frontline primary care providers who deliver care for perinatal women. All the frontline providers in each of the selected clinics will be invited to participate in the trainings. We aim to train at least 200 frontline primary care workers.

Training of trainers
One Training of Trainers (ToT) workshop will be conducted by two psychiatrists (Master Trainers) with experience in training providers in the use of the mhGAP-IG and in perinatal mental health (Fig. 3). The Trainers will be trained in the use of the mhGAP-IG to manage perinatal depression (across a range of severity) and suicidality, and in how to train the end users and provide supervision and support to them.

Training of frontline primary care providers
Subsequent to the ToT, the trained Trainers will conduct training workshops with the end users of the mhGAP-IG. This training will utilize the training materials developed by the Master Trainers and will be supervised by the Master Trainers. We plan for each training workshop to be facilitated by at least two of the trained Trainers. Each 2-day workshop will be attended by no more than 20 participants to ensure effective and interactive training. Participants will be tested pre- and post-training on knowledge and attitude. They will also provide structured ratings on the content and delivery of the training. At the end of each training workshop, the Master Trainer will hold a de-briefing session with the Trainers during which a review of the workshop will be conducted and the lessons learnt noted.

Use of a screening instrument
We will explore the impact of incorporating a short screening tool for depression into the routine clinical assessment of women presenting for antenatal care. After the training of the providers in the use of the mhGAP-IG for the detection and treatment of perinatal depression, participating clinics will be divided into two groups.
In one group of randomly selected clinics, providers will use the 2-item Patient Health Questionnaire (PHQ-2) to routinely screen consecutive women attending the maternal clinics to determine the need for further assessment for depression. Providers in the second group of clinics will not use a screening tool before assessing the antenatal women for depression.

Assessing knowledge retention
Approximately 6 months after the initial training, 40 randomly selected trained PHCWs will receive another test to determine the level of knowledge retention, the level of drift in the acquired skills, and the factors related to retention or drift. Such factors may include the characteristics of the providers.

Refresher training
Approximately 6 months after the initial training of frontline primary care providers, a refresher training essentially like the initial one will be provided for the PHCWs in the PHQ-2-using clinics. Participants at this training will also be tested pre- and post-training.

Supportive supervision
The frontline providers in all the clinics will have structured fortnightly supportive supervision conducted by the Trainers. A structured checklist will be designed to cover clinic organization and service delivery activities geared towards identifying and providing care for perinatal depression, on which the supervisors can score each clinic on a Likert scale. The supervisors will also provide technical and other assistance to the frontline providers as needed.

Participants
Consecutive women making their first antenatal visit will be screened for depression with the EPDS after their consultation with the primary care providers (Fig. 2). Women who screen positive will be invited to participate in the study. At an estimated prevalence of at least 10% and about 5% refusals, we expect to screen about 3500 women to recruit 334 women with scores at or above the cut-off level who consent to participate in the study (see the section on sample size determination). Women who screen positive, irrespective of whether they have been identified as depressed by the PHCW, and who provide informed consent to participate, will constitute Cohort Two for the study. Those who screen positive but have not been identified by the primary care workers will be advised to see the providers again and notify them of their depression status. This applies particularly to those judged to have severe depression (an EPDS score of 18 or more) and those who endorse the suicidality item of the EPDS, to ensure that they get timely treatment. The providers are also required to consult with the project psychiatrist for the management of patients with serious suicidal risk.

Interventions
The interventions will be based on the treatment specifications in the WHO Mental Health Gap Action Programme Intervention Guide (mhGAP-IG) as adapted for the health system of Nigeria [27]. The mhGAP-IG depression module provides detailed guidelines for the management of moderate to severe depression, with special consideration for pregnant or breastfeeding women. The mhGAP-IG emphasizes the use of psychosocial interventions for depression in pregnant and breastfeeding women, with the lowest effective dose of an antidepressant being used when there is no response to psychosocial treatments.
In line with this, interventions for perinatal depression in this study will include psychoeducation, addressing current psychosocial stressors, and reactivating social networks. Psychoeducation will be offered to every woman enrolled in the study, as well as to her family members as appropriate. Psychoeducation involves an explanation of the diagnosis to the patient in simple language, using local expressions while avoiding the label 'mental illness/disorder'. The patient is helped to understand that the symptoms being experienced are not the result of laziness or supernatural forces but an ailment that is common and amenable to treatment. Addressing current psychosocial stressors entails offering the patient an opportunity to talk about current psychosocial problems and, to the extent possible, the health worker addressing pertinent social issues and assisting the patient to solve the problems with the support of available community resources. In reactivating social networks, the patient is encouraged to re-initiate prior social activities that have been neglected on account of the illness. The health worker is expected to follow up the patient regularly; however, the number and frequency of visits will be at the discretion of the attending health care provider. Low-dose medication will be indicated for women who do not improve with the psychosocial treatments. Medication will only be given after consultation with the primary care physician.

Primary outcomes
The primary implementation outcome is the change between the two cohorts in the identification of perinatal depression by the providers. The primary effectiveness outcome is the difference in remission rates from depression at 6 months between the two patient cohorts. Remission is defined as an EPDS score of 5 or less at 6 months postpartum.

Secondary effectiveness outcomes
The secondary effectiveness outcomes will be (1) the difference in the level of disability between the two patient cohorts at 6 months postpartum, as assessed with the WHO-DAS [28]; and (2) the difference in infant growth and development outcomes between the two cohorts.

Secondary implementation outcomes
A range of mixed-methods assessments will be conducted. (1) Assessment of quality of intervention: We will conduct a detailed assessment of the providers' fidelity to the mhGAP-IG specifications for treating depression. This will be done by research supervisors sitting in during clinical encounters between PHCWs and depressed women. We will use the 18-item Enhancing Assessment of Common Therapeutic factors (ENACT) rating scale [29] to evaluate the extent to which providers are using the skills acquired during the training to provide appropriate psychological assessment and intervention for the women. ENACT is designed to evaluate the delivery of psychological interventions, especially as prescribed in the mhGAP-IG. We will modify it to also include decisions about referral and the use of medication. Twenty-five consultations will be rated with the tool. (2) Assessment of contextual factors affecting delivery of intervention: Qualitative interviews will be conducted with selected PHCWs (N = 20), women who recover from depression and remain well through the follow-up period (N = 15), and women who fail to make a consistent recovery (N = 15), to understand the contextual factors that enable or inhibit the delivery of effective treatment using the mhGAP-IG.
We will be interested to learn how integrated care of perinatal depression has improved overall health care (for example, improved quality of communication with service users and improved service user satisfaction with care), how it has affected the functioning of the different components of care, including factors that facilitate or act as barriers to the delivery of the intervention at a systems level (for instance, information and human resources), as well as issues relating to stigma and discrimination. (3) Process evaluation: We will conduct process evaluations alongside the main study to enable us to (a) assess the quality of implementation and (b) identify contextual factors that could affect the scaling up of interventions for perinatal depression. We will target two processes for detailed evaluation: the training of the primary care workers by the trained Trainers, and the delivery of the interventions for depression using the mhGAP-IG.

• The training process: We will assess the changes in knowledge and attitude of the Trainers following the ToT workshop using pre- and post-test questionnaires, the knowledge of depression scale [30], and the depression attitude questionnaire [31], administered before and after the training sessions. We will evaluate the extent to which Trainers demonstrate fidelity to the training procedure when they train the PHCWs. Master Trainers will sit in at the training workshops delivered by the trained Trainers as non-participant observers. During these observations, the Master Trainers will document their assessment of the training procedure using a semi-structured observation pro forma specifically designed for this purpose. Some of the cascade training sessions will be video-recorded for further training purposes.

• The trainees: We will similarly assess the changes in the knowledge and attitude of the trainees, who will be at the frontline of delivering the interventions. This will be done using the same pre- and post-training tests, drawing on the knowledge of depression scale [30] and the depression attitude questionnaire [31].

• Other process evaluation activities: We will document findings on assumptions, potential barriers and facilitators, and link them to our Theory of Change map. We will also track and monitor the turnover, availability and transfer of mhGAP-trained staff and supervisors, the frequency of supervision sessions, and women's attendance at follow-up sessions, and track any contextual changes in the course of the study.

(4) Observational component: Throughout the study period, trained research assistants and field supervisors will pay both scheduled and unscheduled visits to the facilities to document the visible organisational and operational aspects of delivering care for perinatal depression, using a pre-designed pro forma. This is essentially to supplement and triangulate the information obtained from key informant interviews and focus group discussions.

(5) Effects of COVID-19: Although unexpected, the pandemic might affect the later stages of the recruitment and follow-up period. If this happens, we plan to track factors related to COVID-19, for example, patient flow, ease and regularity of follow-up, and any increase in domestic violence due to lockdown, given its strong association with depression in women.
Blinding and protection against sources of bias
The following steps will be taken to reduce the risk of bias in this study: (1) The study is designed to ensure that the risk of contamination between clinic clusters is low, as women are unlikely to move between clinics using the PHQ-2 and those that are not, because of the clinics' geographical spread and because there will be no publicity regarding the use of the PHQ-2 in some clinics. (2) Outcome assessors at the women's homes are not involved in screening women at the clinics, and are blind to whether women received care in PHQ-2 clinics or not.

Data collection and quality control
Quality control of field work will be implemented by research supervisors; this includes random checks on the quality of interviews (conducted by physically observing at least 10% of the interviews conducted by each research assistant). Supervisors will also work with the Data Manager to check that research assistants have correctly captured the study data.

Data protection
All data will be kept anonymous by using codes to identify individuals. Data will be uploaded to a server located in the central office, where they will be cleaned and stored. Members of the research team can access the datasets through a password-protected entry.

Sample size
Experience from the control arm of our recently concluded randomized controlled trial of intervention for perinatal depression suggests that just over 70% of women recover at 6 months [32]. The control arm of that study received a low-intensity intervention based on the specifications of the mhGAP-IG and delivered by trained PHCWs. We expect about the same rate of recovery in the current study. Prior to training, we assume a recovery rate of 55% at 6 months following delivery, based on our previous observations [19]. We consider this difference of 15% clinically meaningful enough to promote changes in routine maternal care. In the said RCT, we were able to complete primary outcome data at 6 months postnatal for about 85% of the participants. We plan to recruit about twice as many mothers after the training of the providers as before the training (that is, a ratio of 2:1 following training) in order to provide more information about contextual and health system factors that help or hinder effective implementation. We estimate a sample of 167 prior to training and 334 post-training to detect a difference of 70% vs 55% (equivalent odds ratio = 1.8) with 80% power at the two-sided 5% alpha level. The correlation between the successive pre- and post-training tests conducted for the providers during the demonstration project in the State of Osun was 0.37, and the within-subject variance was 0.352 [18]. Using these values, and with a projected power of 80% and a Type I error of 0.05, we estimate the number required to detect a difference of 1 unit between test scores (considered a meaningful change in the assessment tool we plan to use) to be 112. Assuming an attrition of 15%, we will need a total of 129 providers to demonstrate a meaningful change between pre- and post-training scores, as well as the stability of skills during refresher training on average 3 months after the first training.
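As a rough check on the patient sample size, the snippet below reproduces the standard two-proportion calculation for 55% vs 70% recovery at 80% power and a two-sided 5% alpha. This is our own back-of-envelope sketch in Python, not the protocol's calculation (which may include refusal and continuity-correction adjustments); it lands close to the protocol's figure of 167 per pre-training cohort.

```python
import math

def n_per_group(p1, p2):
    """Sample size per group for comparing two independent proportions
    (normal approximation, equal allocation, two-sided test)."""
    z_a = 1.959964  # z for two-sided alpha = 0.05
    z_b = 0.841621  # z for 80% power
    p_bar = (p1 + p2) / 2
    num = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

print(n_per_group(0.55, 0.70))  # -> 163, close to the protocol's 167
```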
Study instruments
Assessment of Chronic Illness Care (ACIC) [23]: a tool specifically designed to obtain information about the organizational structure of a health facility with respect to delivering effective intervention for chronic conditions. The tool has been previously adapted for LMIC settings and will be further adapted for the specific focus of this project. This tool will be administered to facility managers by trained research assistants during the formative phase of the project.

Patient Assessment of Chronic Illness Care questionnaire (PACIC) [23]: will be used to evaluate patients' perspectives on the care delivered in the facility. This instrument will be administered to participating antenatal women in both cohorts during the follow-up periods.

Edinburgh Postnatal Depression Scale (EPDS): The EPDS is a 10-item screening instrument for depression. It has been validated and used in earlier studies of perinatal depression in Nigeria [33]. The EPDS will be used by trained research assistants to screen consecutive women presenting for their first antenatal visit, after their consultation with the primary care providers, to determine their eligibility for enrolment into the study.

Patient Health Questionnaire-2 (PHQ-2) [34]: This is a two-item instrument for the rapid screening of depression. It consists of the first two items of the 9-item Patient Health Questionnaire (PHQ-9) [35], which has been validated among Nigerian students [36]. This tool will be used by previously trained primary care providers in half of the participating maternal clinics to assist them in the detection of perinatal depression.

Client Service Receipt Inventory-Postnatal version (CSRI-PND): will be used to evaluate the participating mothers' use of services in the month prior to each assessment time point [25]. The CSRI-PND is an adaptation of the Service Utilization Questionnaire (SUQ) [37], which we have used in previous studies in collaboration with its principal designer. We shall use this instrument in follow-up assessments by research assistants to systematically collect resource-use data from the participating perinatal women, including any inpatient care, consultations with health providers, use of drugs and laboratory tests, and the time and travel costs associated with this service uptake. We have recently adapted the CSRI-PND for the Nigerian health setting by applying it to antenatal women [19].

WHO Disability Assessment Schedule (WHO-DAS): The WHODAS was developed for measuring functioning and disability, in accordance with the International Classification of Functioning, Disability and Health, across different populations. The WHODAS II has high internal consistency (Cronbach's alpha: 0.86), a stable factor structure, high test-retest reliability (intra-class correlation coefficient: 0.98), and good concurrent validity in patient classification [28]. The WHO-DAS will be used by trained research assistants to collect disability data from the participating perinatal women during the follow-up period.

Infant well-being assessments: The current level of cognitive, language, personal-social, and fine and gross motor development of the offspring of the perinatal women will be assessed by research assistants with an infant well-being questionnaire. This questionnaire was developed by us; it incorporates questions on child nutrition, achievement of developmental milestones, illnesses and immunization, together with measurements of length/height and weight, and will be administered at 3 months, 6 months and one year post-delivery.

Enhancing Assessment of Common Therapeutic factors (ENACT) rating scale: This is an 18-item tool developed to provide reliable and valid assessment of therapist competence in a variety of cultural and service settings [29].
The tool will be used by research supervisors to evaluate the fidelity of the care providers during clinical encounters between them and the depressed women.

Knowledge of Depression Scale: This is a 27-item multiple choice questionnaire designed to assess basic knowledge about depression and its treatments [30]. Selected items from the tool will be used to test changes in the knowledge of frontline service providers during the training process, as well as to assess knowledge drift in the periods after the training.

Revised Depression Attitude Questionnaire (R-DAQ): This is a 22-item tool for examining clinicians' views and understanding of depression [31]. Items from the R-DAQ will be administered to frontline providers to assess changes in attitude during and after their training workshops.

Data analysis
Qualitative interviews will be transcribed. Interviews with women will be conducted in Yoruba and translated into English, following which back-translation checks will be applied. The data generated will be analysed using thematic analysis, with the assistance of a qualitative software package such as MAXQDA. We will use descriptive statistics to assess balance between Cohorts One and Two at baseline, for both PHC and individual participant characteristics. In order to take appropriate account of the hierarchical nature of the data, we will use multivariate mixed effects logistic regression to estimate recovery from depression at 3 and 6 months for Cohort Two versus Cohort One, adjusting for baseline depression. These analyses will be repeated for the depression, disability and service use characteristics of the mothers and the growth profiles of the infants. We will conduct sensitivity analyses to assess the potential effect of missing data, using multiple imputation methods. We will investigate the effect of adherence to the intervention using instrumental variable regression. Appropriate interaction terms will be entered into the primary regression analyses for recovery from depression in order to conduct pre-specified subgroup analyses, which will include baseline symptom severity (EPDS score 10-18 vs 18+) and duration (≤ 1 month, > 1 month). Data from the ACIC and the PACIC will be analysed using descriptive statistics. Quantitative data from the patient flow observation checklist will also be analysed using descriptive statistics. All analyses will be conducted using STATA.
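Although the protocol specifies STATA, the primary model is easy to sketch in Python for illustration. The snippet below fits a mixed-effects logistic regression of 6-month remission on cohort, adjusting for baseline EPDS score, with a random effect for clinic; the file and column names (remitted, cohort, baseline_epds, clinic) are hypothetical placeholders, and this is a sketch of the analysis, not the study's actual code.

```python
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Hypothetical analysis dataset: one row per woman.
df = pd.read_csv("spectra_outcomes.csv")  # columns: remitted, cohort, baseline_epds, clinic

# Fixed effects: cohort (post- vs pre-training) and baseline severity;
# a random intercept for clinic respects the hierarchical design.
model = BinomialBayesMixedGLM.from_formula(
    "remitted ~ cohort + baseline_epds",
    {"clinic": "0 + C(clinic)"},
    df,
)
result = model.fit_vb()  # variational Bayes fit
print(result.summary())
```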
Study status
At the time of submitting this manuscript, the SPECTRA study is in the implementation phase. A Theory of Change map was produced from the formative phase of the study (Fig. 4). The recruitment of the first cohort of participants and the six-month post-delivery assessments have been completed. The Training of Trainers workshop has been carried out, and the trained Trainers have completed the stepped-down cascade training of other health care workers in all the selected clinics. The recruitment of the second cohort of women, as well as the 6-month postnatal assessments, have been completed for all participants. The one-year post-delivery assessment is currently ongoing. Experience in the field as the study progressed led to several iterations of the study protocol, each of which required ethical approval. This delayed the decision to publish until the final version of the work plan was clear and ethical approval was obtained.

Discussion
This study represents an attempt to systematically assess and document an implementation strategy that could inform the scaling up of evidence-based interventions for perinatal depression using the WHO mhGAP Intervention Guide in low- and middle-income countries. While effectiveness trials have demonstrated that evidence-based interventions for perinatal mental disorders can be delivered by trained primary care workers, there is a need to understand the contextual and other factors that may facilitate or impede the scaling up of these interventions in routine maternal care. Considering the prevailing resource constraints that characterize the mental health care systems across most LMICs, pragmatic and innovative approaches to providing training, support and supervision to primary care workers to improve the delivery of mental health care are needed. A feasible and sustainable approach is to equip more senior and experienced primary care providers with the skills required to train and supervise the frontline health care workers in the delivery of care for common mental disorders, with mental health specialists providing support.

In the formative phase of SPECTRA, a key issue raised by stakeholders was how to improve the identification of perinatal depression by the trained providers. While training primary care providers is known to improve identification, the available evidence suggests that, even in high-resource settings, up to 50% of women with the condition are not correctly identified. Studies have shown that the use of a depression screening instrument can aid the identification of such women [38]. In our previous trials of the effectiveness of depression treatment, we used trained research staff to administer screening instruments to identify participants with probable depression for primary care providers to assess and diagnose [19, 39]. We are yet to explore the feasibility of primary care providers incorporating a depression screening tool into routine patient assessment. To assess this, in the implementation phase, half of the selected clinics will be randomly assigned to routinely administer a brief depression screening tool, the 2-item Patient Health Questionnaire (PHQ-2), to all women as part of their assessments at each antenatal and postnatal visit. Women scoring 3 or more will be further assessed using the mhGAP-IG. In addition to improving the recognition of depression, routine screening has the potential to enable health care providers to target the women most at risk for further evaluation, thereby saving time. Our study will provide useful information about the feasibility, acceptability and impact of this approach in the identification and treatment of perinatal depression in routine maternal care in resource-constrained settings.

We acknowledge the potential limitations of our study design. For example, in a classic uncontrolled before-and-after study design like ours, changes observed in the clinics in terms of the detection and treatment of perinatal depression post-intervention may not be due to our intervention alone. They may also be due to other interventions not anticipated by us, such as new and unforeseen in-service training exposure for frontline primary care clinicians outside of our programme. We do not anticipate this during the life of the project because of the rarity of such opportunities for the clinicians.
To take account of such unlikely developments, approaches that are less prone to confounding of this sort, such as a controlled before-and-after design or a stepped-wedge design, might be suitable. We also note the limitation of self-report measures, which may be prone to social desirability bias. However, we are collecting all information anonymously, and clinicians are aware that no
2021-09-20T13:32:35.382Z
2021-09-20T00:00:00.000
{ "year": 2021, "sha1": "83ae3ae2cdfa4329a29eb4e383e2350fa6cdc0e0", "oa_license": "CCBY", "oa_url": "https://ijmhs.biomedcentral.com/track/pdf/10.1186/s13033-021-00496-6", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "75a25d9ccea8260e8ad1044f10c9e385484fd2ae", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
238113263
pes2o/s2orc
v3-fos-license
Ways to improve the energy efficiency of renewable energy sources

Ways of increasing the energy efficiency of the use of renewable energy sources are considered. Descriptions are given of universal energy complexes, patented at SKGMI (GTU), that simultaneously use converters of various types to turn solar rays and wind flows into electrical energy. The prospect of using renewable energy sources for the autonomous power supply of objects in mountainous regions is substantiated.

Introduction

Currently, renewable energy sources (RES) are being widely introduced in various areas of energy supply [1, 2], and increasing the energy efficiency of their use is therefore becoming important. The use of RES is characterized by different types of efficiency: economic, social, environmental and energy. These types are inextricably linked, and a separate consideration of each type is of a methodological nature, designed to accentuate certain parameters of the single process of using RES. Below, three types of RES installations are considered: solar panels, wind farms and low-power hydroelectric power plants.

Materials and methods

The energy efficiency of the use of RES is determined by the following main parameters: the energy payback period; the price and quality of the generated energy; the stability of output parameters; the reliability of power supply; etc. The main ways to increase the energy efficiency of RES are shown in Figure 1. The most significant are: increasing the efficiency of the primary converters of natural energy flows (sun rays, wind, water flow) into electrical energy; reducing the energy payback period; and preventing the anomalous impact of natural environmental factors on RES facilities, that is, manifestations of various kinds of environmental risks. These methods are integral and generalizing in nature and determine the general state of the facility that generates renewable energy. In Figure 1, in addition to those already listed, other methods that are important in practice are also indicated.

One of the main ways to increase the energy efficiency of RES is to reduce the energy payback period of power plants based on RES (photovoltaic, wind and micro hydroelectric plants). In this case, the full life cycle of the object is considered: production of the installation, including all auxiliary operations; installation, with the necessary change in natural conditions; operation; and disposal of the used parts of the installation and, eventually, of the installation itself. The period over which the facility generates an amount of energy equal to the "bound" energy spent on it during its entire life cycle is called the facility's energy payback period. The smaller the amount of "bound" energy and the greater the productivity (the volume of generated energy per unit time) of the facility, the shorter its energy payback period and, therefore, the greater the efficiency of using the corresponding installation. Reducing the energy payback period (usually measured in years) can be achieved by implementing measures in two directions: rationalizing the auxiliary operations of the life cycle, and increasing the efficiency of the object itself, i.e. increasing energy generation.

If we consider the full life cycle of the use of solar cells, the following stages should be highlighted: extraction of the initial material (silicon), its purification, the technological process of manufacturing the solar cells, installation of the solar cells in a specific place, operation, dismantling and disposal. All stages are characterized by energy consumption, to which the associated transport operations and their energy consumption must be added. One can write the following equation to determine the energy payback period:

Σ_k W_k = W_av · t_ok, (1)

where W_k is the energy consumption at the k-th stage of the life cycle of the object under consideration, kW·h; W_av is the average annual energy production by the facility, kW·h/year; and t_ok is the facility's energy payback period, years. In formula (1), W_k and W_av are determined by calculation or experiment for the object under consideration; the energy payback period is then defined as the ratio

t_ok = (Σ_k W_k) / W_av. (2)
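A small numerical sketch of formulas (1)-(2) in Python is given below; the stage names and energy figures are invented purely for illustration.

```python
# "Bound" energy of each life-cycle stage of a hypothetical solar installation, kWh.
bound_energy = {
    "material extraction":   4_000,
    "purification":          9_000,
    "cell manufacturing":   12_000,
    "transport":             1_500,
    "on-site installation":  2_500,
    "dismantling/disposal":  1_000,
}
W_av = 8_000  # average annual energy production of the facility, kWh/year

# Formula (2): payback period = total bound energy / average annual production.
t_ok = sum(bound_energy.values()) / W_av
print(f"Energy payback period: {t_ok:.2f} years")  # -> 3.75 years
```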
As already mentioned, to reduce the energy payback period, the energy intensity (energy consumption) should be reduced at all stages of the complete life cycle of RES use. In addition, it is necessary to increase the level of energy output by the facility for the development of renewable energy.

One of the ways to increase the energy efficiency of using RES, as shown in Figure 1, is the concentration of natural energy flows, i.e. an increase in their intensity. Sun rays and wind are characterized by a low concentration of energy (the amount of energy per unit area of the working surface of the converter installation). This leads to the need to use concentrators of various kinds.

Results

The installations developed at the North Caucasian Mining and Metallurgical Institute (State Technological University), Vladikavkaz [4-8] use concentrators of, respectively, solar rays and air flow. To increase the intensity of solar irradiation, the wind power installation uses reflectors that direct the sun's rays onto the working surfaces (solar panels) of the installation; to increase the effect of the wind, a guiding device is used that changes the direction of air movement towards the working body, the blades of the wind generator drive. The use of concentrators can increase the productivity of the installation by approximately 1.2-1.5 times.

One of the problems associated with the use of RES is the inconstancy of the parameters of the natural energy flows used in energy generation. The wind can change speed and direction; solar irradiation can change with weather conditions (clouds), with the time of day, etc. All this leads to significant fluctuations in the output power of the installation. However, the power supply to consumers should not depend on these fluctuations; therefore, wind and solar generators must have appropriate devices that stabilize their output parameters.

One of the effective ways to stabilize the output parameters of generating sets using RES is the use of intermediate energy storage devices (Figure 2). In Figure 2, SES denotes the natural energy flows of renewable sources, which the primary converter (generator) turns into electrical energy; this energy can be transferred partially to the consumer and partially to the storage device. The energy balance equation in general form is

S_G = S_P + S_N, (3)

where S_G is the output power of the converter (solar panel, wind generator, etc.); S_P is the power consumed by the load; and S_N is the power transmitted to the storage device.
With an excess of generated power, part of it is transferred to the storage device, which then operates in consumer mode (S_N takes the "+" sign). If the generated power is insufficient (in comparison with that required for normal operation of the consumer), power flows from the storage device, and S_N takes the "-" sign (i.e. it moves with a plus sign to the left-hand side of equation (3)). In some cases, all the power from the generator goes to the storage device, whose energy capacity must then be significantly higher than the power consumption of the load, in order to create a stability margin in the overall energy system using RES.

More formally, a real electrical circuit with power supply from RES can be represented as a circuit with an active four-pole (four-port) network, shown in Figure 3. An essential point when using an active four-port network is the condition that its power and energy capacity can compensate both for fluctuations in the energy supply from RES and for possible fluctuations in energy consumption by the load. To this end, its energy parameters should be selected in the calculation with a margin that takes these possible fluctuations into account.

As is well known, an active four-pole, i.e. one containing an EMF E_VN inside itself, can be replaced by a passive one if all the EMFs contained inside the four-pole are short-circuited and replaced by their internal resistances, and additional EMFs are added to the four-pole equations: an EMF E_01 in the primary circuit and an EMF E_02 in the secondary circuit, equal to the voltages at the open terminals of the active four-port network. Using this technique, the equations of the active four-port network can be written in symbolic form, for example in the fairly common Z-form:

U_1 = Z_11 I_1 + Z_12 I_2 + E_01, (4)
U_2 = Z_21 I_1 + Z_22 I_2 + E_02. (5)

When deriving equations (4) and (5), it was assumed that the EMFs of the sources do not depend on the currents in them. Based on the above reasoning, the equations of the four-port network can be written in other forms and used in accordance with the specifics of the task. The introduction of an intermediate four-port network with an energy storage device between the generator and the consumer is an effective means of stabilizing the output parameters of a RES power plant.

For individual objects using energy supply from RES, an important circumstance is the possibility of placing the generator set in close proximity to the consumer (for example, placing solar panels on the roof of a house to supply electricity to consumers inside it). In this case, there are practically no energy losses in transporting the energy, which significantly increases the energy efficiency of the use of RES. In general, one can write the power balance

P_GEN = P_PR + P_TR + P_N, (6)

where P_GEN is the full power of the generator set; P_PR is the power taken by the converting devices; P_TR is the power expended in transmitting energy to the consumer; and P_N is the power consumed directly by the load. To transport energy, appropriate converting devices and transmission lines or cables are required; the absence of transportation means direct consumption of the output power at the place of its production. At the same time, there are no losses either in converting devices or in power transmission lines. In addition, the reliability of the system increases, the production of energy becomes cheaper, and the accompanying environmental effects decrease; in general, the energy efficiency of the use of RES increases.
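The dispatch logic behind the balance equation (3) can be sketched in a few lines of Python; the hourly generation and load profiles and the storage capacity below are invented purely for illustration.

```python
def dispatch(generation, load, capacity, charge=0.0):
    """Track storage state under the balance S_G = S_P + S_N of eq. (3):
    surplus power charges the storage (S_N > 0), deficits discharge it (S_N < 0).
    Assumes 1-hour time steps, so kW and kWh coincide numerically."""
    log = []
    for s_g, s_p in zip(generation, load):
        s_n = s_g - s_p                               # power to (+) or from (-) storage
        charge = min(max(charge + s_n, 0.0), capacity)  # clamp to physical limits
        log.append((s_g, s_p, s_n, charge))
    return log

# Invented hourly profiles (kW) for a small wind-solar installation.
gen  = [3.0, 5.0, 6.5, 4.0, 1.0, 0.5]
load = [2.0, 2.5, 3.0, 4.5, 3.5, 2.0]
for s_g, s_p, s_n, q in dispatch(gen, load, capacity=10.0):
    print(f"gen={s_g:4.1f}  load={s_p:4.1f}  storage flow={s_n:+5.1f}  charge={q:4.1f} kWh")
```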
Discussion

As an example of the spatial combination of a generator and a consumer, consider a road dividing block with an illumination system [8] that is supplied with electricity from renewable sources: wind and photovoltaic generators located directly on the block (Patent No. 196315, published 25 February 2020; authors Yu.S. Petrov, M.K. Khadikov, A.K. Muzaev). On the upper part of the block there is a bladed wind wheel that uses the effect of oppositely moving air flows (corresponding to the oncoming traffic flows on opposite sides of the block, which is part of a dividing road barrier); solar panels are applied to the sides of the block; and inside the block there are switching, storage and conversion devices, from which electricity is transmitted directly to the luminaires fixed on the side surfaces of the block. The self-sufficiency of the lighting system of the road dividing barrier allows a reliable and efficient lighting system to be provided on motorways.

The principle of operation of such a hybrid installation consists in the simultaneous conversion of wind energy and solar energy into electricity. The benefits of this combination are clear. Firstly, the power of the plant increases while its dimensions remain practically unchanged in comparison with installations using only one type of conversion. Secondly, the reliability of generation increases, since in the absence of one source of energy the other can operate; the simultaneous cessation of two energy sources of different types is unlikely. Thirdly, the total density of energy taken from a unit volume of shared space increases, i.e. the energy flows being used are concentrated by superimposing them on each other in different parts of the overall converter installation. Fourthly, the environmental damage from the operation of a hybrid plant is reduced in comparison with plants using only one type of RES (single-profile plants). Finally, an obvious decrease in the energy payback period of hybrid installations should be noted, owing to the significant increase in power.

In principle, hybrid installations combining renewable energy sources of any type (wind, solar, hydro and other converters) are possible. The development of a hybrid plant of a specific type depends on the availability of the corresponding types of renewable energy in one place, on the characteristics of the consumer, on the environmental situation, etc. According to a number of researchers [9-11], hybrid installations for the simultaneous use of RES of various types have a great future.

An increase in the energy efficiency of a RES installation can also be achieved by implementing a matched regime between the energy source and the load. As is well known, for a linear active two-terminal network with a known internal resistance R_VN for direct current, or Z_VN = R_VN + jX_VN for alternating current (i.e. consisting of an active component R_VN and a reactive component X_VN), the conditions for the release of maximum power in the load are as follows.

For direct current:

R_N = R_VN. (7)

For alternating current:

Z_N = Z_VN*, i.e. R_N = R_VN and X_N = -X_VN. (8)

That is, the complex resistance of the load must be equal to the complex conjugate of the internal resistance of the energy source.
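A quick numerical check of the matching condition (8) can be made in Python; the source impedance and EMF values are arbitrary, chosen only for illustration.

```python
def load_power(z_source, z_load, emf=230.0):
    """Active power delivered to a load z_load (ohms) by a source with
    EMF `emf` (volts, RMS) and internal impedance z_source (ohms)."""
    current = emf / (z_source + z_load)
    return (abs(current) ** 2) * z_load.real

z_vn = complex(4.0, 3.0)          # internal impedance R_VN + jX_VN
z_matched = z_vn.conjugate()      # condition (8): Z_N = conj(Z_VN)

print(load_power(z_vn, z_matched))         # maximum power: 3306.25 W
print(load_power(z_vn, complex(8.0, 0.0)))  # any other load delivers less
```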
In the event that the internal resistance of the energy source (active two-terminal network) is nonlinear, as is the case, for example, for a solar battery, the relationship between the resistances of the source and the load becomes more complicated, and the analytical determination of the parameters of the matched mode requires an analytical expression for the output current-voltage characteristic of the source.

To increase the reliability of the power supply to consumers, it is possible to create a network of autonomous power plants that use RES and form a single energy system autonomously supplying a certain number of consumers. In addition to the generating sets and consumers, such a network will include a common energy storage and distribution system, whose energy capacity will depend both on the number and power of the consumers and on the fluctuations of the generated power in the system. The use of converters of various types (wind, hydro, solar batteries) in the overall generation network will significantly reduce the risk of critical fluctuations in the generated power (the simultaneous failure of several different installations is unlikely) and will increase the system's resistance to changes in external factors.

Figure 4 shows a diagram of such a network of installations containing wind generators, hydro generators and solar batteries. The network works as follows. The energy generated by the different installations is transmitted to the common switchgear, together with the necessary information about the operating mode of each installation. From the consumers, information is transmitted to the switchgear about the power required for their normal operation. The generated power is distributed among the consumers; if it is insufficient, the storage station is activated, and any excess of generated power replenishes the energy reserves of the storage station. The operation of the switchgear is organized in accordance with the information received from the consumers, the storage station and the generating sets. In some cases, to increase the overall reliability of the network of RES facilities, the system can be supplemented with a diesel generator, which is switched on in abnormal situations involving a sharp decrease in the generated renewable power.

The operation of various generating sets using RES on a common (autonomous) power grid, with the corresponding consumers and a storage station, will significantly improve the operational parameters of the system and its energy performance, and, consequently, the energy efficiency of the use of RES.

Mountain areas offer wide opportunities for using RES. However, in mountainous conditions there is a real danger of environmental risks of various natures. The most typical of them are: river floods and mudflows in summer; snowfalls and avalanches in winter; showers, strong winds, rockfalls, etc., up to environmental disasters. In a number of cases, and not only in mountainous areas, natural factors can cause accidents both in the centralized electric power system and in autonomous power supply systems using RES.
Energy efficiency and the stability of operation of electrical installations in this case will depend significantly on the possible hazardous impact of environmental risks, particularly in mountainous areas, especially since the bulk of RES installations are usually located in mountain conditions owing to the presence there of a large number of renewable energy sources of various types. For RES installations, abnormal natural impacts have the most destructive consequences, because these installations directly use natural energy flows (wind, the water flow of a mountain river) in their own designs. Therefore, the prevention of the impact of natural hazards on generating sets using RES is of utmost importance.

Environmental safety, and, consequently, the energy efficiency of installations using RES, can be ensured by implementing appropriate measures both at the design stage and at the operation stage. When designing, it is necessary to take into account the degree of environmental risk at the future location of the installation and the dynamics of changes in the environmental situation in the relevant area. During operation, measures to prevent the impact of environmental risks on the generating sets must be observed. The algorithm of actions to prevent the hazardous impact of environmental risks on generating sets using RES is shown in Figure 5.

To ensure environmental safety in the area under consideration, information is first of all needed about the ecological state of the natural environment (OPS), which can be obtained, in particular, using a GIS (geographic information system) for the relevant area, as well as information on the parameters of the natural energy flows (CES), such as wind, river flows and sun rays, that drive the corresponding generating sets. The assessment of the degree of hazard of the environmental risks is made by comparing the environmental parameters P_i with their normalized values P_iN:

δ_i = (P_i - P_iN)/P_iN ≤ δ_iD, (9)

where δ_i and δ_iD are, respectively, the real and the permissible relative deviations of the i-th ecological parameter P_i from its normalized value P_iN. Only cases in which P_i exceeds P_iN, that is, P_i ≥ P_iN, are considered; cases with P_i < P_iN certainly satisfy the safety conditions and are therefore not involved in further analysis. If all real deviations δ_i satisfy condition (9), then the ecological situation is recognized as normal and not dangerous for the operation of generating sets with RES. If any one parameter δ_k (or several parameters) does not satisfy condition (9), then the situation is recognized as dangerous (according to one or several parameters), and measures are required to reduce the influence of the relevant natural factors on the autonomous generating sets. After the implementation of the required measures, the monitoring of the ecological state of the environment and of the parameters of the natural energy flows is carried out again; if necessary, the described cycle is repeated. Prevention of the hazardous effects of abnormal manifestations of natural factors is a guarantee of the stable operation of generating sets based on RES and a significant way to increase the energy efficiency of RES.
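Condition (9) translates directly into a simple monitoring check. The Python sketch below, with invented parameter names and limits, flags the parameters whose relative deviation exceeds its permissible value.

```python
def assess_risks(measured, normalized, permissible):
    """Return the parameters violating condition (9):
    delta_i = (P_i - P_iN) / P_iN must not exceed delta_iD."""
    dangerous = {}
    for name, p in measured.items():
        p_n = normalized[name]
        if p < p_n:                        # P_i < P_iN always satisfies safety
            continue
        delta = (p - p_n) / p_n
        if delta > permissible[name]:
            dangerous[name] = delta
    return dangerous

# Invented example: wind speed and river flow for a mountain site.
measured    = {"wind_speed_mps": 28.0, "river_flow_m3s": 95.0}
normalized  = {"wind_speed_mps": 20.0, "river_flow_m3s": 100.0}
permissible = {"wind_speed_mps": 0.25, "river_flow_m3s": 0.30}

print(assess_risks(measured, normalized, permissible))
# -> {'wind_speed_mps': 0.4}: dangerous; river flow is below its norm and passes
```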
Conclusion

The use of renewable energy sources accounts for a growing share of the overall energy supply of almost every industrialized country and is increasing every year. To increase the energy efficiency of the use of RES, it is necessary to: increase the efficiency of the conversion plants that generate energy from RES; take measures to reduce the energy payback period of installations, considering their entire life cycle (production, operation, disposal); use concentrators of various kinds to increase the concentration of the natural energy flows converted into electrical energy; and stabilize the output parameters of generating sets, in particular through intermediate energy storage. A promising direction in the development of renewable energy generation is the creation of hybrid plants that combine converters of various types. A practical advantage of autonomous conversion plants based on RES is the possibility of locating them in the immediate vicinity of the consumer and of implementing a matched mode of the generator and load. It should also be noted, however, that the reliability of operation of autonomous generating sets powered by RES depends on external conditions, namely the ecological situation in the natural environment, in connection with which it is necessary to take timely measures to prevent the hazardous impact of environmental risks on the power supply systems of autonomous generating sets using RES.
2020-12-17T09:06:22.586Z
2020-12-01T00:00:00.000
{ "year": 2020, "sha1": "96e5ce5ccd90e863c3408e269c4d7deeb24027f0", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/976/1/012002", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "a53ce69b629a08318e7db865cf5b74b68d09737f", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Environmental Science" ] }
247476017
pes2o/s2orc
v3-fos-license
A Model of Job Parallelism for Latency Reduction in Large-Scale Systems Processing computation-intensive jobs at multiple processing cores in parallel is essential in many real-world applications. In this paper, we consider an idealised model for job parallelism in which a job can be served simultaneously by $d$ distinct servers. The job is considered complete when the total amount of work done on it by the $d$ servers equals its size. We study the effect of parallelism on the average delay of jobs. Specifically, we analyze a system consisting of $n$ parallel processor-sharing servers in which jobs arrive according to a Poisson process of rate $n \lambda$ ($\lambda<1$) and each job brings an exponentially distributed amount of work with unit mean. Upon arrival, a job selects $d$ servers uniformly at random and joins all the chosen servers simultaneously. We show by a mean-field analysis that, for fixed $d \geq 2$ and large $n$, the average occupancy of servers is $O(\log (1/(1-\lambda)))$ as $\lambda \to 1$, in comparison to the $O(1/(1-\lambda))$ average occupancy for $d=1$. Thus, we obtain an exponential reduction in the response time of jobs through parallelism. We make significant progress towards rigorously justifying the mean-field analysis.
Introduction
Consider a model consisting of $n$ parallel servers, each having one unit of processing capacity and employing the processor-sharing (PS) scheduling discipline. Jobs arrive according to a Poisson process with rate $n\lambda$ and each job brings a random amount of work, exponentially distributed with unit mean. Upon arrival of a job, $d$ servers are chosen uniformly at random. We assume an idealised model where these $d$ servers process the job simultaneously and the job leaves the system as soon as the total amount of work done on it by all $d$ servers equals its size. Since the servers operate according to the PS discipline, the instantaneous rate obtained by a job at a given server is a fraction of the server's total capacity, which is shared equally by all the jobs present at that server. Our goal is to study the average delay of jobs in the system as a function of the parameter $d$, which captures the degree of parallelism in the system. We focus on the limiting regime where $n$ tends to infinity. We note that for $d = 1$ the system reduces to $n$ independent parallel servers, for which the average delay is known to be $1/(1-\lambda)$. However, for $d \geq 2$, the servers' states are not independent of each other, since at any given instant a server interacts with the subset of servers with which it shares jobs. This makes the analysis for $d \geq 2$ more challenging. The motivation to study this idealised model comes from modern data centres, which often handle computing jobs that are highly parallelisable. For example, machine learning workloads such as TensorFlow jobs [1] and scientific computing jobs often require large numbers of parallel cores for efficient processing [30]. Other examples of large-scale parallelism include coded file retrieval systems [19] and parallel computing systems implementing the MapReduce framework [7]. The average time a job spends in such a system crucially depends on the degree of parallelism used, and on how the capacities of the processing cores are shared by the jobs present in the system. Various queueing models have been proposed to study the effect of parallelism and scheduling on job delay. Such models are, in general, difficult to analyze due to the dependency between servers induced by parallel jobs.
Many recent works analyse models in which jobs are divided into tasks and processed at multiple servers [15,32,27]; a job is considered to be completed when all its tasks finish processing. These are analysed as generalised Fork-Join (FJ) queueing models. Due to the assumption that servers operate a first-come-first-served (FCFS) policy, in these models a job may not be served simultaneously at multiple servers. Thus, the dependence among servers in these models is weak, in the sense that the servers are only coupled at arrival instants of jobs. Another class of models [2,28,33] studies systems consisting of a single central queue and multiple parallel servers. Jobs requiring multiple servers arrive into the central queue. If the required number of servers is available upon arrival, then the job joins all the servers simultaneously; otherwise, it waits in the queue until the required number of servers becomes available. In the models discussed above, servers do not have individual queues and as a result can serve only one job at a time. Our model differs in allowing each server to process multiple jobs simultaneously, and each job to receive service from multiple servers simultaneously. An important question in this context is how to select $d$, the degree of parallelism in the system. At one extreme, when $d = 1$, the average delay is known to be $\frac{1}{1-\lambda}$, whereas the other extreme of $d = n$ is equivalent to perfect resource pooling and has an average delay of $\frac{1}{n(1-\lambda)}$. But this comes at the cost of greatly increased communication between servers. To what extent can the benefits of resource pooling be realised with small values of $d$? Our key finding is that, in the large-$n$ limit for any fixed $d \geq 2$, the average delay of jobs scales as $O(\frac{1}{\lambda}\log\frac{1}{1-\lambda})$ as $\lambda$ increases to 1. This implies that even with a small degree of parallelism it is possible to obtain an exponential reduction in the average delay of jobs in heavy traffic. A small value of $d$ also implies that the communication overhead can be kept low. Our main contributions are the following.
Contributions
In Section 3, we present a mean-field approximation of the steady-state average delay of jobs. To obtain this approximation, we study the evolution of a tagged queue, assuming that all the servers with which the tagged server shares a job are in their equilibrium distribution at all times. This assumption yields a Markovian evolution of the tagged queue with transition rates which depend on the equilibrium distribution. Solving for the equilibrium distribution of this Markov process yields a fixed point equation. We show that this fixed point equation has a unique solution for every $\lambda < 1$. We solve it to obtain the equilibrium occupancy distribution and the mean occupancy per server. We show that the occupancy is bounded by a constant multiple of $-\log(1-\lambda)$ as $\lambda$ increases to 1; this is in stark contrast to the $1/(1-\lambda)$ scaling in the $d = 1$ case, and is reminiscent of the double exponential decay seen in the supermarket queueing model [31,22]. Using similar assumptions, we also obtain the evolution equations for the transient distribution of the mean-field model. In Section 4, we present evidence from extensive computer simulations to support the conjecture that the solution to the mean-field equations yields the invariant marginal occupancy distribution.
We overlay plots of the empirical cumulative distribution function (cdf) from simulations on the theoretical cdf from the mean-field equations, for systems consisting of $n = 100$ and $n = 500$ servers, and different values of the arrival rate, $\lambda$. The plots show that the empirical and theoretical cdfs are very close to each other. Likewise, the mean value from simulations is very close to the theoretical mean. We also present simulation results which indicate that for large system sizes the stationary queue-length distribution of our model is insensitive to the type of service-time distribution as long as its mean remains unchanged. This is a very desirable property for system designers, as it implies that system performance does not change with a change in the service-time distribution of arriving tasks. Next, we turn to the problem of establishing rigorously that the equilibrium occupancy distribution in the system with $n$ servers converges to the mean-field prediction as $n$ tends to infinity. The analysis turns out to be significantly more challenging than for other related mean-field models, due to the fact that the dependence among the servers' states is much stronger in our model. Indeed, in our model, a server's state is coupled with that of every other server with which it shares a job, for as long as the job is present in the system. Thus, unlike many other mean-field models in the literature, the interaction between servers in our model is not restricted to specific epochs, such as arrival instants. Despite this difficulty, we make significant progress towards rigorously justifying the mean-field model. In Section 5, we show that the system is stable under the natural condition, $\lambda < 1$, that the rate at which work arrives into the system is smaller than the maximum rate at which it can be served. This shows the existence of a unique invariant distribution $\pi^n$ for the number of jobs at each server. We further show that the mean number of jobs at a server in equilibrium is bounded uniformly in $n$. This implies that the sequence of distributions $(\pi^n)_{n \in \mathbb{N}}$ is tight, and hence that it has subsequential weak limits. We conjecture that the limit is unique. Next, in Section 6, we show that the system, started empty, converges monotonically (in the stochastic order induced by the componentwise partial order) to its invariant distribution. This result is essential in studying the convergence of mean-field models to their corresponding fixed point. Using this monotonicity result, in Section 7 we show that the speed of convergence of the marginal occupancy distribution to $\pi^n$ is uniform in $n$. This result is a significant step towards showing propagation of chaos, i.e., that any fixed number of queues become asymptotically independent as the system size $n$ tends to infinity. It shows that the marginal occupancy distribution for large $n$ and large $t$ can be close both to the stationary distribution and to the distribution of a cavity process. In Section 9, we discuss in more detail how our theoretical results could be used to prove the convergence to a cavity process. However, it remains an open problem to define the cavity process and to show that for every fixed $t$ the system converges to this cavity process as $n \to \infty$. Our final contribution, discussed in Section 8, is to study a static version of the model, in which $\lambda n$ unit-sized jobs are each replicated on a set of $d$ servers out of $n$, chosen independently and uniformly at random. Here, $\lambda > 0$ is a fixed constant which does not depend on $n$.
We consider the problem of scheduling the servers so as to minimise the makespan, namely, the time until all jobs are completed. We show that with high probability, i.e., with probability tending to 1 as $n$ tends to infinity, the makespan is bounded uniformly in $n$, as a function of $\lambda$. In other words, the jobs can be scheduled in such a way that the time for even the most heavily loaded server to complete its tasks does not grow without bound. This result prompts the question of whether one can find better algorithms in the dynamic setting than that of sharing server capacity equally amongst all jobs present at a server. This is an open problem.
Related work
Load-balancing in large systems has attracted a great deal of research. Here, we selectively discuss works which are related to the model or the analysis techniques studied in this paper. In the context of load balancing in parallel server systems, the SQ(d) algorithm was first analyzed independently in [31,22] as an alternative to the classical Join-the-Shortest-Queue (JSQ) scheme, to reduce the overhead of job dispatching. In the SQ(d) algorithm, an incoming job is sent to the shortest among $d$ queues chosen uniformly at random. Using mean-field techniques, it was shown that in the limit as the number of servers tends to infinity, the marginal queue-length distribution exhibits double exponential decay in the tail for $d \geq 2$; this is in contrast to assigning new jobs uniformly at random, which gives rise to independent M/M/1 queues, which have a geometric queue-length distribution (i.e., the queue length decays exponentially). A closely related algorithm, denoted LL(d), in which jobs join the least loaded of $d$ servers, was analyzed in the heavy-traffic regime in [13]; this algorithm requires knowledge of the workload associated with each job. The analysis was extended to a number of other load-balancing policies in [14], in the heavy-traffic regime. Bramson et al. [6] analyzed the SQ(d) and LL(d) algorithms for general service-time distributions using the cavity method from statistical physics. They established that in the mean-field limit any finite number of queues become independent of each other. The approach adopted in this paper closely follows the general framework introduced in their work. A generalisation of SQ(d) to multi-component jobs is studied in [29]. Here, jobs consist of $k$ components; $d$ servers are sampled, and components are assigned to the least-loaded among these servers so as to balance their loads. This combines elements of parallelism and load-balancing. The above approaches assume knowledge of server loads when assigning jobs. An alternative that has been extensively studied recently involves placing replicas of a job at multiple servers, in the hope that at least one of them will have low load. Suppose each arriving job is replicated at $d$ servers chosen uniformly at random. When one of these servers either starts working on a replica, or completes working on it, then the replicas of the job at all $d-1$ other servers are immediately removed; the former policy is called cancel-on-start (CoS) and the latter cancel-on-complete (CoC). These policies have been studied in [10,9,5], which give exact closed-form expressions for the invariant distribution and mean response time in small systems. These expressions involve a state-space representation whose complexity grows rapidly in the number of servers, and it is not easy to obtain insights into large-system asymptotics.
Stability of replication policies was studied in [25] for the FCFS service discipline, and in [4] for several different service disciplines, including the one considered in this paper. Both the stability and the tail behaviour of such policies under the PS service discipline are analyzed in [26]. An alternative approach to dealing with server loads which are not known in advance is studied in [8]. Here, jobs are assigned to a random server upon arrival, but periodically sample other servers, and migrate if the sampled server is serving fewer jobs. The paper presents a mean-field analysis of the resulting model, as well as a rigorous justification of it. A variant of replication, known as coded computation, exploits the fact that many computationally intensive jobs are intrinsically parallelisable, i.e., they can be split into a large number of tasks which can be executed in parallel. Moreover, redundancy can be added via coding so that the job is complete when sufficiently many tasks have been completed. Concretely, a job may be split into $k$ tasks, and a further $m-k$ tasks created via coding. The resulting $m$ tasks can be executed in parallel, and the job finishes as soon as any $k$ tasks are completed. The latency in such a system is studied in [19,20].
System Model and Notation
The system consists of $n$ servers, each using processor sharing (PS) as the service discipline and each processing work at unit rate. This implies that if there are $k$ ongoing jobs at a given server, then the instantaneous rate at which the server processes each job is $1/k$. Jobs arrive into the system according to a Poisson process of rate $n\lambda$. Each job brings in an amount of work which is exponentially distributed with unit mean, independent of all other jobs and of the arrival process. Upon arrival, a job is assigned to $d$ servers, chosen uniformly at random and independently of the past, and it is processed in parallel by all the servers to which it is assigned. When the total work done on a job by all servers to which it was assigned equals the work brought in by that job, the job leaves the system. This is equivalent to each job having $d$ copies, each of which is sent to a different server. The servers work on the individual copies independently until the total amount of work done on the job by all servers (this is the sum of the work done on the individual copies) equals the job's size. We shall assume that $\lambda < 1$. This condition is necessary for the stability of the system, as it ensures that the rate at which work enters the system is strictly smaller than the maximum rate at which work can be processed by the system. We shall later establish that this condition is also sufficient for stability. We refer to jobs that are processed by a fixed subset of $d$ servers as a class. Clearly, there are $m = \binom{n}{d}$ classes of jobs. Let $X^n_i(t)$ denote the number of jobs of class $i \in [m]$ in the system at time $t \geq 0$, where we write $[m]$ for the set $\{1, \ldots, m\}$. It is easy to see that $X^n(\cdot) = (X^n_i(\cdot), i \in [m])$ is a Markov process on the state space $\mathcal{S}^n = \mathbb{Z}_+^m$. For each class $i \in [m]$, we denote by $\partial i$ the set of $d$ servers that serve class-$i$ jobs and, for each server $j \in [n]$, we denote by $\partial j$ the $\binom{n-1}{d-1}$ classes that are served by server $j$. Let $Q^n_j(t) = \sum_{i \in \partial j} X^n_i(t)$ denote the number of ongoing jobs present at server $j$ at time $t$, and let $Q^n(t) = (Q^n_j(t), j \in [n])$. For simplicity, we refer to $Q^n_j(t)$ as the queue length of server $j$ at time $t$, although there is no queue at the server and all jobs are processed in parallel.
We denote by $\pi^{n,0}(t)$ the queue-length distribution of a tagged server at time $t$, starting from the empty state $X^n(0) = 0$, and we let $\pi^n$ denote the stationary queue-length distribution of a tagged server (when the system is stable). To analyse the mean response time of jobs for large system sizes, we need to characterise the limit of $\pi^n$ as $n$ tends to infinity, and this is the main goal of the paper. The standard technique for establishing the limit of $\pi^n$ involves analysing the empirical queue-length process $y^n(\cdot) = (y^n_k(\cdot), k \in \mathbb{Z}_+)$ defined as
\[ y^n_k(t) = \frac{1}{n}\sum_{j=1}^{n}\mathbf{1}\{Q^n_j(t) = k\} \]
for each time $t \geq 0$. Typically, the limit of $\pi^n$ coincides with $\lim_{t\to\infty}\lim_{n\to\infty} y^n(t)$, which is referred to as the mean-field limit of $\pi^n$. However, it is important to note that for the system described above the queue-length process $Q^n(\cdot)$ is non-Markovian, which implies that the empirical measure process $y^n(\cdot)$ is also non-Markovian. This makes analysing the dynamics of $y^n(\cdot)$ (and therefore computing $\lim_{t\to\infty}\lim_{n\to\infty} y^n(t)$) significantly more challenging than the analyses in [31,22,6], as the standard results for density-dependent Markov processes [18] do not apply. Nevertheless, in the next section, we present a mean-field limit of $\pi^n$ based on a heuristic framework known in statistical physics as the cavity method, and we analytically characterise this mean-field limit.
Mean-field analysis
As noted earlier, the queue-length process $Q^n(\cdot)$ and the empirical queue-length process $y^n(\cdot)$ are non-Markovian for our system. This makes characterising the mean-field limit of $\pi^n$ difficult using standard techniques. In this section, to characterise the limit $\pi = \lim_{n\to\infty}\pi^n$, we take a heuristic approach based on the cavity method used in statistical physics [21]. The cavity method, applied to our model, consists of analysing the invariant distribution of the queue length $Q(t)$ of a single tagged server in the infinite system under the following assumptions: 1. The queues at all servers other than the tagged server are in the steady state (i.e., in the invariant distribution of the process $Q(\cdot)$) and are independent of the queue length of the tagged server. 2. If a job has a copy present in the tagged server, then the other $d-1$ copies are randomly placed in the rest of the system, i.e., any copy present in the rest of the system has equal probability of being a copy of that job. Under the above assumptions, the process $Q(t)$, $t \geq 0$, is Markov and has state-dependent transition rates, which we compute below. The aggregate arrival rate at any given server in the $n$-th system is $\lambda d$, as the arrival rate into the system is $n\lambda$, and each arrival chooses a subset of $d$ servers uniformly at random; there are $m = \binom{n}{d}$ such subsets, of which $\binom{n-1}{d-1}$ contain the tagged server. Since this is true for all $n$, we take this to be the arrival rate at the tagged queue in the infinite system. Thus, the transition rate of $Q(\cdot)$ from $k$ to $k+1$ is $\lambda d$, for any $k \in \mathbb{Z}_+$. Next, for $k \geq 1$, we compute the transition rate from $k$ to $k-1$. Each job in the tagged server is worked on at rate $1/k$; this contributes a total rate of 1 to departures. In addition, each given job is processed at $d-1$ other servers, each of which (by the first assumption) has queue length distributed according to the steady-state distribution $\pi$ of $Q(\cdot)$. Let $Q_{j_1}, \ldots, Q_{j_{d-1}}$ denote the stationary random queue lengths seen by the $d-1$ other copies of a job present at the tagged server. Then, by the second assumption above, the distributions of $Q_{j_1}, \ldots, Q_{j_{d-1}}$
are the same for all jobs whose copy is present at the tagged server. Moreover, since jobs are placed randomly in the system, the probability that a copy ends up in a queue of length $u$ is proportional to $u\pi_u$, where $\pi_u$ is the $u$-th component of the steady-state distribution $\pi$ of $Q(\cdot)$. Hence, we have
\[ \Pr(Q_{j_l} = u) = \frac{u\pi_u}{\mathbb{E}Q}, \qquad u \geq 1, \]
where $\mathbb{E}Q$ denotes the mean queue length in stationarity. Since $\Pr(Q_{j_l} = 0) = 0$, it follows that
\[ \mathbb{E}\Big[\frac{1}{Q_{j_l}}\Big] = \sum_{u\geq1}\frac{1}{u}\cdot\frac{u\pi_u}{\mathbb{E}Q} = \frac{1-\pi_0}{\mathbb{E}Q}. \]
This is the instantaneous rate at which each copy of a job is worked on by servers other than the tagged server; in other words, it is the service received from the mean field. As this is the case for each job, we conclude that the transition rate from $k$ to $k-1$ is given by $1 + (d-1)k\,\frac{1-\pi_0}{\mathbb{E}Q}$. We can separately obtain $\pi_0$ by work conservation. The total rate at which work arrives into the system is $n\lambda$, while the total rate at which work is done is $n(1-\pi_0)$, provided the system is stable and a stationary regime exists. Hence $1 - \pi_0 = \lambda$. We now write down transition rates for the queue-length process $Q(\cdot)$ at the tagged server based on the heuristic arguments presented above. Henceforth, by the mean-field model, we refer to the Markov process $Q(\cdot)$ defined on $\mathbb{Z}_+$ with the following stationary transition rates:
\[ q_{k,k+1} = d\lambda, \quad k \geq 0; \qquad q_{k,k-1} = 1 + k\mu, \quad k \geq 1, \qquad \text{where } \mu = \frac{(d-1)\lambda}{\mathbb{E}Q}. \tag{1} \]
All other transition rates are zero. We recognise this as an M/M/1 queue with reneging, where arrivals occur at rate $d\lambda$, services at rate 1, and each customer reneges independently at rate $\mu$. It is stable for any $\mu > 0$, and the invariant distribution solves the local balance equations, $\pi_i q_{i,i+1} = \pi_{i+1} q_{i+1,i}$, $i \in \mathbb{N}$. Solving this recursion, we obtain
\[ \pi_u = \pi_0 \prod_{v=1}^{u}\frac{d\lambda}{1+v\mu}, \qquad u \geq 0, \tag{2} \]
where we follow the convention that an empty product is equal to 1, and where $\pi_0 = 1-\lambda$ by work conservation. Notice that the invariant distribution $\pi$ is a function of $\mu$, which is defined in terms of $\mathbb{E}Q$. But $\mathbb{E}Q$ is the mean of the invariant distribution $\pi$. Thus, requiring that $\pi$ in (2) be a probability distribution, the mean-field equations yield the following fixed-point equation for the mean queue length $\mathbb{E}Q$:
\[ (1-\lambda)\sum_{u=0}^{\infty}\prod_{v=1}^{u}\frac{d\lambda}{1+v(d-1)\lambda/\mathbb{E}Q} = 1. \tag{3} \]
In the theorem below we show that there exists a unique solution to the above fixed-point equation for $\lambda \in (0,1)$ and characterise the mean of the resulting distribution.
Theorem 1. Consider the mean-field model defined by the transition rates in (1). The following hold: 1. For any $\lambda \in (0,1)$, there is a unique $\mathbb{E}Q > 0$ which solves (3). Moreover, if we set $\mu = (d-1)\lambda/\mathbb{E}Q$, then $\pi$ given in (2) is invariant for the Markov process with transition rates given in (1), and the mean of the distribution $\pi$ is equal to $\mathbb{E}Q$. 2. For any $d \geq 2$, the mean queue length satisfies $\mathbb{E}Q(\lambda,d) = O(\log\frac{1}{1-\lambda})$ as $\lambda \to 1$.
Proof. To prove the first part, we define, for $x > 0$ and $u \in \mathbb{N}$, the functions
\[ \pi_u(x) = (1-\lambda)\prod_{v=1}^{u}\frac{d\lambda}{1+v(d-1)\lambda x}, \qquad \phi(x) = \sum_{u=0}^{\infty}\pi_u(x). \]
Clearly, $x \mapsto \pi_u(x)$ is a strictly decreasing function for each $u \geq 1$, and hence so is $x \mapsto \phi(x)$. It is also not hard to see that $\phi$ is continuous on $(0,\infty)$, and that, by monotone convergence, $\lim_{x\to0}\phi(x) = (1-\lambda)\sum_{u\geq0}(d\lambda)^u$, which exceeds 1 for every $d \geq 2$ (and is infinite when $d\lambda \geq 1$). Since $\pi_u(x)$ tends to zero for each $u \geq 1$ as $x$ tends to infinity, while $\pi_0(x) \equiv 1-\lambda$, it follows by dominated convergence that $\phi(x)$ tends to $1-\lambda$. Since $\phi(\cdot)$ is a strictly decreasing and continuous function whose limit as $x$ tends to zero is bigger than 1 and whose limit as $x$ tends to infinity is smaller than 1, the equation $\phi(x) = 1$ has a unique solution. We denote the solution by $\xi(\lambda,d)$ to make explicit its dependence on the system parameters $\lambda$ and $d$. Finally, the mean queue length is given by $\mathbb{E}Q(\lambda,d) = 1/\xi(\lambda,d)$. To prove the second part of the theorem, observe from (2) that $\pi_u$ decays geometrically once the ratio $d\lambda/(1+u\mu)$ falls below 1, that is, for $u$ beyond the threshold
\[ u^* = \frac{d\lambda-1}{\mu}. \tag{4} \]
Taking logarithms on both sides of (2), re-arranging, and substituting for $u^*$ from (4), one obtains the stated logarithmic bound on $\mathbb{E}Q(\lambda,d)$. This establishes the second part of the theorem.
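The fixed point in Theorem 1 is straightforward to compute numerically by mirroring the proof: $\phi$ is strictly decreasing, so a bisection search finds the root $\xi$ of $\phi(x) = 1$, and $\mathbb{E}Q = 1/\xi$. The Python sketch below is ours, not the authors'; the truncation level K, the bracketing constants and the function name are implementation choices.

```python
import numpy as np

def mean_queue_length(lam, d, K=5000):
    """Solve the fixed-point equation (3) by bisection on phi(x) = 1,
    where pi_u(x) = (1-lam) * prod_{v=1}^{u} d*lam / (1 + v*(d-1)*lam*x)
    and phi(x) = sum_u pi_u(x).  Returns EQ(lam, d) = 1/xi."""
    def phi(x):
        v = np.arange(1, K + 1)
        log_terms = np.log(d * lam) - np.log1p((d - 1) * lam * x * v)
        log_pi = np.concatenate(([0.0], np.cumsum(log_terms)))
        # clip in log space so tiny x cannot overflow; clipping only
        # enlarges phi(x), which the bisection logic tolerates
        return (1.0 - lam) * np.exp(np.clip(log_pi, None, 100.0)).sum()

    lo, hi = 1e-8, 1.0
    while phi(hi) > 1.0:      # grow the bracket until phi(hi) < 1
        hi *= 2.0
    for _ in range(200):      # maintain phi(lo) > 1 > phi(hi)
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if phi(mid) > 1.0 else (lo, mid)
    return 1.0 / (0.5 * (lo + hi))

# Example: mean queue length for d = 2 at lam = 0.9; contrast with
# lam/(1-lam) = 9 for d = 1.
print(mean_queue_length(0.9, 2))
```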
The above theorem establishes the uniqueness of the distribution $\pi$ which satisfies the fixed-point equation (3) and also characterises the mean of this distribution. We conjecture that $\pi$ is indeed the weak limit of $\pi^n$ as $n \to \infty$. In the next section, we show numerical evidence to support this conjecture, and throughout the rest of the paper we make significant progress towards rigorously proving the conjecture. An important observation to make from Theorem 1 is that the mean queue length under $\pi$ (which by Little's law is proportional to the mean response time of jobs under $\pi$) scales as $O(-\log(1-\lambda))$ as $\lambda \to 1$ for any $d \geq 2$. This result is to be contrasted with systems with no parallelism, i.e., where $d = 1$. For $d = 1$, the servers behave as independent M/M/1 PS servers. Therefore, the mean queue length is $\lambda/(1-\lambda)$, which scales as $O(1/(1-\lambda))$ as $\lambda$ approaches 1. Thus, we conclude that in heavy traffic, even a small amount of parallelism (e.g., increasing $d$ from 1 to 2) can result in a significant reduction in the mean response time of jobs.
Transient analysis of $Q(\cdot)$: So far we have analysed the stationary distribution $\pi$ of the cavity process $Q(\cdot)$, which captures the evolution of a single tagged server in the infinite system. Building upon the same ideas, we now obtain the evolution equations for the transient distribution of $Q(\cdot)$, whose ccdf $\bar{y}(t) = (\bar{y}_k(t), k \geq 0)$, $\bar{y}_k(t) = \Pr(Q(t) \geq k)$, we conjecture to be the limit of the empirical queue-length process. Similarly, for the $n$-th system, we define $\bar{y}^n_k(t) = \sum_{l\geq k} y^n_l(t)$ for each $k \geq 1$ and $\bar{y}^n_0(t) = 1$ for each $t \geq 0$, and define the empirical ccdf of the queue lengths at time $t$ to be $\bar{y}^n(t) = (\bar{y}^n_k(t), k \geq 0)$. To analyse the transient distribution of $Q(\cdot)$, we use the same two assumptions as above, along with the additional assumption that, at any time $t \geq 0$, the queues at all servers other than the tagged server have the same distribution as $Q(t)$. Then, using the same line of argument as above, we can write the distribution-dependent transition rates of $Q(\cdot)$ as
\[ q_{k,k+1}(t) = d\lambda, \quad k \geq 0; \qquad q_{k,k-1}(t) = 1 + (d-1)k\,\frac{\bar{y}_1(t)}{\sum_{l\geq1}\bar{y}_l(t)}, \quad k \geq 1, \tag{5} \]
where, for $\bar{y}_1(t) = 0$, the value of $\bar{y}_1(t)/\sum_{l\geq1}\bar{y}_l(t)$ is defined by its limit as $\bar{y}_1(t) \to 0$. All other transition rates are zero. Note that these transition rates are consistent with the stationary transition rates defined in (1). Thus, using Dynkin's formula, it follows that the ccdf $\bar{y}(\cdot) = (\bar{y}_k(\cdot), k \geq 1)$ of $Q(\cdot)$ satisfies the following system of ordinary differential equations for each $k \geq 1$:
\[ \dot{\bar{y}}_k(t) = d\lambda\,\big(\bar{y}_{k-1}(t) - \bar{y}_k(t)\big) - \Big(1 + (d-1)k\,\frac{\bar{y}_1(t)}{\sum_{l\geq1}\bar{y}_l(t)}\Big)\big(\bar{y}_k(t) - \bar{y}_{k+1}(t)\big). \tag{6} \]
We conjecture that the stochastic process $\bar{y}^n(\cdot)$ converges to $\bar{y}(\cdot)$ defined above as $n \to \infty$.
Numerical results
In this section, we compare the performance of systems with different values of $d$, the number of servers over which each job is parallelised, to study the benefits of job parallelism. We also compare simulation results with the results of the mean-field analysis of Section 3. Finally, we provide results for non-exponential service-time distributions. We simulated the system model described in Section 2, with different numbers of servers, $n$, and different values of $\lambda$, the normalised arrival rate. We carried out event-driven simulations, where each event was taken to be an arrival with probability $\lambda/(1+\lambda)$ and a 'virtual service' with the residual probability $1/(1+\lambda)$. Once the type (arrival or virtual service) was allocated to an event, the event was then assigned to a server chosen uniformly at random. If the event type was an arrival, it then chose another $d-1$ servers uniformly at random (without replacement) and joined the queue at all $d$ of these servers.
If the event type was a virtual service, and the queue at the chosen server was empty, nothing happened; otherwise, a job was chosen uniformly at random from amongst those present at the chosen server, and it departed the system. A hundred snapshots of the simulated system were taken at well-separated epochs to ensure that there was very little temporal correlation between snapshots. Queue lengths at all $n$ queues were recorded at these times, and used to compute empirical cdfs and mean values of queue lengths. The mean queue length is a key performance measure, related to the mean response time via Little's law. In Figure 1, we plot the mean queue length as a function of the normalised arrival rate $\lambda$ for a system with $n = 100$ servers, for different values of $d$. Observe that as $d$ increases, the mean queue length decreases. The decrease is most significant for higher values of $\lambda$ (i.e., as the system approaches heavy traffic) and when parallelism increases from $d = 1$ to $d = 2$. This is in accordance with the properties of the invariant distribution derived in Section 3. Indeed, as shown in Theorem 1, for $d \geq 2$, the steady-state mean queue length scales as $O(\log\frac{1}{1-\lambda})$ as $\lambda$ increases to 1. This is a significant decrease compared to the $O(1/(1-\lambda))$ scaling for $d = 1$. Next, we compare our simulation results to those obtained from the mean-field analysis. In Table 1, we compare the average queue length obtained from simulations with the average queue length obtained by solving (3) numerically. We observe a close match between the results for $n = 100$ and $n = 500$. In Figures 2 and 3, we plot the empirical cdf of the queue length as well as the cdf calculated from (2), (3). Plots are shown for two different arrival rates, $\lambda = 0.3$ and $\lambda = 0.9$, and for the number of servers being either 100 or 500. The plots in Figure 2 are for $d = 2$ (i.e., each job is parallelised on 2 servers), while those in Figure 3 are for $d = 4$. For each value of $\lambda$, the figures show that the empirical cdfs for $n = 100$ and $n = 500$ are very close to the theoretical cdf from the mean-field analysis; for $\lambda = 0.3$, they are virtually indistinguishable in both figures. These results provide additional evidence in support of the conjecture that the invariant distribution $\pi^n$ in the system with $n$ servers converges to the solution $\pi$ of the mean-field equations as $n$ tends to infinity. So far we have presented numerical evidence to support the conjecture that $\pi^n$ converges to $\pi$ as $n \to \infty$. We now present simulation results to show that a similar convergence takes place in the transient regime. More specifically, we show that the process $\bar{y}^n(\cdot) = (\bar{y}^n_k(\cdot), k \geq 0)$ converges component-wise to the process $\bar{y}(\cdot) = (\bar{y}_k(\cdot), k \geq 0)$ defined in Section 3 as $n \to \infty$. In Figures 4a and 4b, we compare the first three components of the processes $\bar{y}^n(\cdot)$ and $\bar{y}(\cdot)$ for $n = 1000$ and $n = 50000$, respectively. From the figures, it is clear that as $n$ increases, the process $\bar{y}^n(\cdot)$ concentrates on the mean-field process $\bar{y}(\cdot)$. While the mean-field analysis in the previous section has been carried out under the assumption of exponentially distributed service times, it is well known that the queue-length distribution of the M/G/1-PS queue is insensitive to the service-time distribution and depends on it only through its mean. This has been proved for redundancy models in [17], when the service times have hyperexponential distributions with two components. It is natural to conjecture that insensitivity holds more generally.
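Before presenting the evidence, the following Python sketch condenses the event-driven simulation procedure described above. It is our reconstruction, omitting the warm-up handling and the hundred well-separated snapshots used in the paper; all function and variable names are illustrative.

```python
import random

def simulate_queue_lengths(n, lam, d, num_events=2_000_000, seed=1):
    """Event-driven simulation: each event is an arrival with probability
    lam/(1+lam) and a 'virtual service' otherwise, assigned to a server
    chosen uniformly at random."""
    rng = random.Random(seed)
    queues = [set() for _ in range(n)]  # ids of jobs present at each server
    servers_of = {}                     # job id -> the d servers holding it
    next_id = 0
    p_arrival = lam / (1.0 + lam)
    for _ in range(num_events):
        j = rng.randrange(n)
        if rng.random() < p_arrival:
            # arrival: join server j and d-1 further distinct servers
            chosen = [j] + rng.sample([s for s in range(n) if s != j], d - 1)
            servers_of[next_id] = chosen
            for s in chosen:
                queues[s].add(next_id)
            next_id += 1
        elif queues[j]:
            # virtual service: a uniformly chosen job at server j departs
            # from the system, i.e., from all d servers holding a copy
            job = rng.choice(sorted(queues[j]))
            for s in servers_of.pop(job):
                queues[s].discard(job)
    return [len(q) for q in queues]     # final snapshot of queue lengths

# Example: empirical mean queue length for n = 100, lam = 0.9, d = 2;
# compare with the mean-field value from the fixed-point solver above.
q = simulate_queue_lengths(100, 0.9, 2)
print(sum(q) / len(q))
```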
We now present numerical evidence from simulations that such insensitivity indeed holds. In Figure 5, we have plotted the empirical cdf of the queue-length distribution for Erlang service times with 1, 2, 4 and 8 phases; if there is 1 phase, the service times are exponential. The load is kept constant at 0.8 and each job is assigned to 2 servers. The other simulation parameters are similar to those of the exponential service-time setting. The empirical cdfs are very close to each other, demonstrating that they are insensitive to the service-time distribution provided the mean is the same.
[Figure 5: Queue-length distribution for Erlang service times (Exp, Erlang(2), Erlang(4), Erlang(8)); two servers per job, load = 0.8.]
Similar results are seen in Figure 6, where we compare empirical queue-length cdfs for the Exp(1) service-time distribution with those for two-component hyperexponentials with parameters (2, 1/2) and (4, 1/4); the weights of the components are chosen to ensure the same mean service time of 1. Again, the empirical queue-length distribution is seen to be insensitive to the service-time distribution.
Stability and uniform bounds
In this section, we show that the system defined in Section 2 is stable for all $\lambda < 1$. Moreover, we prove that in the steady state, the expected number of jobs at each server is bounded uniformly in $n$. This implies that the sequence $(\pi^n)_n$ of steady-state distributions of server occupancy is tight in $\mathbb{Z}_+$, which in turn guarantees the existence of subsequential limits. Both these results are essential steps towards establishing the convergence of $\pi^n$ to the mean-field limit. To prove these results, we analyze the drift of an appropriately defined Lyapunov function on the state space of the underlying Markov chain. Recall that $X^n_i(t)$ denotes the number of jobs of class $i \in [m]$ in the system at time $t \geq 0$, and that $Q^n_j(t) = \sum_{i\in\partial j} X^n_i(t)$ denotes the number of ongoing jobs at server $j \in [n]$ at time $t$. Note that the dependence of $Q^n(\cdot)$ on $X^n(\cdot)$ has not been made explicit in the notation. The process $X^n(\cdot) = (X^n_i(\cdot), i \in [m])$ is Markov on the state space $\mathcal{S}^n = \mathbb{Z}_+^m$, with transition rates from any state $x \in \mathcal{S}^n$ given by
\[ x \to x + e_i \ \text{ at rate } r^{i,n}_+(x) = \frac{n\lambda}{m}, \qquad x \to x - e_i \ \text{ at rate } r^{i,n}_-(x) = x_i\sum_{j\in\partial i}\frac{1}{q_j}, \tag{7} \]
for $i \in [m]$, where $q_j = \sum_{i\in\partial j} x_i$ (the dependence of $q_j$ on $x$ is suppressed in the notation), while $e_i$ denotes the $m$-dimensional unit vector whose $i$-th component is one. The generator $G_{X^n}$ of the Markov chain $X^n$ is given by
\[ G_{X^n} f(x) = \sum_{i\in[m]}\Big[ r^{i,n}_+(x)\big(f(x+e_i)-f(x)\big) + r^{i,n}_-(x)\big(f(x-e_i)-f(x)\big)\Big] \tag{8} \]
for $f : \mathcal{S}^n \to \mathbb{R}$ and $x \in \mathcal{S}^n$, where $r^{i,n}_\pm(x)$ are the transition rates from $x$ to $x \pm e_i$ as given by (7). We now establish the following theorem.
Theorem 2. Suppose $\lambda < 1$. Then, for every $n$, the process $X^n(\cdot)$ is positive recurrent. Moreover, in the steady state, $\mathbb{E}[Q^n_j(\infty)] \leq \lambda/(1-\lambda)$ for each $j \in [n]$.
Proof. We prove the theorem by analyzing the drift of the Lyapunov function $\Psi : \mathcal{S}^n \to \mathbb{R}_+$ defined as
\[ \Psi(x) = \sum_{j\in[n]} q_j^2. \tag{9} \]
From (8), and using the fact that $\sum_{j\in\partial i} 1 = d$, we see that the generator $G_{X^n}$ applied to $\Psi$ and evaluated at $x$ is given by
\[ G_{X^n}\Psi(x) = 2\sum_{i\in[m]}\big(r^{i,n}_+(x) - r^{i,n}_-(x)\big)\sum_{j\in\partial i} q_j + d\sum_{i\in[m]}\big(r^{i,n}_+(x) + r^{i,n}_-(x)\big). \tag{10} \]
Note that $\sum_{i\in[m]} r^{i,n}_+(x)$ is just the total arrival rate into the system, which equals $n\lambda$. Similarly, $\sum_{i\in[m]} r^{i,n}_-(x)$ is the total service rate when the system is in state $x$; this is the number of busy servers in state $x$, which we shall denote by $B(x)$. Hence, substituting for the transition rates from (7), we obtain
\[ \sum_{i\in[m]} r^{i,n}_+(x)\sum_{j\in\partial i} q_j = \frac{n\lambda}{m}\binom{n-1}{d-1}\sum_{j\in[n]} q_j = \lambda d\sum_{j\in[n]} q_j, \qquad \sum_{j\in[n]} q_j = d\sum_{i\in[m]} x_i, \tag{11} \]
where the second equality holds because each server serves $\binom{n-1}{d-1}$ classes and $m = \binom{n}{d} = \frac{n}{d}\binom{n-1}{d-1}$; the last equality holds because each job is replicated at $d$ servers, and hence summing the number of jobs at each server gives $d$ times the total number of jobs. Next, by the Cauchy-Schwarz inequality, $\sum_{j\in\partial i} q_j\sum_{j\in\partial i}\frac{1}{q_j} \geq d^2$ whenever $x_i \geq 1$, so that
\[ \sum_{i\in[m]} r^{i,n}_-(x)\sum_{j\in\partial i} q_j = \sum_{i\in[m]} x_i\sum_{j\in\partial i}\frac{1}{q_j}\sum_{j\in\partial i} q_j \geq d^2\sum_{i\in[m]} x_i = d\sum_{j\in[n]} q_j. \tag{12} \]
Substituting (11) and (12) in (10), we get
\[ G_{X^n}\Psi(x) \leq 2d(\lambda-1)\sum_{j\in[n]} q_j + d\big(n\lambda + B(x)\big). \tag{13} \]
As the number of busy servers, $B(x)$, is at most $n$, it follows from (13) that, if $\sum_{j\in[n]} q_j > n(1+\lambda)/(2(1-\lambda))$, then $G_{X^n}\Psi(x) < 0$. Also note that
\[ \sup_{x\in\mathcal{S}^n} G_{X^n}\Psi(x) \leq d\,n(1+\lambda) < \infty. \tag{14} \]
Hence, the drift of $\Psi$ is bounded inside the compact set $\{x \in \mathcal{S}^n : \sum_{j\in[n]} q_j \leq n(1+\lambda)/(2(1-\lambda))\}$ and is negative outside this compact set. Thus, applying the Foster-Lyapunov criterion for stability (see, e.g., Proposition D.3 of [16]), the process $X^n(\cdot)$ is positive recurrent. We now turn to the second statement of the theorem. By Proposition 1 of [11], it follows from (14) that
\[ \mathbb{E}\big[G_{X^n}\Psi(X^n(\infty))\big] \geq 0. \tag{15} \]
Note that we cannot claim that $\mathbb{E}[G_{X^n}\Psi(X^n(\infty))] = 0$, as this requires the stronger condition $\mathbb{E}[\Psi(X^n(\infty))] < \infty$, which is not guaranteed by the mere existence of the stationary distribution of $X^n(\cdot)$. Instead, taking expectations with respect to the stationary distribution in (13), we obtain
\[ 0 \leq \mathbb{E}\big[G_{X^n}\Psi(X^n(\infty))\big] \leq 2d(\lambda-1)\,\mathbb{E}\Big[\sum_{j\in[n]} Q^n_j(\infty)\Big] + d\,n\lambda + d\,\mathbb{E}\big[B(X^n(\infty))\big] = 2d(\lambda-1)\,\mathbb{E}\Big[\sum_{j\in[n]} Q^n_j(\infty)\Big] + 2d\,n\lambda, \tag{16} \]
where the last equality follows from the fact that, by positive recurrence, the steady-state rate of departures from the system, $\mathbb{E}[B(X^n(\infty))]$, must be equal to the rate of arrivals, $n\lambda$. Re-arranging (16), we get
\[ \mathbb{E}\Big[\sum_{j\in[n]} Q^n_j(\infty)\Big] \leq \frac{n\lambda}{1-\lambda}, \tag{17} \]
from which the second statement of the theorem follows by noting that $\mathbb{E}[\sum_{j\in[n]} Q^n_j(\infty)] = n\,\mathbb{E}[Q^n_j(\infty)]$ for each $j \in [n]$, due to the exchangeability of the stationary measure.
Remark 1. Theorem 2 shows that if $\lambda < 1$, then the stationary queue-length distribution exists and is unique. Let $\pi^n$ denote the stationary queue-length distribution of an individual server. From Theorem 2 it follows, by Markov's inequality, that $\sup_n \Pr(Q^n(\infty) > C) \leq \lambda/(C(1-\lambda))$. Hence, by choosing $C$ sufficiently large we can guarantee that $\sup_n \Pr(Q^n(\infty) > C) \leq \epsilon$ for any $\epsilon > 0$. This shows that the sequence $(\pi^n)_n$ of individual stationary queue-length distributions indexed by the system size is tight in $\mathbb{Z}_+$. Hence, by Prohorov's theorem, the sequence $(\pi^n)_n$ has subsequential weak limits. However, to show the convergence of the whole sequence $(\pi^n)_n$ to a unique limit $\pi$, we need to further establish that all subsequential limits coincide.
Remark 2. It follows from (17), combined with Little's law, that the mean response time of jobs $\bar{T}^n_d$ in the $n$-th system is bounded above by $\bar{T}^n_d \leq \frac{1}{d(1-\lambda)}$, since the total number of jobs in the system is $\frac{1}{d}\sum_{j\in[n]} Q^n_j$ and the arrival rate is $n\lambda$. This bound is achieved with equality for $d = 1$. However, for $d \geq 2$ and large $n$, the stationary distribution computed using the mean-field heuristics of Section 3 suggests that the bound could be lowered to $O(\frac{1}{\lambda}\log\frac{1}{1-\lambda})$ for $\lambda$ close to 1.
Remark 3. The upper bound in (17), together with the exchangeability of the classes, also implies that $\mathbb{E}[X^n_i(\infty)] \leq \frac{n\lambda}{d(1-\lambda)\binom{n}{d}}$. Hence, for $2 \leq d \leq n-1$ we have $\mathbb{E}[X^n_i(\infty)] \to 0$ as $n \to \infty$. This implies that $X^n_i \to 0$ in probability for each $i \in [m]$.
Monotonicity
In this section, we show that the chain $X^n(\cdot)$ is monotonic with respect to its starting state. Specifically, we show that if $X^n(\cdot)$ and $\tilde{X}^n(\cdot)$ are two copies of the same chain with initial states satisfying $X^n(0) \leq \tilde{X}^n(0)$, then the sample paths of the chains can be constructed on the same probability space such that $X^n(t) \leq \tilde{X}^n(t)$ for all $t \geq 0$. Here, $X \leq Y$ for $X, Y \in \mathcal{S}^n$ means $X_i \leq Y_i$ for all $i \in [m]$. The main result of this section is as follows.
Theorem 3. Consider two Markov chains $X^n(\cdot)$ and $\tilde{X}^n(\cdot)$, both evolving according to the transition rates given by (7). Let $X^n(0) \leq \tilde{X}^n(0)$. Then, there exists a coupling between the chains such that $X^n(t) \leq \tilde{X}^n(t)$ for all $t \geq 0$. In other words, $X^n(t) \leq_{st} \tilde{X}^n(t)$ for all $t \geq 0$.
Proof. We refer to the systems corresponding to the processes $X^n$ and $\tilde{X}^n$ as the smaller and larger systems, respectively.
We denote the queue lengths in these systems by $Q^n$ and $\tilde{Q}^n$ respectively, and observe that if $X^n_i(t) \leq \tilde{X}^n_i(t)$ for all $i \in [m]$, then $Q^n_j(t) \leq \tilde{Q}^n_j(t)$ for all $j \in [n]$. The coupling is described as follows. Let the current instant be denoted by $t$ and assume that $X^n(t) \leq \tilde{X}^n(t)$. We shall specify a way of generating the next event and the time $s$ of the next event such that the inequality $X^n(s) \leq \tilde{X}^n(s)$ is maintained just after the next event has taken place. An event can be either an arrival or a departure of a class-$i$ job for some $i \in [m]$. For each class $i \in [m]$, we generate the time until the next arrival as an exponential random variable with rate $n\lambda/m$ for both systems. Hence, for both systems, arrivals of jobs of each class occur at the same instants. We now turn to departures. If $X^n_i(t) < \tilde{X}^n_i(t)$ for some $i \in [m]$, then we generate the times until the next class-$i$ departure as independent exponential random variables with rates $r^{i,n}_-(X^n(t))$ and $r^{i,n}_-(\tilde{X}^n(t))$ in the $X^n$ and $\tilde{X}^n$ systems, respectively. Otherwise, if $X^n_i(t) = \tilde{X}^n_i(t)$ for some $i \in [m]$, then observe that
\[ r^{i,n}_-(X^n(t)) \geq r^{i,n}_-(\tilde{X}^n(t)), \]
since the queue lengths in the larger system are at least as large. We first generate the time till the next class-$i$ departure in the larger system as an exponential random variable $\tilde{Z}$ with rate $r^{i,n}_-(\tilde{X}^n(t))$. Then, we generate the time till the next class-$i$ departure in the smaller system as $Z = \min(\tilde{Z}, Y)$, where $Y$ is another independent exponential random variable with rate $r^{i,n}_-(X^n(t)) - r^{i,n}_-(\tilde{X}^n(t))$, which we showed is non-negative. (If an exponential random variable has rate 0, we formally set it to be $+\infty$.) Since $Z \leq \tilde{Z}$, the next class-$i$ departure event occurs no later in the smaller system than in the larger system. Once all the event times have been generated in the way described above, the next event is taken to be the one whose event time is the earliest. Due to the construction above, it is clear that $X^n(s) \leq \tilde{X}^n(s)$ holds just after the next event time $s$. This completes the proof of the theorem.
Corollary 1. Suppose $X^n(0) = 0$. Then $X^n(s) \leq_{st} X^n(t)$ for all $0 \leq s \leq t$.
Proof. We construct two processes $Y^n(\cdot)$ and $\bar{Y}^n(\cdot)$ such that $Y^n(0) = 0 = X^n(0)$ and $\bar{Y}^n(0) = X^n(t-s) \geq Y^n(0)$, and both evolve according to the rates given by (7). According to the coupling constructed in Theorem 3, we have $X^n(s) = Y^n(s) \leq_{st} \bar{Y}^n(s) = X^n(t)$ in distribution.
It is important to note that $X^n(t) \leq \tilde{X}^n(t)$ implies that $Q^n(t) \leq \tilde{Q}^n(t)$, where $Q^n$ and $\tilde{Q}^n$ denote the corresponding queue-length processes. Hence, from Theorem 3 it follows that $Q^n(t) \leq \tilde{Q}^n(t)$ for all $t \geq 0$ if $X^n(0) \leq \tilde{X}^n(0)$. Also, from Corollary 1 it follows that $Q^n(s) \leq_{st} Q^n(t)$ for all $0 \leq s \leq t$ if $Q^n(0) = 0$.
Uniform convergence to stationary distribution
In Section 5, we showed for any fixed $n \in \mathbb{N}$ that the Markov process $X^n(\cdot)$ is positive recurrent under the condition $\lambda < 1$, which we assume throughout. Hence, by irreducibility, $X^n(\cdot)$ has a unique invariant distribution. Consequently, so do the joint and marginal queue-length processes $Q^n(\cdot)$ and $Q^n_j(\cdot)$. Denote by $\pi^n$ the invariant queue-length distribution at the first server, which is the same as at any other server by exchangeability. In the previous section, we showed that, if the Markov process $X^n(\cdot)$ is started in the empty state $X^n(0) \equiv 0$, then $Q^n_1(\cdot)$ converges in distribution monotonically to $\pi^n$. In this section, we strengthen this result by showing that the speed of convergence, made precise below, does not depend on $n$. Our main result in this section is stated below.
Theorem 4. Let $\pi^{n,0}(t)$ denote the queue-length distribution of the first server at time $t \geq 0$, starting from the empty system at $t = 0$ (i.e., $X^n_i(0) = 0$ for all $i \in [m]$). Let $\pi^n$ denote the stationary queue-length distribution of the first server. Then, for each $\epsilon > 0$, there exists $\tau = \tau(\epsilon)$ (not depending on $n$) such that
\[ d_{TV}\big(\pi^{n,0}(t), \pi^n\big) \leq \epsilon \quad \text{for all } t \geq \tau \text{ and all } n, \]
where $d_{TV}(\cdot,\cdot)$ denotes the total variation distance.
Before providing a detailed proof, we outline the approach. We consider two chains $X^n(\cdot)$ and $\tilde{X}^n(\cdot)$, both evolving according to the transition rates given by (7), with $X^n(\cdot)$ started empty and $\tilde{X}^n(\cdot)$ started in the invariant distribution. Clearly, $\tilde{X}^n(t)$ is in the invariant distribution at all times $t$. Denote by $Q^n(\cdot)$ and $\tilde{Q}^n(\cdot)$ the queue lengths induced by $X^n(\cdot)$ and $\tilde{X}^n(\cdot)$, respectively; then, for any $t > 0$ and $j \in [n]$, $Q^n_j(t)$ has distribution $\pi^{n,0}(t)$ while $\tilde{Q}^n_j(t)$ has distribution $\pi^n$. We couple the chains as described in the last section, so that the system started empty is always dominated by the one started in equilibrium. Now,
\[ d_{TV}\big(\pi^{n,0}(t), \pi^n\big) \leq \Pr\big(Q^n_j(t) \neq \tilde{Q}^n_j(t)\big) \leq \mathbb{E}\big[|\tilde{Q}^n_j(t) - Q^n_j(t)|\big], \]
where the first inequality holds for any coupling, and the second inequality holds because queue lengths are whole numbers and differ by at least one when they differ. We shall bound the last quantity as a function of $t$. In order to do so, we define a distance $W : \mathcal{S}^n \times \mathcal{S}^n \to \mathbb{R}_+$ as follows:
\[ W(x^n, \tilde{x}^n) = \frac{1}{n}\sum_{j=1}^{n}|q^n_j - \tilde{q}^n_j|, \]
where $q^n$ and $\tilde{q}^n$ are the queue lengths corresponding to the configurations $x^n$ and $\tilde{x}^n$, respectively. Note that $W$ is not a metric on $\mathcal{S}^n \times \mathcal{S}^n$, because it may be zero for two different configurations which induce the same queue lengths. However, it is a metric when restricted to the subset $\{(x^n, \tilde{x}^n) \in \mathcal{S}^n \times \mathcal{S}^n : x^n \leq \tilde{x}^n\}$, which is the subset on which we work. We have the following lemmas.
Lemma 1. Consider two Markov chains $X^n(\cdot)$ and $\tilde{X}^n(\cdot)$, both evolving according to the transition rates given by (7), with $X^n(0) \leq \tilde{X}^n(0)$. Then, for any $0 \leq u \leq v$, we have
\[ \mathbb{E}\big[W(X^n(v), \tilde{X}^n(v))\big] = \mathbb{E}\big[W(X^n(u), \tilde{X}^n(u))\big] - \frac{d}{n}\sum_{j=1}^{n}\int_u^v \Pr\big(\tilde{Q}^n_j(s) > Q^n_j(s) = 0\big)\,ds. \]
Proof. We couple the Markov chains $X^n(\cdot)$ and $\tilde{X}^n(\cdot)$ as described in Theorem 3. Under this coupling we have $\tilde{Q}^n_j(t) \geq Q^n_j(t)$ for all $t \geq 0$. Hence, for all $t \geq 0$, we can drop the absolute values and write $W(X^n(t), \tilde{X}^n(t)) = \frac{1}{n}\sum_{j=1}^{n}(\tilde{Q}^n_j(t) - Q^n_j(t))$. Since $\tilde{X}^n(\cdot)$ is positive recurrent, we further note that $\mathbb{E}[W(X^n(t), \tilde{X}^n(t))] \leq \frac{1}{n}\sum_{j\in[n]}\mathbb{E}[\tilde{Q}^n_j(t)] < \infty$. Let $G_{X^n,\tilde{X}^n}$ denote the generator of the coupled process $(X^n(\cdot), \tilde{X}^n(\cdot))$. Applied to $W$, it can be written in terms of the generators of the individual processes as
\[ G_{X^n,\tilde{X}^n} W\big(X^n(t), \tilde{X}^n(t)\big) = \frac{d}{n}\big(B(X^n(t)) - B(\tilde{X}^n(t))\big) = -\frac{d}{n}\sum_{j=1}^{n}\mathbf{1}\big\{\tilde{Q}^n_j(t) > Q^n_j(t) = 0\big\}, \]
where the last equality follows easily from (8) by noting that $G_{X^n}\frac{1}{n}\sum_{j\in[n]} Q^n_j(t) = d\lambda - \frac{d}{n}B(X^n(t))$, and that, under the coupling, a server can be busy in the smaller system only if it is busy in the larger one. Under the coupling we have $B(X^n) \leq B(\tilde{X}^n) \leq n$. Hence, $\mathbb{E}[|G_{X^n,\tilde{X}^n}W(X^n(t),\tilde{X}^n(t))|] \leq d$ for all $t \geq 0$. Hence, it follows that $W(X^n(t),\tilde{X}^n(t)) - \int_0^t G_{X^n,\tilde{X}^n}W(X^n(s),\tilde{X}^n(s))\,ds$ is a martingale with respect to the natural filtration of the coupled Markov chain $(X^n(\cdot), \tilde{X}^n(\cdot))$. We can therefore apply Dynkin's formula to conclude, for any $0 \leq u \leq v$,
\[ \mathbb{E}\big[W(X^n(v),\tilde{X}^n(v))\big] - \mathbb{E}\big[W(X^n(u),\tilde{X}^n(u))\big] = -\frac{d}{n}\sum_{j=1}^{n}\int_u^v\big(\Pr(\tilde{Q}^n_j(s) > 0) - \Pr(Q^n_j(s) > 0)\big)\,ds, \]
and the last expression equals $-\frac{d}{n}\sum_j\int_u^v \Pr(\tilde{Q}^n_j(s) > Q^n_j(s) = 0)\,ds$, since under the coupling described in Theorem 3 we have $\Pr(Q^n_j(s) > \tilde{Q}^n_j(s) = 0) = 0$.
Lemma 2. For each $M_1 > 0$, there exist $M_2 > 0$ and $\gamma > 0$, depending only on $M_1$, $d$ and $\lambda$, such that, for all $n$, $j \in [n]$ and $u \geq 0$,
\[ \int_u^{u+M_2}\Pr\big(\tilde{Q}^n_j(s) > Q^n_j(s) = 0\big)\,ds \geq \gamma\,\Pr\big(\tilde{Q}^n_j(u) > Q^n_j(u),\, Q^n_j(u) \leq M_1\big). \tag{18} \]
Proof. Conditioning on the state at time $u$, it suffices to show that for each $M_1 > 0$ and $j \in [n]$ there exist $M_2$ and $\gamma > 0$ such that
\[ \int_u^{u+M_2}\Pr\big(\tilde{Q}^n_j(s) > Q^n_j(s) = 0 \,\big|\, \tilde{Q}^n_j(u) > Q^n_j(u),\, Q^n_j(u) \leq M_1\big)\,ds \geq \gamma. \]
We couple the two systems $\tilde{X}^n(\cdot)$ and $X^n(\cdot)$ as described in Theorem 3. Let $\tilde{Q}^n_j(u) > Q^n_j(u)$ and $Q^n_j(u) \leq M_1$.
We choose $M_2 = 4M_1$ and consider the event $A$ on which (i) no arrivals occur at queue $j$ in either of the two systems $\tilde{X}^n(\cdot)$ and $X^n(\cdot)$ in the interval $[u, u+M_2]$, (ii) all the original jobs present in queue $j$ of the system $X^n(\cdot)$ at time $u$ leave the system by time $u + M_2/2$, and (iii) at least one of the original jobs in queue $j$ of the system $\tilde{X}^n$ remains in the system at time $u + M_2$. The first two events are independent of each other under the coupling described in Theorem 3. The probability of the first event is $e^{-d\lambda M_2} = e^{-4d\lambda M_1}$. The probability of the second event is at least $\frac{1}{2}$. To see this, let $S_i$ be the service time of the $i$-th original job in queue $j$ of the smaller system. The total amount of work done by the $j$-th server in time $M_2/2$ is $M_2/2$, so if some original job of the smaller system remains in queue $j$ at time $u + M_2/2$, then $\sum_i S_i > M_2/2$. The probability of this event satisfies
\[ \Pr\Big(\sum_i S_i > M_2/2\Big) \leq \frac{\mathbb{E}[\sum_i S_i]}{M_2/2} \leq \frac{M_1}{2M_1} = \frac{1}{2}. \]
The third event contains the event that one of the original jobs in the larger system at queue $j$ has a service time of at least $dM_2$, which has probability $e^{-dM_2} = e^{-4dM_1}$ and is independent of the first two events. On the event $A$, we have $\tilde{Q}^n_j(s) > Q^n_j(s) = 0$ throughout the interval $[u+M_2/2, u+M_2]$. Thus, we have
\[ \int_u^{u+M_2}\Pr\big(\tilde{Q}^n_j(s) > Q^n_j(s) = 0 \,\big|\, \tilde{Q}^n_j(u) > Q^n_j(u),\, Q^n_j(u) \leq M_1\big)\,ds \geq \frac{M_2}{2}\Pr(A) \geq 2M_1\cdot e^{-4d\lambda M_1}\cdot\frac{1}{2}\cdot e^{-4dM_1} = M_1 e^{-4dM_1(1+\lambda)}. \]
This shows that (18) holds with $\gamma = M_1 e^{-4dM_1(1+\lambda)}$ and $M_2 = 4M_1$.
Proof of Theorem 4: We prove Theorem 4 using the lemmas stated above. Let $\tilde{X}^n(\cdot)$ and $X^n(\cdot)$ be two Markov chains, both evolving according to the transition rates given by (7). Let $X^n(0) = 0$ and let $\tilde{X}^n(0)$ be distributed according to the stationary distribution of $X^n(\cdot)$. We couple the two chains according to the coupling described in Theorem 3. By the coupling lemma we have that $d_{TV}(\pi^{n,0}(t), \pi^n) \leq \Pr(Q^n_1(t) \neq \tilde{Q}^n_1(t))$, where $\tilde{Q}^n(\cdot)$ and $Q^n(\cdot)$ are the queue-length processes corresponding to the two chains $\tilde{X}^n(\cdot)$ and $X^n(\cdot)$, respectively. To prove Theorem 4, we shall show that for each $\epsilon > 0$ there exists $\tau(\epsilon)$ such that for some $u \in (0, \tau(\epsilon)]$ we have
\[ \frac{1}{n}\sum_{j=1}^{n}\Pr\big(\tilde{Q}^n_j(u) \neq Q^n_j(u)\big) \leq \epsilon. \tag{19} \]
This will imply the statement of Theorem 4, since for any $t \geq \tau(\epsilon)$, the coupling described in Theorem 3, the monotonicity of $Q^n(\cdot)$ provided by Corollary 1, and the stationarity of $\tilde{X}^n(\cdot)$ together yield $\Pr(Q^n_1(t) \neq \tilde{Q}^n_1(t)) \leq \frac{1}{n}\sum_j \Pr(\tilde{Q}^n_j(u) \neq Q^n_j(u))$. We now proceed to establish (19). We observe the following:
\[ \frac{1}{n}\sum_{j=1}^{n}\Pr\big(\tilde{Q}^n_j(u) \neq Q^n_j(u)\big) = \Pr\big(\tilde{Q}^n_1(u) \neq Q^n_1(u)\big) \leq \Pr\big(\tilde{Q}^n_1(u) \neq Q^n_1(u),\, Q^n_1(u) \leq M_1\big) + \Pr\big(\tilde{Q}^n_1(u) > M_1\big) \leq \Pr\big(\tilde{Q}^n_1(u) \neq Q^n_1(u),\, Q^n_1(u) \leq M_1\big) + \frac{\lambda}{M_1(1-\lambda)}, \tag{20} \]
where the first equality follows from the exchangeability of the queues in both systems; the first inequality uses $Q^n_1(u) \leq \tilde{Q}^n_1(u)$, which holds by Theorem 3; and the last inequality follows from Theorem 2 together with Markov's inequality. We now show that each term on the RHS of (20) can be bounded above by $\epsilon/2$ by appropriately choosing $M_1(\epsilon)$ and $\tau(\epsilon)$. Clearly, the second term on the RHS of (20) can be bounded by $\epsilon/2$ for an appropriate choice of $M_1 = M_1(\epsilon)$. Thus, it remains to show that the first term is also bounded above by $\epsilon/2$ for this choice of $M_1$, some $\tau(\epsilon)$ and some $u \leq \tau(\epsilon)$. Suppose this is not true; in particular, suppose that
\[ \frac{1}{n}\sum_{j=1}^{n}\Pr\big(\tilde{Q}^n_j(u) > Q^n_j(u),\, Q^n_j(u) \leq M_1\big) > \frac{\epsilon}{2} \quad \text{for all } u \in \{0, M_2, \ldots, KM_2\}, \]
where $M_2 = M_2(M_1)$ is as in Lemma 2 and $K > 0$ is some sufficiently large number to be chosen later. Then, applying Lemma 1 together with (18), and summing over all $u \in \{0, M_2, \ldots, KM_2\}$, we obtain
\[ \mathbb{E}\big[W\big(X^n(M_2(K+1)), \tilde{X}^n(M_2(K+1))\big)\big] \leq \mathbb{E}\big[W\big(X^n(0), \tilde{X}^n(0)\big)\big] - (K+1)\frac{d\gamma\epsilon}{2}, \]
where $\gamma = \gamma(M_1)$ is as defined in Lemma 2. Thus, we have
\[ \mathbb{E}\big[W\big(X^n(M_2(K+1)), \tilde{X}^n(M_2(K+1))\big)\big] \leq \frac{\lambda}{1-\lambda} - (K+1)\frac{d\gamma\epsilon}{2}, \]
where the last inequality follows from Theorem 2, since $\mathbb{E}[W(X^n(0), \tilde{X}^n(0))] = \frac{1}{n}\mathbb{E}[\sum_j \tilde{Q}^n_j(0)] \leq \frac{\lambda}{1-\lambda}$. The RHS of the above is negative for $K = 2\lambda/(\gamma d \epsilon (1-\lambda))$.
This clearly is a contradiction, since by Theorem 3 we must have $\mathbb{E}[W(X^n(M_2(K+1)), \tilde{X}^n(M_2(K+1)))] \geq 0$. Hence, for some $u \in \{0, M_2, \ldots, KM_2\}$ we must have
\[ \frac{1}{n}\sum_{j=1}^{n}\Pr\big(\tilde{Q}^n_j(u) > Q^n_j(u),\, Q^n_j(u) \leq M_1\big) \leq \frac{\epsilon}{2}, \]
which, together with (20), establishes (19) with $\tau(\epsilon) = (K+1)M_2$.
Static version
We have so far considered systems in which jobs are replicated at $d$ servers chosen uniformly at random, wherein servers adopt a processor-sharing policy and allocate their effort equally to all customers in their queue. This leaves open the question of whether weighted processor-sharing schemes, either centralised or distributed, could achieve better performance. Motivated by this question, we now consider a static version of the problem. Consider a system of $n$ servers and $\lambda n$ jobs, where $\lambda > 0$ is a given constant. Each job is replicated across a set of $d$ servers, chosen independently and uniformly at random from all subsets of size $d$. All jobs are of unit size, and all servers work at unit rate. If we identify servers with vertices and jobs with hyperedges, then the allocation of jobs to servers yields a random $d$-uniform hypergraph (each hyperedge contains exactly $d$ vertices) on $n$ vertices, with $\lambda n$ hyperedges. We now consider the makespan minimisation problem induced by this hypergraph, as defined below. Consider the $d$-uniform hypergraph (more precisely, a multigraph) $H$ with vertex set $V$ and hyperedge set $E$; each $v \in V$ denotes a server and each $e \in E$ denotes a job. Denote by $x(v,e)$ the fraction of its capacity that server $v$ devotes to job $e$. The makespan minimisation problem is the following program (a linear program after a standard reformulation):
\[ \text{minimise } \max_{e\in E}\Big(\sum_{v\in e} x(v,e)\Big)^{-1} \quad \text{subject to } \sum_{e\,:\,v\in e} x(v,e) \leq 1 \ \ \forall v \in V, \qquad x(v,e) \geq 0. \tag{21} \]
In the remainder of this section, we derive bounds on the value of this random program by relating it to the $k$-core problem on random hypergraphs. This same model was proposed in [12], where the distribution of the load at a typical server under the optimal assignment was studied. The analysis in [12] was non-rigorous, but was made rigorous for the $d = 2$ case in [3]; they also gave an exact expression for the maximum load, which is the value of the optimisation problem in (21), in terms of a fixed-point problem. Here we consider general $d$, but only obtain bounds on the optimum value.
Definition. The $k$-core of a hypergraph is the largest vertex-induced subgraph in which the degree of each vertex is at least $k$. The $k$-core can be obtained by recursively deleting all vertices whose degree is smaller than $k$ together with the hyperedges which are incident on them. Note that the $k$-core could be empty. As a simple example, the 1-core of a tree is the tree itself, while its 2-core is empty.
Lemma 3. Let $k^*$ be such that the $k^*$-core of $H$ is non-empty while its $(k^*+1)$-core is empty. Then the minimum makespan lies between $k^*/d$ and $k^*$.
Proof. The $(k^*+1)$-core of $H$ is empty, i.e., recursively eliminating vertices of degree at most $k^*$ eliminates all vertices. For a vertex $v$, let $E_v$ denote the set of hyperedges which are incident on $v$ at the step at which $v$ is eliminated. Then $|E_v| \leq k^*$. Set $x(v,e) = 1/k^*$ if $e \in E_v$ and set $x(v,e) = 0$ otherwise. In other words, assign a fraction $1/k^*$ of the capacity of a vertex to each hyperedge which is incident upon it at the step at which the vertex is eliminated by the core-finding algorithm. Any residual capacity of that vertex is left unassigned (wasted). As there are at most $k^*$ incident hyperedges when the vertex is eliminated, the capacity constraint $\sum_{e\,:\,v\in e} x(v,e) \leq 1$ is satisfied. Note that this assignment allocates capacity at least $1/k^*$ to each job or hyperedge, since all hyperedges are eventually eliminated. (If multiple vertices contained in a hyperedge are eliminated in the same step, the hyperedge receives an integer multiple of $1/k^*$.) Hence, the makespan is at most $k^*$, which proves the upper bound in the theorem.
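Before completing the proof with the lower bound, here is a short Python sketch of the peeling procedure just described: it finds the $k$-core by repeated deletion of low-degree vertices and returns the resulting makespan upper bound $k^*$. The code and the example parameter values are our own illustration, not from the paper.

```python
import random
from collections import defaultdict

def k_core_vertices(n, edges, k):
    """Vertices of the k-core, found by recursively deleting vertices of
    degree < k together with their incident hyperedges.  `edges` is a
    list of d-element vertex lists."""
    incident = defaultdict(set)
    for e_id, e in enumerate(edges):
        for v in e:
            incident[v].add(e_id)
    removed = set()
    stack = [v for v in range(n) if len(incident[v]) < k]
    while stack:
        v = stack.pop()
        if v in removed:
            continue
        removed.add(v)
        for e_id in list(incident[v]):   # delete v's surviving hyperedges
            for u in edges[e_id]:
                incident[u].discard(e_id)
                if u not in removed and len(incident[u]) < k:
                    stack.append(u)
    return [v for v in range(n) if v not in removed]

def makespan_upper_bound(n, edges):
    """Smallest k* whose (k*+1)-core is empty; by the argument above the
    optimal makespan is at most k* (and at least k*/d by Lemma 3)."""
    k = 1
    while k_core_vertices(n, edges, k + 1):
        k += 1
    return k

# Example with hypothetical parameters: lam*n unit jobs, each replicated
# on d servers chosen uniformly at random.
n, d, lam = 1000, 3, 1.5
rng = random.Random(0)
edges = [rng.sample(range(n), d) for _ in range(int(lam * n))]
print(makespan_upper_bound(n, edges))
```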
Next, let $H'$ denote the $k^*$-core of $H$, which is non-empty. Denote by $|V(H')|$ and $|E(H')|$ the total number of vertices and hyperedges in $H'$. We have
\[ k^*\,|V(H')| \leq \sum_{v\in V(H')}\deg_{H'}(v) = d\,|E(H')|, \tag{22} \]
where $\deg_{H'}$ denotes the degree in the subgraph $H'$. Now, the total work corresponding to the jobs or hyperedges in $H'$ is $|E(H')|$. As these hyperedges are only incident to vertices in $H'$ (or else they would have been eliminated), they can only be worked on by a total of $|V(H')|$ vertices. Thus, the makespan is at least $|E(H')|/|V(H')|$. The lower bound in the theorem is now immediate from (22). We now use the following result about $k$-cores of random hypergraphs to obtain asymptotic bounds on the makespan which hold with high probability (w.h.p.), i.e., with probability tending to 1 as $n$ tends to infinity. For $\mu > 0$, let $Z_\mu$ denote a Poisson random variable with mean $\mu$. Denote by $H(n,p;d)$ the random $d$-uniform hypergraph on $n$ vertices, where each hyperedge on $d$ vertices is present with probability $p$, independently of all others. We have the following:
Theorem 5. Fix natural numbers $d$ and $k$ larger than or equal to 2. Define the threshold $\gamma_k(d)$ in terms of the Poisson variable $Z_\mu$ as in [23], and notice that it is finite, strictly positive, and increasing in $k$. Suppose $k$ and $d$ are not both equal to 2. Then $H(n, \mu/n^{d-1}; d)$ contains a non-empty $k$-core w.h.p. if $\mu > \gamma_k(d)$, whereas its $k$-core is empty w.h.p. if $\mu < \gamma_k(d)$.
In the special case $d = k = 2$, we have $\gamma_2(2) = 1$. It is well known that the random graph $G(n, \mu/n)$ has a non-empty 2-core (i.e., contains a cycle) w.h.p. if $\mu > 1$, whereas, for any $\mu > 0$, the probability that $G(n, \mu/n)$ contains a cycle is bounded away from zero, uniformly in $n$. The theorem was proved for random graphs (i.e., when $d = 2$) by [24] and extended to random uniform hypergraphs by [23]. It was also noted that, by contiguity, the results extend to the random (hyper)graph model parametrised by the number of (hyper)edges rather than their probability, i.e., to the $G(n,m)$ and $H(n,m;d)$ models with $m = \binom{n}{d}p$. The following result is thus a corollary of Lemma 3 and Theorem 5.
Corollary 2. Fix $d \geq 2$ and consider a system with $n$ servers and $\lambda n$ jobs, where each job is replicated on a subset of $d$ servers chosen uniformly at random and independently of the other jobs. If $d = 2$ and $\lambda < 1/2$, then the makespan is bounded above by 1. Otherwise, if $\gamma_k(d) < \lambda \cdot d! < \gamma_{k+1}(d)$, then, w.h.p., the $k$-core is non-empty and the $(k+1)$-core is empty, and the makespan is bounded between $k/d$ and $k$.
Conclusion and Discussions
We have studied an idealised model of job parallelism where each job receives service from $d \geq 2$ processor-sharing servers simultaneously. Using a mean-field model, we have studied the average delay experienced by jobs, in the limit as the number of servers tends to infinity. Our results show that a significant reduction in the average delay can be obtained near the heavy-traffic limit. In particular, the average delay scales as $O(\frac{1}{\lambda}\log\frac{1}{1-\lambda})$ as $\lambda \to 1$ for $d \geq 2$. This is a significant reduction compared to the $d = 1$ case, where the delay is known to be $1/(1-\lambda)$. Numerical results show that the proposed mean-field approximation is accurate even for moderately large system sizes. We make significant progress towards rigorously establishing the results obtained from our mean-field model. In particular, we show that the system is stable when the normalised arrival rate $\lambda$ is below the normalised system capacity. We also show that the individual queue lengths are uniformly bounded for all system sizes.
This proves the existence of subsequential limits for the sequence of stationary queue-length distributions indexed by the system size. In addition, we establish an important monotonicity property required to study the speed of convergence to the stationary distribution. Using the uniform bounds on the queue lengths and the monotonicity property, we show that the rate at which the individual queue-length distribution converges to the corresponding stationary distribution does not depend on the system size $n$. We have not been able to show that the sequence of stationary queue-length distributions $(\pi^n)_n$ converges, as $n$ tends to infinity, to the conjectured distribution $\pi$ derived in Section 3. We now outline some ideas as to how this might be proved, building upon the results that we have established. The key idea is to bound the distance between $\pi^n$ and $\pi$ using the triangle inequality as follows:
\[ d_{TV}(\pi^n, \pi) \leq d_{TV}\big(\pi^n, \pi^{n,0}(t)\big) + d_{TV}\big(\pi^{n,0}(t), \pi^0(t)\big) + d_{TV}\big(\pi^0(t), \pi\big), \tag{23} \]
where $\pi^0(t) = (\pi^0_k(t), k \in \mathbb{Z}_+)$ denotes the distribution of the queue-length process $Q(\cdot)$ of the tagged server at time $t$, starting from $Q(0) = 0$. Recall from Section 3 that the transient ccdf of $Q(\cdot)$ satisfies the mean-field equations (6). Hence, we can obtain $\pi^0(t)$ from the solution $\bar{y}^0(t)$ of the mean-field equations starting from the initial ccdf $\bar{y}(0) = (1, 0, 0, \ldots)$, which corresponds to the empty queue. If this solution exists and is unique, then we can set $\pi^0_k(t) = \bar{y}_k(t) - \bar{y}_{k+1}(t)$ for each $k \geq 0$. We note that, by Theorem 4, the first term on the RHS of (23) can be bounded by any $\epsilon > 0$ for sufficiently large $t$ (independent of $n$). Now, if each of the other two terms on the RHS can be bounded by any $\epsilon > 0$ for sufficiently large $n$ and $t$, then we can conclude that $d_{TV}(\pi^n, \pi) \to 0$ as $n \to \infty$. But for this to be true we need to show that (i) for each fixed $t$, the distribution $\pi^{n,0}(t)$ converges to $\pi^0(t)$ as $n \to \infty$, i.e., the system started from the empty state converges to the mean field, and (ii) the transient queue-length distribution $\pi^0(t)$ converges to the invariant distribution $\pi$ of the mean-field model as $t \to \infty$. Establishing these two properties seems difficult, since the queue-length process $Q^n(\cdot)$ is non-Markovian for finite $n$. We leave these as future directions to explore. Finally, there are other interesting directions to explore as well. For example, the case where $d$ varies with $n$ has not been studied in this paper. We believe that our mean-field model could be extended to the case where $d$ increases sufficiently slowly with $n$. Letting $d \to \infty$ in the mean-field model yields an average queue length of $\log(1/(1-\lambda))$. Our numerical experiments suggest that the average queue length is close to this value when $d$ is scaled as $d = \Theta(\log n)$. It would be interesting to prove this rigorously. Another possible extension of our model is the case where the underlying graph is not complete. The key challenge in this context would be to find the conditions under which the mean-field results still hold.
2022-03-17T01:16:23.708Z
2022-03-16T00:00:00.000
{ "year": 2022, "sha1": "d1991bec9b64a56de708598515982e3ae9acc20f", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "d1991bec9b64a56de708598515982e3ae9acc20f", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Mathematics", "Computer Science" ] }
54614780
pes2o/s2orc
v3-fos-license
The Effectiveness of Programmed Education in Developing Writing Skills of Children with Learning Difficulties in Primary Education: A Case Study of Northern Border Areas of Saudi Arabia

The present research aimed at recognizing the effect of Programmed Education (PE, hereafter) in developing writing skills for children with learning difficulties in primary education in Saudi Arabia. This empirical study is a case study of primary-level students from the learning difficulties centers in KSA. The study sample was chosen randomly and consists of two groups, a controlled group (50 students) and an experimental group (50 students), to which the study instruments were applied after checking their validity and reliability. The results of the study show that there is a difference of statistical significance at (α = 0.05) between the controlled and experimental groups related to the effect of programmed education in developing writing skills of children with learning difficulties in primary education in the Northern Border area of the Kingdom of Saudi Arabia.

Introduction

The twenty-first century is witnessing fast and immense changes which have a clear impact on social, economic, educational, cultural, scientific, and technological life systems. In the face of these immense transformations, a mutual response between these systems is a must, in order to benefit from them for the purpose of promoting and advancing life. The educational system's response is of central importance among these responses, for it represents the comprehensive entry point for uplifting the other systems inside the social entity. In order to make education contemporary and capable of meeting these transformations, it is necessary to address technological issues in shaping its goals. It is also necessary to develop various innovative methods and means so that they can prove to be able instruments to perform their functions in this changing world (Al Teeti et al., 2008).

Educational technology theories took root in old and new education principles, from Pronze and James and their theories of knowledge structure (Anderson, 2004). Crowder (1960) developed what is called Branched Programmed Education, in which programmed education is branched into channels, furnishing learners with the correct knowledge for each wrong answer during the learning process. This step lent programmed education flexibility in usage, richness in knowledge, and individuality in education, i.e., the learner depends on himself during the learning process (Ornestein, 2011). Kinzie (2002) sees the need to prepare students for computerized educational technology. Computerized educational programs are found to be highly effective in enhancing learners' competence in educational subjects. Richard (1997) understands that directing students toward using computer programs and their applications, and realizing their advantages, is a necessity because of the benefit their use brings to potential learners. The interaction that takes place among pupils through these computerized programs during their learning experience generates a positive disposition in the pupils, which increases their drive to learn. ... factor of human production. Its use is not monopolized and has rather been widely expanded to all areas of research and applications in human lives (Kensarah, 2009).
Significance of the Research

Special education literature describes learning disabilities as a puzzling, hidden deficiency: children suffering from these difficulties hide their weak sides in their performance. Children tell wonderful stories despite not being able to write. They can succeed in performing very complicated skills, although they may fail in following simple instructions. They may seem completely normal and intelligent, and not only in their appearance; nothing reveals that they are different from other, ordinary children, yet they suffer immense difficulties in learning some skills in school. Some of them cannot learn to read, some are incapable of uttering some letters as a result of defects, and some are incapable of learning writing. The theoretical and practical significance of the current study lies in the following:

a. Verifying the effectiveness of programmed education in developing writing skills for children with learning difficulties in the primary stage in the Northern Border District of the Kingdom of Saudi Arabia.
b. There is a lack of studies in Arabic regarding programmed education in developing writing skills for children with learning difficulties in the primary stage in the Northern Border District of the Kingdom of Saudi Arabia.
c. Children with learning difficulties need to live independently in fulfilling their needs, integrate into society, interact with their peers, and conform socially with others through developing their writing skills and overcoming their writing problems.

Research Problem

Talking about learning difficulties is not easy, since it is a new term that needs a refined definition because it is associated with other categories of special needs. Sometimes it is confused with education for the mentally retarded, for those with language and speech disorders, or with behavioral disorders. These difficulties are not uniform as far as their symptoms are concerned, as learning difficulties in one individual are not necessarily the same as in others (Qahtan, 2008). Hence the present study aims to address the following research questions:

a. Is there an effect of programmed education on developing writing skills (putting letters together to form words) for children with learning difficulties in the primary stage in the Northern Borders District in the Kingdom of Saudi Arabia?
b. Is there an effect of programmed education on developing writing skills (segmenting words into letters) for children with learning difficulties in the primary stage in the Northern Borders District in the Kingdom of Saudi Arabia?

Research Hypotheses

a. There are differences between the controlled and experimental groups in the effect of programmed education on developing writing skills (putting letters together to form words) for children with learning difficulties in the primary level.
b. There are differences between the controlled and experimental groups in the effect of programmed education on developing writing skills (segmenting words into letters) for children with learning difficulties in the primary level.
Programmed education is a form of self-education in which the learner seeks to control the learning process, thereby controlling the fields of educational experience and determining their sequence skillfully and accurately. It helps students educate themselves, discover their mistakes, and correct them until learning is accomplished and an appropriate level of performance is reached. The student goes through these steps before passing another test after the completion of the program, so that he can find out how far he has achieved the objectives of the lesson and, in particular, the level of his performance (2004, p. 35). Rousan (2001) defined learning difficulties in terms of the growth of mental abilities in an irregular manner; the definition also focuses on aspects of the academic deficits of the child, which represent the inability to learn language skills (reading, writing and spelling) that do not stem from mental or sensory causes, and finally it focuses on the contrast between the academic achievement and the mental ability of the individual.

The procedural definition of learning difficulties in this study refers to students aged between 7 and 10 years who attend primary school. It has been noticed that they have a disorder in writing, but the difficulty is not due to a sensory impairment, mental disorders, or environmental problems.

Literature Review

Writing is a human heritage bestowed upon human beings, marking their humanity throughout history and civilization. Since early times, people have taught their children reading and writing. Writing includes the sub-activities of handwriting, spelling, and linguistic expression. Writing plays a major role in study activities and the tasks of practical life. Difficulty in learning writing is a common problem among children at the primary level and can continue when they are older; therefore, the problem should be recognized at the start, before it becomes serious, by assessing the factors causing the difficulty. The term "learning difficulties" was first used by Kirk in the early sixties of the last century to distinguish between mental retardation, slow learning, and the educational difficulties that some students suffer from as a result of internal or developmental factors: despite having normal intelligence, such a student cannot achieve at a level that is consistent with his mental abilities (Adel Abdallah, 2006).

Some consider programmed instruction a modern technological method, while others argue that its roots extend to the era of the philosophers of ancient Greece. Socrates used the method of dialogue and discussion in his teaching. The give-and-take method helps the student by taking advantage of his answer to pose new questions; it is a way to lead the learner to the desired goals. Plato pointed to the need for the effective answer and the small-steps principle in order to avoid coercive methods in education, while Quintilian mentioned that the learner depends on the principle of small steps during his learning, asking many questions in order to receive positive reinforcement. In the seventeenth century, Comenius described a kind of learning characterized by effectiveness, which increases learning and reduces the teacher's dominance. The psychologists of the nineteenth and twentieth centuries were closely associated with what is now known as programmed instruction, or the principle of reinforcement.
In 1925, the American psychologist Sidney Pressey invented the first learning machine, a small machine for the self-correction of tests: it contains multiple answers and enables the learner to discover his mistakes and work on correcting them. Pressey did not intend what is now known as programming prior to the invention of this machine; it nonetheless marked the beginning of interest in programmed learning (Kurbanoglu et al., 2006).

In the fifties, the philosophy of programmed instruction emerged the way we see it now, as a result of the experiments and research of the American psychologist Skinner, carried out on pigeons and rats, in which he found the relation between those results and human learning. He then conducted his experiments with his daughter and, as a result of the extent to which her mathematical skills improved, concluded his principles of programmed instruction. After that, he held several conferences related to programmed education and its principles, and afterwards many conferences have been held to evaluate its viability (Fathima, 2013).

The computerized curriculum is considered one of the novelties of technology, going back to the ideas of Skinner in the mid-fifties. Since then, computerized programs in general, and computerized educational programs in particular, have witnessed notable development and advancement. Computerized educational programs have proved their effectiveness in various educational situations. Looking at the literature, the advantages of computerized educational programs were categorized under a strategic objective whereby the school administration should perform the following elements: instruction, presentation, practicing, examination, maintenance, and transferring (Abu Khatwah et al., 2009). The computer has occupied a prominent place among modern techniques because of its high capability and the possibility of using educational programs to help students learn different subjects of different kinds, such as languages, where information is displayed in different and exciting ways that help learners repeat what they have learned, consolidate the information in their minds, and remedy gaps in their absorption of concepts. Applications related to the development of school curricula with the help of the computer depend heavily on computer usage for the analysis of lessons and their respective units, whether for the teacher or the learner (Simona, 1997).

The Fernald approach has been used in teaching spelling and writing; it is considered a gateway to the child's linguistic experience and an introduction to teaching the entire word. Fernald believed that overcoming the emotional issues related to reading that students face would be easier if the material provided aroused their interest. Moreover, the stories written by students, with assistance from the teacher whenever needed, serve as the basis for them to read afterwards and to select the words they want to learn. Repetition of their pronunciation helps the students write another story on their own, using the words saved in a special file; these words act as a database they can always refer to whenever needed. Fernald never supported the idea of cramming words, but focused on reading and writing the word as a whole (Elhanan Kaufman, 2007).

Cardinal and Smith (1994) conducted a study aimed to investigate the effect of the educational computer on students' achievement in different educational areas, including language. The study sample consisted of 60
students of a university in Virginia. The sample was divided into two groups: an experimental group that studied using a computer-based learning strategy focused on understanding and memorization, and a control group that also studied using the computer but without the learning strategy. The results of the study favored the experimental group over the control group in the study of Arabic language and skills through the computerized program.

Mines and Brandes (1995) aimed to examine how a group of school teachers and professors of US universities in the field of teacher education encourage students to interact and participate in learning. The information and data were collected through weekly meetings and interviews. The study confirmed the importance of active learning in teacher training and its impact on increasing students' participation and interaction in the educational learning process.

Bones and Thompson (2000) conducted a study to investigate the relationship between teachers' use of the computer and motivation and integration in terms of training, curriculum designing, collaborative learning, self-directed learning, and active learning. The researcher used the Mann-Whitney test to determine the relationship between motivation and teaching strategies. The study sample consisted of 445 teachers teaching grades nine through twelve in the public schools of the province of North Louisiana in the US. The results indicated the existence of a statistically significant relationship between the degree of computer application and the frequency of the teacher's use of it, the extent of training on the completion of the curriculum, and the extent of support for that completion.

Jallad (2000) conducted a study aimed at determining the effect of programmed education on students' achievement in the Islamic Education subject compared to traditional education. To achieve the goal of the study, the researcher posed the following two questions: a.
Are there significant differences between the achievement of tenth-grade students in the Islamic Education course who followed programmed instruction and those who followed traditional education? b. Is there a statistically significant difference at the level of significance (0.05) due to the gender and age of the student? The study sample consisted of 109 male and female students in the tenth grade. The sample was divided into two groups: 32 male students and 24 female students in the two experimental groups, and two control groups of 22 male and female students. The researcher designed a programmed text for the curriculum and administered the collective test to both the controlled and the experimental groups.

Dunn (2002) aimed to investigate the effect of learning by computer compared with traditional methods. The study sample consisted of 141 students from the primary ninth grade, chosen based on their prior achievements in the eighth grade. They were then divided into two groups: a control group that studied in the traditional way, and an experimental group that studied using the computer. The results of the study showed an improvement in the performance of both the control and experimental groups in the post-test compared to the pre-test. The improvement was greater in the experimental group, females outperformed males, and the results indicated that teaching reading using a computer improves the performance of students in standardized examinations.

Mottalag (2002) studied the impact of the use of programmed instruction on third-grade students' achievement in general science, comparing it to the traditional way. The study aimed to investigate the effect of the programmed education method and style, compared to the usual way of education, in raising the level of achievement in science. The researcher used the descriptive analytical method for all students of the desired grade in schools in Jordan. The most important result of the study is that programmed instruction reduces the learning time, so that the remaining time can be used in other educational activities for the benefit of the learner, and there is a marked increase in the level of educational attainment and achievement of the learners. The study recommended further studies on programmed self-learning in various other materials, as well as the importance of providing facilities, materials and devices commensurate with the programmed instruction method.

Souse (2003) also held a study aimed at investigating the impact of an educational program administered by computer on the development of creative writing skill. The study was conducted on basic ninth-grade students at the Ibn Abbas Secondary School for Boys. The researcher built a test for creative writing and designed two programs for the development of creative writing skills such as articles, stories, and dialogues; one was delivered by computer and the other without a computer. The study sample consisted of three groups: an experimental computer group composed of 28 students, an experimental group without computer of 30 students, and a control group consisting of 27 students. The results indicated the presence of statistically significant differences in creative writing skill.
Awad Allah (2003) also conducted a study aimed at recognizing the effect of the programmed education method and comparing it with the traditional method of education. Mohammed Al-Khair (2002) conducted a similar study on the views and attitudes of college professors toward the use of programmed self-education. Hayek (2005) also conducted a study aimed at building a computerized teaching model based on the use of multimedia and testing its impact on the development of creative reading skills of primary-stage students in Jordan. Around 110 students were selected from two public schools, a school for boys and another for girls. The results showed statistically significant differences in the performance of the students.

Ali (2006) made a study aimed at identifying the use of the programmed instruction method, instead of traditional methods, in various educational stages. His study addressed programmed instruction in different grades in general, and the middle grades in particular. His findings can be summarized in the following observations:

a. The programmed instruction method is considered one of the modern and effective teaching methods for various grades. This method increases students' achievement in scientific subjects and prepares them for long-term benefits.
b. Programmed instruction makes the role of the student positive and effective during the learning process, because the student progresses from easy to difficult in gradual steps that keep concentration high, so that he does not move from one step to another before mastering the previous step. In this process the student learns to think for himself in solving problems, and his self-reliance makes understanding, comprehension, memorization, and concentration very effective.
c. The programmed instruction method is useful in classrooms crowded with students, because all of them are engaged in the educational process at one time under the educational program.
d. Programmed instruction also takes into account the individual differences among students, allowing each student to walk through the program according to his abilities and aptitudes and at his own pace.
e. Programmed instruction takes care of feedback step by step as the student moves through the program. The student's success in these steps leads to reinforcement and encouragement and enhances his motive to continue the educational process.
f. The teacher's role becomes that of director and supervisor of the student. The teacher turns his attention to developing the tendencies, attitudes, and ways of thinking of the students, and this gives him an edge in achieving additional educational goals.
Hevens and others (2008) conducted a study on the impact of employing active teaching-learning strategies in geography. The importance of these strategies in involving learners in the classroom was compared to traditional methods of education, in which the teacher dominates the educational process and does not allow learners the opportunity to participate effectively in it. The study emphasized the importance of active teaching-learning strategies in geography and rejected the belief that active learning strategies are too difficult to implement in many educational situations because they require students' prior knowledge of the educational content, and because the application of the majority of active learning strategies requires a significant effort by teachers and students respectively.

Saleh (2010) held a study aimed at identifying the impact of the use of computerized educational lesson programs for learning the Arabic language on the achievement of primary first-grade students in Nablus municipality schools. The study sample was a purposive sample consisting of 313 students from the primary first-grade students in public schools as well as private UNRWA schools (2010). The results showed that there were statistically significant differences at the significance level (α = 0.05): the achievement of students of the primary first grade differed due to the school type and the facilities provided by the international relief agency, which marks statistically significant differences (α = 0.05). The achievement of the primary first-grade students in Arabic language in the post-test was attributed to the type of group in each school, in favor of the experimental group, with statistically significant differences at the level α = 0.05.

Abu Musa's (2010) study describes the features of training programs based on programmed instruction, a blend of different types of education: face-to-face, multimedia learning, and distance learning, with illustrated training module specifications that enable teachers to adapt to e-learning and technology requirements. The paper explains the role of electronic media in test preparation, and finally provides statistical and qualitative data related to the experience with the training program applied over the three years since 2007. Around 120 participants were trained; descriptive results showed the effectiveness of the training program, which contributed to bridging the gap between pedagogy and technology through the participants' self-reliance in designing and producing educational multimedia.
Among the distinguished contributions in this area of study, Alveabat (2013) made a study aimed at investigating the effectiveness of programmed learning based on the use of both self-learning and the traditional way, collecting data from Tafila Technical University students about the teaching methods for the first classes and their attitudes towards them. The study sample consisted of 58 students, chosen randomly from students of both the specialization in children with learning difficulties and the class-teacher track, concerning teaching modalities in the first classes. A collective test prepared by the researcher consisted of 45 multiple-choice items and reached a reliability coefficient of 0.86. The results showed that there are significant differences, in favor of the experimental group which studied using the method of programmed instruction. The researcher stressed the importance of adopting the programmed instruction method in the teaching of other courses of various specialties.

Alian and Waldibs (1999) summarized the following characteristics of Programmed Education:
a. Each student works alone.
b. Each student learns according to his own pace.
c. The course material is provided in frame form.
d. The student responds in it to the stimulus (the question) and is often successful.
e. It allows the student to know the correct answer.
f. This enhances the learning process.
g. Continuous self-evaluation and the student's feeling of success, step by step.
h. The program directs the student when he makes a mistake in the answer (Fathima, 2013).

Ibrahim (2002) summarized the following features of Programmed Education:
a. It takes into account the individual differences between learners.
b. It motivates the students to study.
c. It helps in self-learning.
d. It helps learning outside the school.
e. It contributes to handling the growing numbers of students.
f. It provides the learner with the modern advanced sciences.

Based on the various studies carried out, the following are the four acts performed by learners while working through Programmed Education:
a. Read the new information given in the frame.
b. Answer the questions in the frame.
c. Make sure of the answer by comparing it with the correct answer given.
d. Go to the next frame and repeat those steps (this step applies only to the linear program).

Methodology

The following is a description of the method and procedures followed in this research to achieve the desired objectives of the study. It includes a description of the educational community and its sample, the study tool and the methods of verifying its validity and reliability, the variables of the study, and the statistical processes used by the researcher to answer the questions of the study.

Data Sampling

The study population consisted of children with learning difficulties in primary schools of the northern border area in Saudi Arabia. The study sample consisted of 40 male children who were selected in a deliberate manner from a Center of Learning Difficulties in the Northern Border Area of Saudi Arabia. The sample was divided randomly into two groups: 20 children in the experimental group, who would take the training program, and 20 children in the control group, who would not be exposed to the training program.

Tools of the Study

The researcher designed the training program to be applied to the experimental sample, and tested performance on the writing skill test.

The Training Program

The researcher prepared plans for the writing skill that employ drawing, story narrative, and the representation of computational strategies, and a number of diverse activities were proposed to take into account individual differences among children by employing computerized methods, techniques and presentations. The program consisted of 12 training meetings.

Sincerity of the Training Program

To verify the authenticity of the prepared training program, it was submitted to a group of arbitrators, specialists, and experts in the area of curricula, teaching methods, the Arabic language and literature, and supervisors from the teaching staff of the university, who offered their observations and suggestions to modify some of the activities or exercises or to add new exercises.
Tools to Measure Writing

The researcher prepared a checklist to monitor and observe the performance of students in writing. The tool consisted of 30 items distributed over two dimensions, forming letters into words and segmenting words into letters; the sub-items were derived by analyzing the whole skill into specific sub-skills.

Authenticity of the Writing Tool

To check the authenticity of the tool, it was presented to a committee of arbitrators, specialists, and experts from the university faculty members in the area of special curricula and methods of teaching the Arabic language. Their observations and suggestions were taken into account and the necessary modifications were made to the tool.

The Stability of the Writing Measurement Tool

To check the stability of the writing measurement tool, the stability coefficient was extracted in two ways. The first was application and re-application on an exploratory sample of the study, computing the Pearson correlation coefficient between the children's results in the two applications. The second was agreement between observers: the tool was applied to an exploratory sample with the help of two examiners, and the degree of agreement between their estimates was established using the Cooper equation. For the tool as a whole, the resulting coefficients were 0.87, 0.82, and 0.90 (Table 1).

The Approach of Study

The quasi-experimental approach was used in this study because it is the most appropriate. The study sample was distributed into two groups, one of them assigned randomly to the training program, and the other left as a control group not exposed to this program.

Statistical Treatment

To answer the study questions and check its hypotheses, means and standard deviations were extracted for the scores of the experimental and control groups on the checklist prepared for forming letters into words and segmenting words into letters. The Paired Samples t-Test was conducted to identify the differences between the pre- and post-measurements for the experimental and control groups, and the associated analysis of variance (ANOVA) was administered to detect the effect of the training program on the development of this skill.
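Before turning to the results, the statistical treatment just described can be sketched in a few lines of Python with SciPy. The scores below are synthetic placeholders (the study's data appear only in summary form in Tables 2 to 9), and the assumed gains are invented for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
# Hypothetical pre/post writing-skill scores for 20 children per group
pre_exp = rng.normal(10, 2, 20)
post_exp = pre_exp + rng.normal(8, 2, 20)   # assumed large training gain
pre_ctl = rng.normal(10, 2, 20)
post_ctl = pre_ctl + rng.normal(3, 2, 20)   # assumed smaller practice gain

t_pre, p_pre = stats.ttest_ind(pre_exp, pre_ctl)      # pre-test group equivalence
t_post, p_post = stats.ttest_ind(post_exp, post_ctl)  # post-test group difference
t_pair, p_pair = stats.ttest_rel(pre_exp, post_exp)   # within-group pre/post change

for name, p in [("pre", p_pre), ("post", p_post), ("paired", p_pair)]:
    print(f"{name}: p = {p:.4f}, significant at alpha = 0.05: {p < 0.05}")
```

A non-significant pre-test comparison supports group parity, while significant post-test and paired comparisons correspond to the pattern the study reports.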
Discussion on Results

The first question: Is there an effect of programmed education on developing writing skills (putting letters together to form words) for children with learning difficulties in the primary stage in the Northern Borders District in the Kingdom of Saudi Arabia? Arithmetic means and standard deviations for the post-test measurement of the experimental and control groups were extracted. The Independent Samples t-Test and Paired Samples t-Test were also applied to detect differences between the two groups on the post-test and to identify the differences between the means of the pre- and post-measurements for each group on writing skills.

Equality of the groups on writing skills (forming letters into words): Table 2 shows that the values of (t) are weak and not statistically significant at the significance level (α ≤ 0.05) on the pre-measurement. This shows the parity between the two groups (control and experimental) on that measurement. Table 3 shows that the value of (t) is statistically significant at the significance level (α ≤ 0.05) between the two groups (experimental and control) on the post-test measurement; the difference is in favor of the experimental group, whose students performed better on the post-test than the control group. Table 4 shows the presence of statistically significant differences between the pre- and post-measurements for both groups in writing skills (forming letters into words): the value of (t) for the experimental group is 23.338 (p = 0.00) and for the control group 23.117 (p = 0.00), with the differences in favor of the post-test in both cases, although the level of development in the experimental group is better than in the control group.

To identify the effectiveness of the training program in developing writing skills (forming letters into words) for children with learning difficulties in primary school in the northern border region of Saudi Arabia, the associated analysis of variance (ANOVA) was applied. Table 5 shows the presence of statistically significant differences at the level of significance (0.05) according to the group variable, with the differences in favor of the experimental group. The results showed no difference in writing skills (forming letters into words) on the pre-program measurement, which confirms the parity between the two groups.
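The ANOVA step reported in Table 5 (and again in Table 9 below) can be sketched in the same spirit. Note that scipy's f_oneway is a plain one-way ANOVA, a simplification of the associated analysis of variance the study describes, and the scores are again synthetic:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=2)
post_exp = rng.normal(18, 2, 20)  # hypothetical post-test scores, experimental group
post_ctl = rng.normal(13, 2, 20)  # hypothetical post-test scores, control group

f_stat, p_val = stats.f_oneway(post_exp, post_ctl)
print(f"F = {f_stat:.2f}, p = {p_val:.4g}")  # with two groups, F equals t squared
```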
The second question: Is there an effect of programmed education on developing writing skills (segmenting words into letters) for children with learning difficulties in the primary stage in the Northern Borders District in the Kingdom of Saudi Arabia?

To answer this question, arithmetic means and standard deviations for the post-test measurement of the experimental and control groups were extracted, and the Independent Samples t-Test was applied to detect differences between the two groups on the post-test. The Paired Samples t-Test was applied to identify the differences between the means of the pre- and post-measurements for each group on the test of writing skills (segmenting words into letters). Table 6 shows that the values of (t) are weak and not statistically significant at the significance level (α ≤ 0.05) on the pre-program measurement, which shows the parity between the two groups (control and experimental) before the program. Table 7 shows that the value of (t) is statistically significant at the significance level (α ≤ 0.05) between the two groups (experimental and control) on the post-test measurement, with the difference in favor of the experimental group, whose students performed better than the control group. Table 8 shows the presence of statistically significant differences between the pre- and post-measurements for both groups in writing skills (segmenting words into letters): the value of (t) for the experimental group, 28.296 (p = 0.000), exceeds that of the control group, 21.918 (p = 0.000), showing that the level of development in the experimental group is better than in the control group.

Equality of the groups on writing skills (segmenting words into letters): To learn about the effectiveness of the training program for the development of writing skills of children with learning difficulties in primary school in the northern border region of Saudi Arabia, the associated analysis of variance (ANOVA) was applied to learn about the differences between the two groups in writing skills (segmenting words into letters) prior to the test. Table 9 shows the presence of statistically significant differences at the level of significance (0.05) according to the group variable, with the differences in favor of the experimental group. The results showed no difference in writing skills (segmenting words into letters) on the pre-measurement, which confirms the parity between the two groups.

The present study conforms to the findings of the previous studies and supports the effectiveness of Programmed Education for pupils with learning difficulties. Though the study is limited to difficulties related to writing skills, further research in other areas of skills development would further enrich this area of study.

Conclusion

Although this is only one study of a convenience sample of primary students, the findings do suggest that PE measures of writing skill show very promising results. This supports its suitability to be incorporated in the regular curriculum. Further work should be done to develop standardized norms for PE, which should then be implemented with representative primary school students with learning difficulties. It would be even more encouraging to carry out similar case studies in other parts of the Kingdom as well, to have more authentic and valid grounds for implementing PE for all students and meeting the educational objectives in a real sense.

Table 1. Stability of the study tool.
Table 2. Independent Samples t-Test for the equality of the two groups in writing skills (forming letters into words) on the pre-measurement (N = 40).
Table 3. Independent Samples t-Test results for the differences between the two groups in writing skills (forming letters into words) on the post-test (N = 40).
Table 4. Paired Samples t-Test for the differences between the pre- and post-measurements of the experimental and control groups in writing skills (forming letters into words).
Table 5. Associated analysis of variance (ANOVA) test results for the differences between the two groups in writing skills (forming letters into words) on the post-test.
Table 6. Independent Samples t-Test for the equality of the two groups in writing skills (segmenting words into letters) on the pre-program measurement (N = 40).
Table 7. Independent Samples t-Test results for the differences between the two groups in writing skills (segmenting words into letters) on the post-test (N = 40).
Table 8. Paired Samples t-Test for the differences between the pre- and post-measurements of the experimental and control groups in writing skills (segmenting words into letters).
Table 9. Statistically significant differences at the level of significance in writing skills.
2018-12-04T00:47:12.616Z
2015-09-28T00:00:00.000
{ "year": 2015, "sha1": "9b07a6a93f6799c2b051afdcfcefe88b8c51172b", "oa_license": "CCBY", "oa_url": "https://ccsenet.org/journal/index.php/jedp/article/download/52532/28654", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "9b07a6a93f6799c2b051afdcfcefe88b8c51172b", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Medicine" ] }
220251369
pes2o/s2orc
v3-fos-license
Systemic Review: Is an Intradiscal Injection of Platelet-Rich Plasma for Lumbar Disc Degeneration Effective?

Current studies evaluating the outcomes of intradiscal platelet-rich plasma (PRP) injections in degenerative disc disease (DDD) are limited. The purpose of this review was to determine if an intradiscal injection of PRP for degenerative discs results in a statistically significant improvement in clinical outcomes. A systematic review was performed using Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Level I-IV investigations of intradiscal PRP injections in DDD were sought in multiple databases. The Modified Coleman Methodology Score (MCMS) was used to analyze the methodological quality of the studies. Only the outcome measurements used by more than 50% of the studies were included in the data analysis. The study heterogeneity and nature of the evidence (mostly retrospective, non-comparative) precluded meta-analysis. Pre- and post-injection pain visual analog scales (VAS) were compared using two-sample Z-tests. Five articles (90 subjects, mean age 43.6 ± 7.7 years, mean follow-up 8.0 ± 3.6 months) were analyzed. Four articles were level IV evidence and one article was level II. The mean MCMS was 56.0 ± 10.3. There were 43 males and 37 females (10 unidentified). Pain VAS significantly improved following lumbar intradiscal PRP injection (69.7 mm to 43.3 mm; p < 0.01). Two patients (2.2%) experienced lower extremity paresthesia after treatment. One patient (1.1%) underwent re-injection. No other complications were reported. In conclusion, intradiscal injection of PRP for degenerative discs resulted in a statistically significant improvement in VAS with low re-injection and complication rates in this systematic review. It is unclear whether the improvements were clinically significant given the available evidence. The low level of evidence available (level IV) does not allow for valid conclusions regarding efficacy; however, the positive results suggest that further higher-quality studies might be of value.

Introduction And Background

Low back pain (LBP) is one of the most common causes of disability in the United States, with over 80% of American adults experiencing one or more lifetime episodes [1][2]. Although various organic and inorganic pathologies may cause LBP, degenerative disc disease (DDD) accounts for more than 40% of chronic LBP in the United States [3][4]. In spite of the high prevalence and morbidity associated with DDD, current treatment options are limited. Common treatments of early disease consist of a combination of conservative measures, such as bed rest, non-steroidal anti-inflammatory drugs (NSAIDs), physical therapy, and analgesic injections, which have been shown to decrease symptoms but do not slow the progression of the disease [5][6][7][8]. Treatment of later disease consists of surgical approaches, including discectomy and spinal fusion, which are invasive, expensive, and have high rates of postoperative complications [9][10][11][12][13].

In recent years, platelet-rich plasma (PRP) has emerged as a relatively non-invasive treatment option for DDD unresponsive to conservative measures [14]. PRP is an autologous concentrate of various cells and growth factors acquired from centrifuged whole blood, with growing evidence for its application in the healing response across different specialties, particularly in orthopedics.
PRP has been shown to achieve its effects by delivering a high concentration of growth factors, including transforming growth factor-β, insulin-like growth factor, platelet-derived growth factor, and vascular endothelial growth factor, which activate cell proliferation and the differentiation of vascularized cells [15]. Thus, various studies indicate its effective application in areas where vascularity is relatively preserved, including ligament, tendon, and muscle pathologies such as osteoarthritis, tendinopathies, lateral epicondylitis, and muscular injuries [16][17]. On the other hand, the vascular supply to the human intervertebral disc recedes completely during the developmental process, leaving virtually no direct blood supply to the disc in a healthy adult [18]. Therefore, it may be hypothesized that growth factors will have minimal effect on degenerated discs. However, PRP has also been demonstrated to have an anti-inflammatory effect by decreasing pro-inflammatory mediators at the injected site, primarily by reducing the transactivation of the inflammatory regulator nuclear factor-kappa B, and by inhibiting the inflammatory enzymes cyclooxygenase 2 and 4, metalloproteinases, and disintegrins [19][20][21]. This latter effect of PRP makes it a potential injectable option for the management of discogenic pain in DDD.

Current studies evaluating the outcomes of intradiscal PRP injections in DDD are mostly limited to small case reports and retrospective studies. Thus, the purpose of this investigation was to determine if the intradiscal injection of PRP for DDD results in a statistically significant improvement in clinical outcomes with low re-injection and complication rates. The authors hypothesized that the procedure results in a statistically significant improvement in pain VAS with low re-injection and complication rates.

Review

Methods

A systematic review was registered with PROSPERO (International Prospective Register of Systematic Reviews) on August 31, 2017 (Registration # CRD42017075843). PRISMA guidelines were followed [22]. Eligible studies consisted of level I-IV (via the Oxford Centre for Evidence-Based Medicine (CEBM)) therapeutic studies that investigated the outcomes of intradiscal PRP injections for lumbar DDD among adult human patients [23]. The diagnosis was made in each included study based on a combination of history, physical examination, and radiographs, including magnetic resonance imaging (MRI) for every patient. Studies that included non-DDD etiologies of back pain were excluded. Cadaveric studies, basic science and animal studies, diagnostic studies, economic studies, prognostic studies, level V evidence expert opinions, letters to editors, and review articles were excluded. Studies published in non-English languages were not excluded but were unidentified in the medical databases. In the event of different studies with duplicate subject populations, the study with the longer follow-up, higher level of evidence, greater number of subjects, or greater clarity of methods and results was included.

The authors conducted separate searches of the following medical databases: MEDLINE, Web of Science, and the Cochrane Central Register of Controlled Trials. Under the PROSPERO registration, similar prior systematic reviews and meta-analyses were sought and none were identified. The searches were performed on April 20, 2020. The search terms used were "platelet-rich plasma," "degenerative disc," "spine," and "injection."
The search results were reviewed for duplicates and against the inclusion criteria to determine the articles included in the final analysis (Figure 1). Two authors independently reviewed all articles using the methodology recommended by Harris et al. [24]. The study design, patient populations, and procedure technique were first identified. All lower back-specific patient-reported outcome scores, re-injection rates, and complication rates were analyzed. The levels of evidence were then assigned based on the Oxford Centre for Evidence-Based Medicine [23]. Study methodological quality was analyzed using the Modified Coleman Methodology Score (MCMS) [25]. The overall Strength-of-Recommendation Taxonomy (SORT) score was B and the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) score was C [26][27]. Study heterogeneity and the nature of the evidence (mostly retrospective, non-comparative) precluded meta-analysis; thus, a best-evidence synthesis was used instead [28]. Only the outcome measurements used by more than 50% of the studies were included in the data synthesis, to increase the power of the measurement over that of individual studies. A weighted mean of pre- and post-injection values from each study was calculated, and comparisons were made using two-sample Z-tests (http://in-silico.net/tools/statistics/ztest) with a p-value of less than 0.05 for significance. The individual changes in LBP visual analog scale (VAS) were compared with a previously reported minimal clinically important difference (MCID) of 22.5, substantial clinical benefit (SCB) of 32.5, and patient acceptable symptomatic state (PASS) of 33.5 [29][30].
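The two-sample Z-test used for these comparisons is straightforward to reproduce from summary statistics. A minimal sketch (ours) is shown below; the VAS means and sample size come from this review, while the standard deviations are illustrative placeholders, since the pooled SDs are not restated here:

```python
import math

def two_sample_z(m1, s1, n1, m2, s2, n2):
    """Two-sample Z-test from summary statistics (means, SDs, sizes)."""
    z = (m1 - m2) / math.sqrt(s1**2 / n1 + s2**2 / n2)
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p

# Pooled pre- vs post-injection VAS: 69.7 mm vs 43.3 mm, n = 90 each;
# SD = 20 mm is an assumed placeholder value, not a reported figure.
z, p = two_sample_z(69.7, 20.0, 90, 43.3, 20.0, 90)
print(f"z = {z:.2f}, p = {p:.2e}")  # p < 0.01, consistent with the reported result
```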
Results

Five articles were analyzed (Table 1) [31][32][33][34][35]. Four articles were level IV evidence and one article was level II. According to the MCMS, one article was good (score between 70 and 84), three articles were fair (scores between 55 and 69), and one article was poor (score less than 55). The mean MCMS was 56.0 ± 10.3. There were 90 patients analyzed: 43 males and 37 females (10 unidentified). The mean age was 43.6 ± 7.7 years, with a mean follow-up of 8.0 ± 3.6 months.

PRP was obtained in all studies by the centrifugation of 30 to 200 mL of autologous blood to perform a fluoroscopy-guided injection of 1 to 5 mL of PRP directly into one or more symptomatic lumbar intervertebral discs (Table 2). Three studies confirmed symptomatic discs using provocative discography. The remaining two studies utilized MRI alone with a combination of history and physical exam. All studies performed intradiscal PRP injections after the failure of non-interventional management. None of the studies recorded the use of post-injection cryotherapy. One study approved the use of post-injection NSAIDs for unbearable pain. No study compared leukocyte-poor PRP to leukocyte-rich PRP; however, one study reported using leukocyte-poor PRP and one study reported using leukocyte-rich PRP. One study used a negative control with a placebo contrast injection and reported a significant improvement in FRI compared to controls at eight weeks post-treatment, but reported no significant difference in VAS and SF-36 pain at any time post-treatment. No comparison injections were made in the other studies.

Re-injection and complication rates were minimal. One patient (1.1%) required re-injection (Table 4). Two cases (2.2%) of transient lower extremity paresthesia in unspecified nerve distributions occurred one and six months post-treatment, both of which self-resolved within seven days. No other complications were reported.

Discussion

It was determined that an intradiscal injection of PRP for DDD results in a statistically significant improvement in VAS. Although all reviewed studies reported statistically significant improvement of VAS after an intradiscal PRP injection for DDD (69.7 mm to 43.3 mm; p < 0.01), no studies analyzed the clinical importance of the outcome scores. Various studies have shown that a statistically significant score change in outcomes does not imply a clinically significant change [36][37][38][39][40]. Thus, measuring the MCID was introduced to determine the smallest difference in outcome score that patients find beneficial [41]. SCB is comparable to MCID but seeks to further develop a standard that better reflects the envisioned benefit of an intervention [42]. PASS is also a similar concept, but instead represents the maximum amount of signs and symptoms beyond which patients consider themselves well [43]. Park et al. determined in a study of 105 patients with persistent LBP after lumbar surgery that the MCID and SCB for LBP VAS are 22.5 mm and 32.5 mm, respectively [29]. Furthermore, Tuback et al. determined in a study of 330 ankylosing spondylitis patients that the PASS for LBP VAS is 33.5 mm [30]. Of the two reviewed studies that reported individual data, only 19 patients (59.4%) met the MCID, and 12 patients (37.5%) met the SCB and PASS. This demonstrates that although the procedure results in a statistically significant improvement, a large portion of patients do not achieve a clinically meaningful improvement in outcomes.
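Classifying individual improvements against these thresholds is mechanical once per-patient changes are available. A sketch (ours, with hypothetical VAS improvements, since the underlying individual data are not reproduced here):

```python
MCID, SCB, PASS = 22.5, 32.5, 33.5  # thresholds in mm [29,30]

# Hypothetical per-patient pre-to-post VAS improvements (mm)
deltas = [12.0, 25.0, 41.0, 18.0, 35.0, 50.0, 23.0, 30.0]

for name, thr in [("MCID", MCID), ("SCB", SCB), ("PASS", PASS)]:
    frac = sum(d >= thr for d in deltas) / len(deltas)
    print(f"{name}: {frac:.0%} of patients met the threshold")
```

As in the review's pooled data, a cohort can clear statistical significance on the mean change while leaving a sizeable fraction of patients below each clinical threshold.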
All of the studies analyzed utilized an intradiscal injection of PRP to treat both the symptoms and the progression of disc degeneration in DDD. One of the analyzed studies, by Comella et al., also utilized stromal vascular fraction (SVF), obtained from a mini-lipoaspirate procedure of fat tissue, injected into the disc as a PRP-SVF suspension [32]. The authors hypothesized that SVF, which is a mixture of growth factors and adipose-derived stem cells (ADSCs), can be injected into the disc simultaneously with PRP to minimize inflammation while promoting healing. This study of 15 patients resulted in an average VAS decrease of 20.0 mm at the six-month follow-up with no adverse effects, neither superior nor inferior to the other studies that utilized PRP alone.

Numerous types of PRP systems exist, with varying leukocyte, platelet, and growth factor concentrations. Leukocytes consist of neutrophils, eosinophils, basophils, lymphocytes, and monocytes, which are responsible for providing an acute and chronic inflammatory response against foreign invaders. Studies comparing leukocyte-rich and leukocyte-poor PRP have demonstrated a significantly higher inflammatory response and cell death with leukocyte-rich PRP [44][45]. Of the studies included in the review, Levi et al. used leukocyte-rich PRP, which showed a significant decrease in VAS (24.6 mm) at the six-month follow-up [34]. Akeda et al. used leukocyte-poor PRP and reported a larger decrease in VAS (43.0 mm) at the six-month follow-up [33]. However, this review was unable to draw conclusions regarding outcome differences between leukocyte-rich and leukocyte-poor PRP, as none of the reviewed studies directly compared these formulations.

Complication and re-injection rates after intradiscal PRP injection were low. The re-injection rate in this study was 1.1%. Furthermore, besides the 2.2% incidence of transient paresthesia post-injection, there were no reported adverse effects, compared to the higher rates of complications with surgery, such as infection, hematomas, thromboembolic events, and adjacent-level disease. Overall, this study demonstrates that intradiscal injection of PRP for DDD leads to clinical improvement with low complication and re-injection rates. However, further higher-quality randomized controlled trials are necessary to justify the use of PRP over more cost-effective treatment methods.

There are several limitations among the studies included in this review. Four of the five articles were level IV evidence, which limits the strength of the results [31][32][33][34]. None of the studies used a double-blinded approach, producing potential bias. The average study methodological quality, as assessed by the MCMS, was fair. The assimilation of heterogeneous, low methodological-quality studies with VAS is a significant limitation; however, the authors minimized this as much as possible with strict study eligibility and inclusion criteria, despite the level IV nature of the evidence. Furthermore, the heterogeneity of outcome measures used among the studies limited the data analysis to one outcome measure. Additionally, MCID, SCB, and PASS are used to compare individual differences between preoperative and postoperative outcomes, and a majority of the reviewed studies reported patient means and did not include individual statistics. Future studies can improve by designing prospective comparative trials, increasing study size, and standardizing clinical outcome measures, for example by using the VAS, Oswestry Disability Index (ODI), numeric rating scale (NRS), and functional rating index (FRI) simultaneously. Another possible limitation of this review is that other relevant studies on this topic could have been excluded, despite the systematic search.

Conclusions

Intradiscal injection of PRP for degenerative disc disease results in a statistically significant improvement in VAS with low re-injection and complication rates. Further randomized controlled studies that show a clinically relevant improvement in multiple outcome parameters are necessary to evaluate the true efficacy of this treatment.
2020-06-30T05:06:24.954Z
2020-06-01T00:00:00.000
{ "year": 2020, "sha1": "42b3c5bcd437bca86d4c30e2bbf694bbeb1f89c0", "oa_license": "CCBY", "oa_url": "https://www.cureus.com/articles/35016-systemic-review-is-an-intradiscal-injection-of-platelet-rich-plasma-for-lumbar-disc-degeneration-effective.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "42b3c5bcd437bca86d4c30e2bbf694bbeb1f89c0", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
12210710
pes2o/s2orc
v3-fos-license
Sharp tunable optical filters based on the polarization attributes of stimulated Brillouin scattering

Sharp and highly selective tunable optical band-pass filters, based on stimulated Brillouin scattering (SBS) amplification in standard fibers, are described and demonstrated. Polarization pulling of the SBS-amplified signal wave is used to increase the selectivity of the filters to 30 dB. Pump broadening via synthesized direct modulation was used to provide a tunable, sharp and uniform amplification window: pass-band widths of 700 MHz at half maximum and 1 GHz at the −20 dB points were obtained. The central frequency, bandwidth and shape of the filter can be arbitrarily set. Compared with scalar SBS-based filters, the polarization-enhanced design provides a higher selectivity and an elevated depletion threshold. ©2011 Optical Society of America

OCIS codes: (190.0190) Nonlinear optics; (290.5830) Scattering, Brillouin; (060.4370) Nonlinear Optics, Fibers

References and links
1. G. P. Agrawal, Fiber-Optic Communication Systems, third edition (Wiley, 2002), Chapter 8, pp. 330–403.
2. J. Capmany, B. Ortega, D. Pastor, and S. Sales, "Discrete-time optical processing of microwave signals," J. Lightwave Technol. 23(2), 702–723 (2005).
3. T. A. Strasser and T. Erdogan, "Fiber grating devices in high performance optical communication systems," chapter 10 of Optical Fiber Telecommunications IVA – Components, I. P. Kaminow and T. Li, eds. (Academic Press, San Diego, CA, 2002).
4. A. Yariv, chapter 4 in Optoelectronics, pp. 110–116 (Saunders College Publishing, Orlando, FL, 4th edition, 1991).
5. C. R. Doerr, "Planar lightwave devices for WDM," chapter 9 of Optical Fiber Telecommunications IVA – Components, I. P. Kaminow and T. Li, eds. (Academic Press, San Diego, CA, 2002).
6. T. Tanemura, Y. Takushima, and K. Kikuchi, "Narrowband optical filter, with a variable transmission spectrum, using stimulated Brillouin scattering in optical fiber," Opt. Lett. 27(17), 1552–1554 (2002).
7. A. Zadok, A. Eyal, and M. Tur, "GHz-wide optically reconfigurable filters using stimulated Brillouin scattering," J. Lightwave Technol. 25(8), 2168–2174 (2007).
8. R. W. Boyd, Nonlinear Optics, third edition (Academic Press, 2008).
9. M. Nikles, L. Thévenaz, and P. Robert, "Brillouin gain spectrum characterization in single-mode optical fibers," J. Lightwave Technol. 15(10), 1842–1851 (1997).
10. J. C. Yong, L. Thévenaz, and B. Y. Kim, "Brillouin fiber laser pumped by a DFB laser diode," J. Lightwave Technol. 21(2), 546–554 (2003).
11. A. Loayssa and F. J. Lahoz, "Broadband RF photonic phase shifter based on stimulated Brillouin scattering and single side-band modulation," IEEE Photon. Technol. Lett. 18(1), 208–210 (2006).
12. A. Loayssa, J. Capmany, M. Sagues, and J. Mora, "Demonstration of incoherent microwave photonic filters with all-optical complex coefficients," IEEE Photon. Technol. Lett. 18(16), 1744–1746 (2006).
13. Z. Zhu, D. J. Gauthier, and R. W. Boyd, "Stored light in an optical fiber via stimulated Brillouin scattering," Science 318(5857), 1748–1750 (2007).
14. L. Thévenaz, "Slow and fast light using stimulated Brillouin scattering: a highly flexible approach," in Slow Light – Science and Applications, J. B. Khurgin and R. S. Tucker, eds. (CRC Press, 2009), pp. 173–193.
15. A. Zadok, A. Eyal, and M. Tur, "Stimulated Brillouin scattering slow light in optical fibers," Appl. Opt. 50(25), E38–E49 (2011).
16. A. Zadok, E. Zilka, A. Eyal, L. Thévenaz, and M.
Tur, “Vector analysis of stimulated Brillouin scattering amplification in standard single-mode fibers,” Opt. Express 16(26), 21692–21707 (2008). #152142 $15.00 USD Received 1 Aug 2011; revised 31 Aug 2011; accepted 1 Sep 2011; published 21 Oct 2011 (C) 2011 OSA 24 October 2011 / Vol. 19, No. 22 / OPTICS EXPRESS 21945 17. A. Zadok, S. Chin, L. Thévenaz, E. Zilka, A. Eyal, and M. Tur, “Polarization-induced distortion in stimulated Brillouin scattering slow-light systems,” Opt. Lett. 34(16), 2530–2532 (2009). 18. M. Wuilpart, “Distributed measurement of polarization properties in single-mode optical fibres using a reflectometry technique”, Ph.D. Thesis, Faculte Polytechnique de Mons (2003). 19. H. Sunnerud, C. Xie, M. Karlsson, R. Samuelsson, and P. Andrekson, “A comparison between different PMD compensation techniques,” J. Lightwave Technol. 20(3), 368–378 (2002). 20. C. Y. Wong, R. S. Cheng, K. B. Letaief, and R. D. Murch, “Multiuser OFDM with adaptive subcarrier, bit, and power allocation,” IEEE J. Sel. Areas Comm. 17(10), 1747–1758 (1999). 21. M. Sagues and A. Loayssa, “Orthogonally polarized optical single sideband modulation for microwave photonics processing using stimulated Brillouin scattering,” Opt. Express 18(22), 22906–22914 (2010). Introduction Optical tunable filters are widely used for channel selection within dense wavelength division multiplexing (DWDM) telecommunication networks [1], for the reduction of amplified spontaneous emission noise following optical amplification [1], as well as in microwave photonic processing setups [2].The primary figures of merit for tunable optical filters are low insertion loss, sharp transition between the pass-band and stop-bands, high side-lobe suppression, and a broad tuning range.Several mature technologies are available for the realization of passive tunable optical filters, such as fiber Bragg gratings (FBGs) [3], Fabry-Perot etalons (FPs) [4], Mach-Zehnder interferometers and ring resonators in planar lightguide circuits (PLCs) [5].In such passive filters the bandwidth and spectral transmission shape are typically fixed.In contrast, active tunable optical filters allow for adjusting not only the transmission wavelength, but also the width and shape of the pass-band as well.In addition, active filters may amplify the signal within the frequency range of choice. 
Active tunable optical filters have been previously proposed and demonstrated based on stimulated Brillouin scattering (SBS) in standard optical fibers [6,7]. SBS requires the lowest activation power of all non-linear effects in silica optical fibers. In SBS, a strong pump wave and a typically weak, counter-propagating signal wave optically interfere to generate, through electrostriction, a traveling longitudinal acoustic wave. The acoustic wave, in turn, couples these optical waves to each other [8]. The SBS interaction is efficient only when the difference between the optical frequencies of the pump and signal waves is very close (within a few tens of MHz) to a fiber-dependent parameter, the Brillouin shift Ω_B, which is on the order of 2π·11×10⁹ rad/s in silica fibers at room temperature and at telecommunication wavelengths [8]. An input signal whose frequency is Ω_B lower than that of the pump (the 'Stokes wave') experiences SBS amplification. SBS has found numerous applications, including distributed sensing of temperature and strain [9], fiber lasers [10], optical processing of high-frequency microwave signals [11,12] and even optical memories [13]. Over the last six years SBS has been highlighted as the preferred mechanism in many demonstrations of variable group delay setups [14,15], often referred to as slow and fast light.

In previous demonstrations, selective SBS amplification with an arbitrary central frequency and a sharp pass-band of up to 2.5 GHz width was achieved [6,7]. The amplification bandwidth was broadened using synthesized modulation of the pump wave [7]. The central frequency, bandwidth and gain selectivity of the filters were all separately tunable. However, the selective amplification of the filters was limited by the onset of the amplified spontaneous emission that is associated with SBS (SBS-ASE), and use of the filters was restricted to relatively weak signal power levels by pump depletion. In this paper, we enhance the spectral selectivity of SBS tunable filters, and elevate their depletion threshold. The solution relies on the polarization attributes of SBS in standard, weakly birefringent fibers. A vector analysis of SBS reveals that the state of polarization (SOP) of the amplified signal is drawn towards a particular state, which is governed by the SOP of the pump [16]. That particular state could be made different from the output polarization of unamplified, out-of-band signal components, which are unaffected by SBS. Based on this principle, the filters described in this work combine a relatively modest SBS amplification within the filter pass-band with polarization discrimination for out-of-band rejection. A 700 MHz-wide, sharp band-pass filter with 30 dB selectivity is demonstrated experimentally.

Principle of operation

Consider the Jones vector E_sig(z) of a monochromatic signal of optical frequency ω_sig, entering the fiber at z = 0, where z denotes the position along a fiber of length L.
A broadened, counter-propagating pump wave of power spectral density (PSD) P_p(ω) enters the fiber at z = L. We denote the unit Jones vector of the pump wave as ê_pump(z). The same {x, y} coordinate axes are used for both Jones vectors (as in [16]). We neglect linear losses, as well as polarization mode dispersion effects within the spectral range of Ω_B ~ 2π·11 GHz. The evolution of the signal along the fiber is governed by [16]:

$$\frac{d\vec{E}_{sig}(z)}{dz} = \frac{1}{2}\,g(\omega_{sig})\,\hat{e}_{pump}^{*}(z)\,\hat{e}_{pump}^{T}(z)\,\vec{E}_{sig}(z) + \frac{dT(z)}{dz}\,T^{-1}(z)\,\vec{E}_{sig}(z), \qquad (1)$$

where T(z) is the Jones matrix which describes the linear signal propagation along the fiber up to point z, and g(ω_sig) (in units of m⁻¹) is given by a convolution of the pump PSD with the inherent Lorentzian line shape of the SBS process [14,15]:

$$g(\omega_{sig}) = \gamma_{0}\int P_{p}(\omega)\,\frac{(\Gamma_{B}/2)^{2}}{\left(\omega-\Omega_{B}-\omega_{sig}\right)^{2}+(\Gamma_{B}/2)^{2}}\,d\omega. \qquad (2)$$

Here Γ_B ~ 2π·30×10⁶ rad/s is the SBS linewidth, and γ0 is the SBS gain coefficient in units of [W·m]⁻¹. The evolution of the counter-propagating, undepleted pump is governed by birefringence alone:

$$\hat{e}_{pump}(z) = \left[T(L)\,T^{-1}(z)\right]^{T}\hat{e}_{pump}(L), \qquad (3)$$

where the superscript T stands for the transpose operation. Zadok et al. [16] have shown that the SBS amplification process in a birefringent fiber is characterized by maximum and minimum values of the signal amplitude gain, G_max(ω_sig) and G_min(ω_sig), respectively. The two gain values are complex, and they vary with the signal frequency. For the broadened, uniform P_p(ω) used in this work, the absolute values of G_max and G_min become nearly frequency-independent within the amplification bandwidth [14,15] (see Eq. (2)). The maximum and minimum gain values are associated with a pair of orthogonal SOPs of the signal at the input end of the fiber, whose unit Jones vectors we denote as ê^in_sig,max and ê^in_sig,min [16]. The two extreme gain values are also associated with a pair of orthogonal SOPs of the signal output, ê^out_sig,max and ê^out_sig,min. Both the input and the output pairs of SOPs were shown to be nearly frequency independent within the amplification bandwidth [17]. In sufficiently long, standard fibers, being weakly and randomly birefringent, the signal SOPs associated with maximum and minimum SBS amplification are related to those of the pump wave by [16]:

$$\hat{e}_{sig,max}^{in} = \hat{e}_{pump}^{*}(z=0), \qquad (4)$$

$$\hat{e}_{sig,min}^{in} = \hat{e}_{pump}^{*\perp}(z=0). \qquad (5)$$

In Eqs. (4) and (5), the superscript *⊥ denotes the orthogonal of the conjugate. Based on Eqs. (3)-(5) and the fact that T(z) is unitary, we find that the signal SOPs of maximum and minimum amplification at the fiber output are simply related to the corresponding input states by the birefringence matrix T(L):

$$\hat{e}_{sig,max}^{out} = T(L)\,\hat{e}_{sig,max}^{in}, \qquad \hat{e}_{sig,min}^{out} = T(L)\,\hat{e}_{sig,min}^{in}. \qquad (6)$$

For low pump power values, the integrated impact of the Brillouin amplification almost solely depends on the relative orientations of the pump and signal SOPs along the fiber, as determined by the fiber birefringence. Hence, it is not surprising that the relationships of Eq. (6) do not depend on the Brillouin interaction. Yet, it is interesting to note that both numerically and experimentally, Eqs. (4)-(6) also hold, at least approximately, even for strong pumps and considerable Brillouin gains [16].

An input signal of arbitrary SOP can be decomposed along the basis of ê^in_sig,max and ê^in_sig,min:

$$\vec{E}_{sig}(0) = a\,\hat{e}_{sig,max}^{in} + b\,\hat{e}_{sig,min}^{in}, \qquad |a|^{2}+|b|^{2}=1. \qquad (7)$$

Following SBS amplification, the output signal vector becomes:

$$\vec{E}_{sig}(L) = a\,G_{max}(\omega_{sig})\,\hat{e}_{sig,max}^{out} + b\,G_{min}(\omega_{sig})\,\hat{e}_{sig,min}^{out}. \qquad (8)$$

On the other hand, if the signal wave is subject to birefringence alone, the output vector is instead given by:

$$\vec{E}_{sig}^{biref}(L) = T(L)\,\vec{E}_{sig}(0) = a\,\hat{e}_{sig,max}^{out} + b\,\hat{e}_{sig,min}^{out}. \qquad (9)$$

For long enough [16], randomly and weakly birefringent fibers, the expected magnitudes of the maximum and minimum amplification are |G_max| = exp[g(ω_sig)L/3] and |G_min| = exp[g(ω_sig)L/6], and unless a is vanishingly small, Eq. (8) describes polarization pulling of the output probe wave towards a particular state, ê^out_sig,max, which is determined by the pump polarization. The effectiveness of the pulling is governed by the ratio G_max/G_min. Equations (8) and (9) also show that SBS introduces a difference between the output SOP of amplified signal components, for which g(ω_sig) is significant, and that of unamplified components, for which g(ω_sig) is negligible. It is therefore possible to further discriminate between amplified and unamplified spectral components of a broadband signal wave, using a properly aligned polarizer.
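The pulling described by Eqs. (7)-(9) is easy to verify numerically. The short Python sketch below is an illustration only, not the simulation used later in the text: it works directly in the basis of the two output eigenstates, and assumes the long-fiber gain statistics quoted above, for which |G_min| = |G_max|^(1/2).

```python
# Illustration of polarization pulling, Eqs. (7)-(9), in the basis where
# e_out_max = (1, 0) and e_out_min = (0, 1). Gain values are illustrative.
import numpy as np

a, b = np.sqrt(0.5), np.sqrt(0.5)            # equal input projections, Eq. (7)
for G_max in [1.0, 3.0, 10.0, 30.0]:
    G_min = np.sqrt(G_max)                   # consistent with |G_min| = |G_max|**0.5
    E_out = np.array([a * G_max, b * G_min]) # Eq. (8) in this basis
    E_out = E_out / np.linalg.norm(E_out)
    frac = abs(E_out[0]) ** 2                # fraction of output power in e_out_max
    print(f"Gmax/Gmin = {G_max / G_min:5.2f}: {100 * frac:5.1f}% of the output "
          "power lies along e_out_max")
```

As the pump power, and hence the ratio G_max/G_min, grows, virtually all of the output power is pulled into ê^out_sig,max.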
Let ê_pol denote the state of a polarizer placed at the signal output, z = L, and let p_max = ê†_pol·ê^out_sig,max and p_min = ê†_pol·ê^out_sig,min denote the projections of the two output SOPs on the polarizer axis. The amplitude of an in-band, SBS-amplified signal component at the polarizer output is:

$$A_{sig}^{SBS} = \hat{e}_{pol}^{\dagger}\,\vec{E}_{sig}(L) = a\,G_{max}\,p_{max} + b\,G_{min}\,p_{min}, \qquad (10)$$

whereas the corresponding amplitude of an out-of-band component, which is subject to birefringence alone, is:

$$A_{sig}^{biref} = \hat{e}_{pol}^{\dagger}\,\vec{E}_{sig}^{biref}(L) = a\,p_{max} + b\,p_{min}. \qquad (11)$$

The in-band amplitude of Eq. (10) may be rewritten as:

$$A_{sig}^{SBS} = a\,G_{max}\,p_{max} + b\,G_{min}\,p_{min} = a\,p_{max}\left(G_{max}-G_{min}\right). \qquad (12)$$

The final equality in Eq. (12) is met when Eq. (11) is set to zero. Due to the differential gain of SBS, in-band components are retained and even amplified.

To calculate the SBS gain of the signal components we assume the signal input to be of unity power (|a|² + |b|² = 1). Subject to the constraint of complete out-of-band rejection (A^biref_sig = 0 in Eq. (11)), together with |a| = |b| = 1/√2, it is easy to show that this in-band SBS gain can become as high as:

$$\left|A_{sig}^{SBS}\right|^{2} = \frac{1}{4}\left|G_{max}-G_{min}\right|^{2}. \qquad (13)$$

Thus, the amplification of the polarization-assisted SBS process, at the high pump power limit (|G_max| ≫ |G_min|), is only 6 dB lower than that of a corresponding scalar process, when the latter is aligned for maximum gain. However, while polarization discrimination can achieve very high rejection (theoretically infinite) for the unamplified out-of-band components, the power transfer for these components in the scalar process is unity. We conclude that the polarization discrimination filtering proposed in this work can achieve much higher selectivity than its scalar counterpart.

Fig. 1. Simulation results for the signal power gain at the output of an SBS amplification process, using a 3.6 km-long highly nonlinear fiber (HNLF) and a 0.7 GHz-wide, 13.5 dBm pump. The pump is assumed to be undepleted. In the lower curve (a), the input signal's SOP was chosen with equal projections on the states of maximum and minimum SBS amplification (|a| = |b| = 1/√2), and an output polarizer was aligned for complete rejection of the out-of-band components. In the upper curve (b), no output polarizer was used, and the signal input SOP was aligned for maximum amplification (a = 1).

Figure 1 presents simulation results of the relative optical power transmission of the signal wave, as a function of the frequency offset from the pass-band center. In the simulations, Eqs. (1) and (3) were directly integrated. A 3.5 km-long highly non-linear fiber (HNLF) with an SBS gain coefficient γ0 = 2.9 [W·m]⁻¹ was used. The fiber was simulated as 1000 cascaded birefringent media that are randomly oriented, with a polarization beat length of 40 m and a polarization coupling length of 10 m [16,18]. The pump power was set to 13.5 dBm, and its PSD was uniform within a 0.7 GHz-wide region. The pump was assumed to be undepleted. Curve 1(b) shows the signal power gain for an SBS process with no output polarizer, and with the signal input SOP aligned for maximum amplification (a = 1): a filtering selectivity of |G_max|² is obtained. In curve 1(a), the signal input SOP was chosen with |a| = |b| = 1/√2, and the output polarizer was aligned for complete rejection of the out-of-band components. The in-band gain of the polarization-assisted filter was lowered by 10 dB, in agreement with the prediction of Eq. (13): for the specific, rather modest pump power, G_min is not negligible. Nevertheless, the polarizer significantly attenuates the out-of-band components, so that the filtering selectivity is much improved. Two observations are to be noted in curve 1(a): (i) the slightly larger amplification towards the pass-band edges originates from the complex nature of G_max and G_min: while both are real numbers in the band center, they have different phases at the edges, resulting in somewhat higher values of |G_max − G_min|²; (ii) the gradual transition between the pass-band and stop-bands is due to the convolution form of g(ω_sig) (Eq. (2)). Lastly, the lower in-band amplification is expected to defer the onset of depletion to higher signal power levels.
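The coarse-grained model described above can be sketched in a few dozen lines. The following Python code is a simplified illustration rather than the authors' simulation: the random wave-plate statistics are a crude stand-in for the quoted beat and coupling lengths, and the in-band gain value g is an assumed figure, chosen so that the scalar gain is of order 20 dB, comparable to the measurements reported below.

```python
# Simplified sketch: fiber as N randomly oriented birefringent wave plates; the
# undepleted pump SOP is carried backward segment by segment (transpose rule of
# Eq. (3)); the signal is integrated with the gain projector of Eq. (1).
import numpy as np

rng = np.random.default_rng(7)
N, L = 1000, 3500.0                          # segments, fiber length [m]
dz = L / N
g = 2.0e-3                                   # assumed in-band g(w_sig) [1/m]

def random_waveplate(beat_len=40.0):
    """One segment: a linear retarder of length dz with a random axis angle."""
    delta = 2 * np.pi * dz / beat_len
    theta = rng.uniform(0.0, np.pi)
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return R @ np.diag([np.exp(1j * delta / 2), np.exp(-1j * delta / 2)]) @ R.T

segs = [random_waveplate() for _ in range(N)]

# Backward pump propagation (birefringence only): for a reciprocal segment the
# backward Jones matrix is the transpose of its forward matrix, as in Eq. (3).
e_pump = [None] * (N + 1)
e_pump[N] = np.array([1.0, 0.0], dtype=complex)   # pump SOP launched at z = L
for k in range(N - 1, -1, -1):
    v = segs[k].T @ e_pump[k + 1]
    e_pump[k] = v / np.linalg.norm(v)

def propagate(E0, gain):
    """Forward signal integration: Euler step of the gain term, then birefringence."""
    E = np.array(E0, dtype=complex)
    for k in range(N):
        proj = np.outer(e_pump[k].conj(), e_pump[k])   # e*_pump e^T_pump of Eq. (1)
        E = E + 0.5 * gain * dz * (proj @ E)
        E = segs[k] @ E
    return E

# Scalar reference: input aligned with the maximum-gain SOP of Eq. (4), no polarizer.
e_in_max = e_pump[0].conj()
scalar_db = 20 * np.log10(np.linalg.norm(propagate(e_in_max, g)))

# Polarization-enhanced filter: equal projections on the extreme input SOPs (Eq. (7)),
# with an output polarizer nulling the unamplified, birefringence-only output.
e_in_min = np.array([-e_in_max[1].conj(), e_in_max[0].conj()])  # orthogonal of conjugate
E_in = (e_in_max + e_in_min) / np.sqrt(2)
E_unamp = propagate(E_in, 0.0)
E_unamp = E_unamp / np.linalg.norm(E_unamp)
e_pol = np.array([-E_unamp[1].conj(), E_unamp[0].conj()])       # blocks out-of-band light
in_band_db = 20 * np.log10(abs(e_pol.conj() @ propagate(E_in, g)))

print(f"scalar max power gain: {scalar_db:5.1f} dB")
print(f"polarization-enhanced in-band gain: {in_band_db:5.1f} dB "
      "(out-of-band transfer is nulled by construction)")
```

With these assumed parameters the sketch reproduces the qualitative behavior of Fig. 1: a scalar gain near 20 dB, a polarization-enhanced in-band gain roughly 9-10 dB below it, and, in the ideal lossless model, complete rejection of the out-of-band light.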
Experiment results

The response of a tunable optical filter based on the vector properties of SBS was measured experimentally. The measurement setup is shown in Fig. 2. Light from a distributed feedback (DFB) laser diode was used as the SBS pump wave. The optical spectrum of the pump was broadened through direct modulation of the DFB injection current, using the output of an arbitrary waveform generator (see Fig. 3) [7]. Figure 4 shows a heterodyne measurement of the pump PSD, taken through beating of the pump wave with a detuned local oscillator on a broadband detector. The 700 MHz-wide pump wave was amplified to a power level of 13.5 dBm by an Erbium-doped fiber amplifier (EDFA), and launched into a 3.5 km-long, highly nonlinear fiber under test (FUT) via a circulator. The fiber length and SBS gain coefficient, as well as the pump power, matched those of the simulation of the previous section. A 1.5 nm-wide optical band-pass filter was used to reduce the ASE of the EDFA.

Fig. 2. Experimental setup for measuring the power transfer function of a polarization-enhanced SBS filter. The SBS signal wave is generated at the upper branch, using a tunable laser that is externally modulated. The electro-optic modulator (EOM) is driven by a radio-frequency tone in the range of 13.5–16.5 GHz, which in turn is amplitude-modulated by a 1 MHz sine wave. The optical polarization is adjusted by polarization controllers (PCs). The signal is launched into the fiber under test (FUT) through an isolator. The middle branch is used to realize a 0.7 GHz broadband pump wave, through the direct modulation of a DFB laser by a properly programmed arbitrary waveform generator (AWG). The pump power is amplified and adjusted to 13.5 dBm by an EDFA and a variable optical attenuator (VOA), and directed into the FUT by a circulator. The lower branch includes a 5 GHz-wide FBG for selecting a single sideband of the signal wave, an output polarizer and a photo-detector. The detected signal is analyzed by a radio-frequency spectrum analyzer (RFSA).

Light from a tunable laser diode was used to generate the SBS signal wave. The laser output was double-sideband modulated using a LiNbO3 Mach-Zehnder interferometer (electro-optic modulator, EOM), driven by a swept sine wave of frequency Ω_RF, in the range of 2π·13.5 to 2π·16.5 GHz. The tunable laser carrier wavelength and the radio-frequency (RF) modulation were chosen so that one of the sidebands scanned the SBS amplification spectral window that was induced by the pump wave, as in Fig. 5. The modulated signal wave was launched into the FUT from the end opposite to that of the pump input. Following propagation through the FUT, the signal was filtered by a 5 GHz-wide fiber Bragg grating (FBG), which retained only the sideband of interest and blocked the carrier wavelength, Rayleigh back-scatter of the pump wave, and the other sideband. Lastly, the signal passed through a polarization controller (PC) and a linear polarizer. The filtered signal power at the polarizer output was observed directly by a 125 MHz-wide photo-detector. In order to distinguish between the signal and the induced SBS-ASE, the RF sine wave at Ω_RF was further amplitude-modulated by a 1 MHz tone, and the detector output power was measured by an RF spectrum analyzer (RFSA), using zero-span at 1 MHz with a resolution bandwidth of 100 Hz.

First, the optical power transmission of a scalar SBS-based filter without polarization discrimination was characterized (as in [7]). In this set of measurements, the output polarizer was removed, and the input SOP of the signal was adjusted using PC4 for maximum amplification. The carrier frequency of the tunable laser was set to 15 GHz below the center of the SBS amplification band, as induced by the pump wave. Figure 6 shows the measured optical power gain of the sideband of interest as a function of Ω_RF, which was scanned around 2π·15 GHz. Measurements were taken for several signal power levels in the range of −18.1 to 2.7 dBm. A maximum selectivity of 22 dB was achieved in the undepleted pump regime. Pump depletion reduces the filter selectivity to 12.7 dB when the input signal power is raised to 2.7 dBm.
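The role of the 1 MHz amplitude modulation can be illustrated with a toy calculation: the SBS-ASE reaching the detector carries no such modulation, so reading the detected power spectrum at 1 MHz isolates the signal contribution from the ASE background. The power levels, modulation depth and ASE noise model below are illustrative assumptions only.

```python
# Toy model of the lock-in-style detection: an AM-modulated signal power plus an
# unmodulated, noisy SBS-ASE background; the 1 MHz spectral bin recovers the signal.
import numpy as np

fs, T = 50e6, 2e-3                       # sampling rate [Hz], record length [s]
t = np.arange(0, T, 1 / fs)
rng = np.random.default_rng(0)

p_sig, p_ase, m = 1.0, 5.0, 0.5          # signal power, ASE power (a.u.), AM depth
power = (p_sig * (1 + m * np.cos(2 * np.pi * 1e6 * t))       # modulated signal
         + p_ase * (1 + 0.2 * rng.standard_normal(t.size)))  # unmodulated noisy ASE

spec = np.abs(np.fft.rfft(power)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
tone = spec[np.argmin(np.abs(freqs - 1e6))]
print(f"1 MHz tone amplitude ~ {tone:.3f} (expected m*p_sig/2 = {m * p_sig / 2:.3f})")
```

Even with an ASE background five times stronger than the signal, the narrow measurement bandwidth around 1 MHz recovers the signal power transfer cleanly.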
Figure 7 shows the corresponding signal power gain at the output of a polarization-enhanced filter. In the absence of the input signal, ê^out_sig,max was first identified as the SOP of the SBS-ASE [16]. Then, using PC1, ê^out_sig,max was oriented at 45° with respect to the output polarizer (i.e., p_max,min = ±1/√2), as discussed in the previous section. Finally, PC4 was readjusted for maximum rejection of the unamplified signal components, thereby implementing |a| = |b| = 1/√2. Using the polarization-enhanced configuration, the filter selectivity for the higher optical signal power level of −3.1 dBm was improved considerably, from 16.5 dB to 30 dB. The depletion tolerance of the filter was improved as well: the same frequency response was obtained for signal power levels of −13.1 dBm and −3.1 dBm (see Fig. 7). The power gain within the pass-band of the polarization-enhanced filter was 8 dB lower than |G_max|², in good agreement with the predictions of Fig. 1.

Discussion

In this work we have demonstrated a significant enhancement in the performance of SBS-based tunable band-pass filters. The improvement relies on the vector properties of the SBS amplification: the output SOP of amplified signal components is pulled towards a specific state, whereas the SOP of unamplified signal components is unaffected by SBS. Polarization-based discrimination, with judicious alignment of the input SOPs, provides an improvement in the filter selectivity in the undepleted pump regime. In addition, the depletion threshold of the filter is elevated as well. Care must be taken, though, in the application of the filter above the depletion threshold, as the transfer of broadband Stokes waves could differ from that of monochromatic signals. The filter bandwidth can be arbitrarily increased (up to ~10 GHz [14]) by further pump broadening, at the expense of lower gain and increased vulnerability to PMD. Finally, proper tracking and compensation of slow polarization drifts may be necessary for the stable, long-term operation of the filters [19].

In our experiments a 0.7 GHz-wide, polarization-enhanced filter provided a 30 dB selectivity in amplifying input signals over a range of optical power levels, from −13.1 to −3.1 dBm. A scalar SBS-based filter, without polarization considerations, provided only 22 to 16.5 dB selectivity for the same input power levels of signal and pump. The obtained performance is superior to that of our previous work [7], in which a power gain selectivity of only 14 dB was achieved with a similar pump PSD and using the same fiber. The filter selectivity can be further increased using higher pump power levels [7]. The spectral power transmission of SBS-based tunable filters is very sharp: a 20 dB change in transmission occurs within a 200 MHz-wide spectral region. The central frequency of the filter can be varied arbitrarily, and its bandwidth can be independently scaled between 30 MHz and ~10 GHz through pump modulation. SBS pump synthesis can further allow for flexible pre-emphasis and spectral shaping of the filter pass-band.
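The measured 8 dB gap below |G_max|², and the 10 dB gap found in the simulations, can be reproduced with a back-of-the-envelope evaluation of Eq. (13). The sketch below assumes the long-fiber gain magnitudes |G_max| = exp(gL/3) and |G_min| = exp(gL/6) quoted in the principle-of-operation section, treats both as real (band center), and scans illustrative values of the gain-length product gL.

```python
# Penalty of the polarization-enhanced filter relative to a scalar filter aligned
# for maximum gain, as predicted by Eq. (13). gL values are illustrative.
import numpy as np

for gL in (4.0, 7.0, 12.0):
    G_max, G_min = np.exp(gL / 3), np.exp(gL / 6)
    scalar_db = 10 * np.log10(G_max ** 2)                      # scalar, aligned
    vector_db = 10 * np.log10(0.25 * (G_max - G_min) ** 2)     # Eq. (13)
    print(f"gL = {gL:4.1f}: scalar {scalar_db:5.1f} dB, "
          f"polarization-enhanced {vector_db:5.1f} dB, "
          f"penalty {scalar_db - vector_db:4.1f} dB")
```

For moderate pump powers the penalty comes out near 8-10 dB, and it approaches the asymptotic 6 dB limit only when |G_max| ≫ |G_min|.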
SBS-based photonic filters could also be highly attractive, for example, in selecting sub-bands of modern coherent optical communication systems, such as optical orthogonal frequency domain multiplexing (O-OFDM) [20]. The proposed technique can also be adapted to microwave-photonic filtering of broadband RF signals. In SBS-based microwave-photonic filters, an optical carrier is single-sideband modulated by the RF signal of interest. The modulation sideband undergoes frequency-selective SBS amplification as described above, and the modified RF waveform is recovered through beating of the sideband with the optical carrier upon detection. The RF power gain of the filter therefore scales with the optical power gain of the modulation sideband. SBS-based RF photonic filters would provide a sharp and aperiodic transfer function, with independently tunable central radio frequency, width and shape. The experimental transfer function obtained in the previous section is analogous to that of a sharp microwave-photonic filter, whose pass-band is centered at 15 GHz. Finally, frequency-selective polarization pulling of SBS amplification was also recently employed in the generation of an advanced modulation format [21].
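The microwave-photonic mapping can be made concrete with a minimal sketch, under the relation stated above: since the detected beat amplitude scales with |E_carrier|·|E_sideband| and the carrier is unaffected by the SBS filter, the RF power gain in dB equals the optical power gain of the sideband in dB. The flat-top gain profile used here is an illustrative stand-in for a measured SBS window centered at 15 GHz, not measured data.

```python
# Mapping of an optical sideband gain profile to the RF transfer of an SBS-based
# microwave-photonic filter. Profile parameters are illustrative assumptions.
import numpy as np

def sideband_gain_db(f_ghz, center=15.0, width=0.7, peak_db=14.0, edge_ghz=0.1):
    """Illustrative flat-top optical power gain of the sideband [dB]."""
    excess = max(abs(f_ghz - center) - width / 2, 0.0) / edge_ghz
    return peak_db * np.exp(-excess ** 2)

for f_rf in [13.5, 14.8, 15.0, 15.2, 16.5]:
    g_opt = sideband_gain_db(f_rf)
    # RF power gain (dB) equals the sideband's optical power gain (dB).
    print(f"RF tone {f_rf:5.1f} GHz -> RF power gain {g_opt:5.1f} dB")
```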
In conclusion, tunable and sharp optical band-pass filters were proposed and demonstrated, based on the insight that has been provided by the vector analysis of SBS in randomly birefringent fibers.

Fig. 3. The direct current modulation waveform used in the spectral broadening of the SBS pump wave.

Fig. 4. Measured PSD of the pump wave, as a function of the offset from its central frequency.

Fig. 5. The generation of the SBS signal wave. (a, b): Schematic spectra of the double-sideband modulated tunable laser. The radio-frequency (RF) modulation waveform is a swept sine wave of frequency Ω_RF in the 2π·13.5 to 2π·16.5 GHz range. Depending on Ω_RF, the upper modulation sideband can fall within the SBS amplification spectral region induced by the pump (a), or outside that region (b). (c): Spectrum of the signal wave following propagation in the FUT and after filtering by a 5 GHz-wide FBG, which retains the upper modulation sideband only. The additional 1 MHz amplitude modulation of the carrier is not shown.