Relationship between Urban Floating Population Distribution and Livability Environment: Evidence from Guangzhou’s Urban District, China
The livability environment is an important aspect of urban sustainable development. The floating population refers to people without local hukou (also called ‘non-hukou migrants’). The floating population distribution is influenced by the livability environment, but few studies have investigated this relationship. In particular, the influence of the social environment on floating population distribution is rarely studied. Therefore, we study 1054 communities in Guangzhou’s urban district to explore the relationship between the livability environment and floating population distribution. The purpose of this article is to study how the livability environment affects floating population distribution. We develop a conceptual framework of the livability environment, which consists of the physical environment, the social environment and life convenience. A cross-sectional dataset of the impact of the livability environment on floating population distribution is developed, covering the proportion of floating population in the community as the dependent variable, eight factors of the livability environment as the explanatory variables, and two factors of architectural characteristics and one factor of location characteristics as the control variables. We use spatial regression models to explore the degree and direction of the influence of the physical environment, the social environment and life convenience on floating population distribution. The results show that the spatial error model is more effective than the ordinary least squares and spatial lag models. Five factors of the livability environment have statistical significance regarding floating population distribution: four social environment factors (proportion of middle- and high-class occupation population, proportion of highly educated people in the population, proportion of rental households, and unemployment rate) and one life convenience factor (work and shopping convenience).
The conclusion has value for understanding how the social environment affects the residential choice of the floating population. This study will help city administrators reasonably guide the residential pattern of the floating population and formulate reasonable management policies, thereby improving the city’s livability, attractiveness and sustainable development.
Introduction
Livability environment is an important aspect of urban sustainable development [1]. In "Transforming our World: The 2030 Agenda for Sustainable Development", 17 goals were set out. We construct a dataset covering the floating population in 1054 communities in Guangzhou's urban district, eight factors of the livability environment, two factors of architectural characteristics and one factor of location characteristics. We use a spatial regression model to explore the degree and direction of the influence of the physical environment, the social environment, and life convenience on floating population distribution. This will hopefully help city administrators to reasonably guide the residential pattern of the floating population and to formulate reasonable management policies to improve the city's livability, attractiveness and sustainable development capacity.
The rest of this paper proceeds as follows. Section 2 presents the conceptual framework of the livability environment, and our research design, indicators, data and research methods. Section 3 analyzes the degree of influence, and the direction, of the livability environment on the distribution of floating population in Guangzhou's urban district. Section 4 discusses the research findings and draws conclusions.
Conceptual Framework of Livability Environment on the Residential Location Choice of the Floating Population
The concept of livability environment is complex and diverse. In different research perspectives, scholars have presented different understandings of the concept. We present a conceptual framework of livability environment from the perspective of residential location choice of the floating population, which accounts for differences within urban communities. The framework includes three aspects: the physical environment, the social environment, and life convenience. Each aspect is evaluated by corresponding index (Figure 1).
Physical Environment
Residents tend to prefer near-positive physical environments (NPPE) and avoid negative physical environments (ANPE) when choosing a residential location [13]. In terms of NPPE, famous landmarks [34], parks [35] and waterfronts [36] have a positive effect on the choice of residential location; as for ANPE, unwanted transportation facilities and municipal facilities have a negative impact on the living environment [14][15][16][37][38][39][40], and residents often want to stay away from such facilities when choosing a location. Unwanted transportation facilities include railways, highways, elevated roads, train stations, coach stations and airports, among others. Unwanted municipal facilities mainly include gas stations, signal transmission towers, funeral homes, substations, high-voltage corridors, garbage disposal sites and sewage treatment plants, among others. These facilities may cause odor, dust, noise, radiation and other pollution to the surrounding environment or have a negative impact on mental health.
Social Environment
Social environment mainly includes occupation, educational background, rental household and unemployment. First, occupation can be denoted by the proportion of middle- and high-class occupations in the population (PMHCOP). In China, occupation classes can be divided into seven categories: (1) management, (2) professionals, (3) clerks and administrative staff, (4) workers in the retail and service sectors, (5) industrial workers, (6) workers in the agricultural sector and (7) unemployed; (1)-(4) can be defined as middle- and high-class occupations. Second, educational background can be denoted by the proportion of highly educated people in the population (PHEP); those with a bachelor's degree or above can be considered highly educated. The percentage of highly educated people among those aged six and above is the indicator of a community's educational background. Third, in terms of rental household (denoted by the proportion of tenants in total households; PRH), Saunders' research shows that the ownership of residential property is the main determinant of social status [41]. Lockwood (2007) shows that the higher the proportion of rental housing in the community, the higher the violent crime rate [42]. Therefore, in theory, the higher the proportion of tenants in the community, the worse its security and social environment. Fourth, the unemployment rate (UR) is an important index to measure the attractiveness and security of a community. Raphael and Winter-Ebmer (2001) show that the unemployment rate is positively correlated with the crime rate. Therefore, communities with a high unemployment rate have relatively high risks of instability and insecurity, and people's income level is low, forming a poor social environment [43].
Life Convenience
Life convenience includes work and shopping convenience (WSC) and social public services convenience (SPSC). Among them, WSC includes office accessibility, public transport accessibility and commercial service accessibility, and the accessibility to office space, subway stations, and stores can be selected for evaluation. SPSC can be evaluated by the accessibility of basic education, medical services, and cultural and physical amenities. Previous studies have shown that the index factors of life convenience can significantly affect residents' residential location choices [44][45][46].
Study Area
Guangzhou is a Chinese megacity. The urban area of Guangzhou is the core and most important part of Guangzhou, covering 379.71 km² with a permanent resident population of 5.96 million (Data of the Sixth Population Census in Guangzhou). Taking Guangzhou City as the research area, the scope is as follows: west to the boundary of Guangzhou City; south to the boundary of Haizhu District; east to Qianjin Sub-district Office, Huangcun Sub-district Office, Xintang Sub-district Office and Longdong Sub-district Office of Tianhe District; and north to Jingxi Sub-district Office, Tonghe Sub-district Office, Huangshi Sub-district Office, Xinshi Sub-district Office, Tangjing Sub-district Office and Songzhou Sub-district Office of Baiyun District. The scope refers to previous research results on the division of functional areas in Guangzhou [13]. From the perspective of administrative division, the urban area of Guangzhou includes Yuexiu District, Liwan District, Haizhu District, most of Tianhe District and the south of Baiyun District. Three communities in the study area have no data, so the research object is 1054 communities. The area can be further divided into the old area, core area and marginal urban district (Figure 2). Guangzhou's CBD is Zhujiang New Town, and the international finance center (IFC) is located at the center of the CBD.
Research Design, Indicator System and Model
This study develops a research framework to analyze the influence direction and degree of the livability environment on the floating population in the process of residential location selection. The analysis process is as follows: first, taking 1054 communities in Guangzhou as the research objects, the spatial distribution and spatial correlation characteristics of the floating population were analyzed. Second, from the physical environment, social environment, life convenience and building and location characteristics, this paper constructs the residential choice model of the floating population in Guangzhou's urban area. Here, floating population is the dependent variable and physical environment, social environment and life convenience are explanatory variables, comprising eight indicators. There are three indicators for building and location characteristics, which are control variables. Third, we choose the appropriate model among ordinary least squares (OLS), the spatial lag model (SLM) and the spatial error model (SEM) to analyze the relationship between floating population and livability environment. This relationship includes significance, influence direction and influence intensity. Finally, the research results are analyzed (Figure 3). The index system of decision-making factors of floating population living choice under livable orientation is constructed, as shown in Table 1.
Among them, PFP is the proportion of the population that is floating, which is a dependent variable of the model and represents the floating population status of the community. NPPE, ANPE, PMHCOP, PHEP, PRH, UR, WSC and SPSC are eight explanatory variables representing livability environment. BAGE, BAREA and DCBD are control variables.
BAGE and BAREA represent the building characteristics of the community. In theory, residents prefer to live in the houses of newer buildings and larger building areas [47][48][49][50]. DCBD is the most important indicator of location characteristics [13]. Location characteristics significantly affect residents' living choices [51].
Data and Data Sources
The data of PFP, PMHCOP, PHEP, PRH, UR, BAGE, and BAREA are derived from the data of the sixth census in Guangzhou. The relevant location data of NPPE, ANPE, WSC, SPSC, and DCBD are drawn according to POI point, line and area data of Baidu Map in 2012. Among them, the location data of distasteful municipal facilities in ANPE also refers to the corresponding current situation map of Guangzhou City Master Plan (2011-2020). Community boundaries are drawn according to the Atlas of Guangzhou Urban Management Community Network Responsibility Division.
Evaluation Method of Community Index Score
The calculation methods of PFP, PMHCOP, PHEP, PRH, UR and DCBD are shown in Table 1. To avoid large dimensional differences in the data, we multiply PFP, PMHCOP, PHEP, PRH and UR by 100. See Wang et al. (2020) [13] for the scoring method of BAGE and BAREA. For the score calculation method of ANPE, see the evaluation method of "Avoid municipal facilities" by Wang et al. (2020) [13]. NPPE, WSC and SPSC are composite indicators; the score assignment methods of their sub-indicators are shown by Wang et al. (2020) [13]. For the compound variables (NPPE, WSC and SPSC), the weights of the indicators are calculated through a factor analysis, and the weighted sum is employed to calculate their scores. Taking NPPE as an example, the NPPE score of a single community can be calculated as follows:

NPPE = ∑_{p=1}^{m} w_p × S_p

In the formula, S_p is the score value of the p-th sub-index of NPPE, m is the number of sub-indexes and w_p is the weight of the p-th sub-index, calculated by factor analysis.
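As an illustrative sketch of this weighted-sum scoring, the snippet below computes composite scores for a few communities. The sub-indicator scores and weights are invented for the example; in the paper, the weights come from the factor analysis:

```python
import numpy as np

# Hypothetical scores of 4 communities on m = 3 NPPE sub-indicators
# (values are illustrative, not the Guangzhou data).
S = np.array([
    [0.8, 0.5, 0.9],
    [0.2, 0.4, 0.1],
    [0.6, 0.7, 0.5],
    [0.3, 0.9, 0.6],
])

# Assumed sub-indicator weights; in the paper these come from a factor analysis.
w = np.array([0.5, 0.3, 0.2])

# Composite NPPE score of each community: the weighted sum over sub-indicators.
nppe = S @ w
print(nppe)
```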
Analysis Method of Spatial Autocorrelation and Spatial Agglomeration of the Floating Population
When choosing a residence, the floating population often considers multiple nearby communities at the same time. Therefore, communities with a similar proportion of floating population may show a spatially agglomerated distribution and hence spatial relevance. The global spatial autocorrelation index (global Moran's I, GMI) is used to measure whether the floating population distribution in Guangzhou's urban district exhibits spatial autocorrelation.
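The global Moran's I statistic can be sketched in a few lines. The 4-area weight matrix and values below are a toy example, not the Guangzhou data:

```python
import numpy as np

def morans_i(x, W):
    """Global Moran's I of values x under spatial weight matrix W."""
    x = np.asarray(x, dtype=float)
    n = x.size
    z = x - x.mean()          # deviations from the mean
    s0 = W.sum()              # sum of all spatial weights
    return (n / s0) * (z @ W @ z) / (z ** 2).sum()

# Toy example: 4 areas on a line with rook contiguity; values increase
# smoothly, so neighbours are similar and Moran's I is positive.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = np.array([1.0, 2.0, 3.0, 4.0])
print(morans_i(x, W))  # positive: similar values cluster in space
```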
Spatial Regression Model
To study the relationship between the distribution of floating population and livability environment in Guangzhou's urban district, we build a model of the floating population's residential choice tendency. The model takes the proportion of floating population in the community as the dependent variable, the eight indicators of livability environment in Table 1 as the explanatory variables (independent variables), and the three indicators of building and location characteristics in Table 1 as the control variables. OLS, SLM and SEM are estimated and compared, and the optimal model is selected to analyze the relationship between the distribution of floating population and livability environment in Guangzhou's urban district.
OLS is a traditional linear regression model that can analyze the linear relationship between the proportion of floating population and the 11 factors. The premise of this model is that the factors of the floating population's residential choice tendency are independent of each other, and it does not consider the spatial location relationship of the communities. The OLS model is expressed as follows:

y_i = X_i β + ε_i

In the above formula, i = 1, 2, …, 1054 indexes the community samples in Guangzhou's urban district; y_i is the dependent variable, the proportion of floating population in the community; X_i is the S-dimensional row vector (s = 1, 2, …, 11) of the choice factors of the floating population, representing the value of the s-th influencing factor variable in the i-th community; β is the S-dimensional column vector of spatial regression coefficients corresponding to these 11 factors of residence choice of the floating population; ε is the error term of the model; and ε_i ~ N(0, δ²I) indicates that the error term obeys a normal distribution with constant variance, where I is the identity matrix.
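A minimal numerical sketch of OLS estimation on synthetic data is given below. The sample size, number of factors and coefficients are illustrative, not the paper's (the real model has n = 1054 communities and 11 factors):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the paper's setting: n = 200 observations, 3 factors.
n = 200
X = rng.normal(size=(n, 3))
beta_true = np.array([0.5, -1.2, 2.0])
y = X @ beta_true + rng.normal(scale=0.1, size=n)

# OLS estimate via least squares on [intercept, factors].
Z = np.column_stack([np.ones(n), X])
beta_hat = np.linalg.lstsq(Z, y, rcond=None)[0]
print(beta_hat)  # intercept near 0, slopes near beta_true
```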
SLM, one form of spatial regression model, considers the influence of the proportion of floating population in a certain community on the proportion of floating population in neighboring communities, that is, the spatial spillover effect. SLM can be expressed as [52,53]:

y_i = ρ ∑_j W_ij y_j + X_i β + ε_i

where ρ is the spatial autoregressive coefficient and W_ij stands for the spatial weight matrix. SEM considers the possible spatial spillover of the error terms in the model. The SEM model is expressed as [52,54]:

y_i = X_i β + φ_i,  φ_i = λ ∑_j W_ij φ_j + ε_i

In the formula, φ is the spatial autocorrelation error term in the model of residence choice of migrants and λ is the spatial autocorrelation coefficient of the error term. Moran's I index was used to study the spatial autocorrelation characteristics of the proportion of floating population among the 1054 communities in the region. The Moran's I index is 0.430952, the p value is 0.0000 and the Z statistic is 29.8726, which shows that the proportion of migrants in Guangzhou's urban district presents significant spatial correlation. That is, the proportion of floating population in a certain community is affected by neighboring communities.
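The spatial spillover effect that SLM encodes can be illustrated through its reduced form, y = (I - ρW)^(-1)(Xβ + ε): a shock to one community propagates to its neighbours. The weight matrix and ρ below are toy values, not estimates from the paper:

```python
import numpy as np

# Row-standardised contiguity weights for 4 areas on a line (toy values).
C = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
W = C / C.sum(axis=1, keepdims=True)

rho = 0.5            # assumed spatial autoregressive coefficient
n = len(W)

# SLM reduced form: y = (I - rho W)^(-1) (X beta + eps).
# Feed a pure unit shock to area 0 and watch it spill over to neighbours.
shock = np.array([1.0, 0.0, 0.0, 0.0])
y = np.linalg.solve(np.eye(n) - rho * W, shock)
print(y)  # response is largest in area 0 and decays with distance
```

The decaying response pattern is exactly the "spatial spillover" the model captures: a change in one community's conditions raises the outcome in its neighbours, with diminishing strength further away.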
Characteristics of Spatial Difference Pattern of the Livability Environment
Eight livability environmental indicators (NPPE, ANPE, PMHCOP, PHEP, PRH, UR, WSC and SPSC) were classified into five categories by Natural Breaks (Jenks) (Figure 6). The higher the score, the better the livability environment. Communities with NPPE scores higher than 5.1534 are mainly distributed along the new central axis of Guangzhou, along the Pearl River and in the northern part of the old city. Communities with NPPE scores below five are mainly distributed in peripheral urban areas. In terms of PMHCOP, the high values are mainly distributed in the core area and the low values in the southern part of the peripheral city. In terms of PHEP, the values in the east are generally higher than those in the west. The communities with PRH greater than 72.2629% are mainly distributed in the old city, and the PRH in the core area is generally lower than 29.3515%. The areas with UR higher than 11.9404 are mainly in the southwest of the old city and the peripheral city. Communities with WSC scores greater than 6.9230 are concentrated in the old city and the area north of the Pearl River in the core area. However, the WSC scores of peripheral urban areas are generally lower than 2.3985. In terms of SPSC, the scores of the old city and the core area are generally higher than those of the peripheral city. In general, the spatial distribution of the eight indicators of livability environment is heterogeneous. There are also differences in distribution patterns among the eight indicators.
Relationship between Urban Floating Population Distribution and Livability Environment in Guangzhou's Urban District
First, a collinearity test is carried out on the 11 impact factors (Table 2). The test results show that the VIF values of all 11 indicators are far lower than 10; the factor with the largest VIF value (BAREA) is only 3.6461, which indicates that there is no obvious collinearity among these 11 factors, and all can be included in the regression model. The relationship between floating population distribution and livability environment was analyzed by three regression methods (OLS, SLM and SEM). The results (Table 3) show that among OLS, SLM and SEM, the R² of the SEM is the largest, reaching 0.769824, and its AIC value is the lowest, at 8048.50. Thus, the fit of the SEM is evidently better than that of the other two models, which again shows that the living choice of the floating population in Guangzhou's urban district has a significant spatial spillover effect. Therefore, the spatial error model is used to construct the decision-making model of floating population living choice in Guangzhou's urban district, and the relationship between floating population distribution and livability environment is then analyzed (Table 4). According to the significance of the regression coefficients, we can judge the relationship between livability environment factors and the distribution of floating population. Table 4 shows that the four indicators of social environment factors (PMHCOP, PHEP, PRH and UR) and WSC have a significant impact on the living choice of the floating population. This shows that within the livability environment, the social environment is the core factor affecting the distribution of floating population, while the physical environment has no significant influence. In life convenience, work and shopping convenience has a significant impact on the distribution of floating population, while social public services convenience has no significant impact.
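The VIF screening step described above can be sketched as follows. The data are synthetic, with one deliberately near-collinear column to show how VIF flags it:

```python
import numpy as np

def vif(X):
    """VIF of each column: regress it on the others; VIF_j = 1 / (1 - R_j^2)."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    out = np.empty(k)
    for j in range(k):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        resid = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
        ss_res = (resid ** 2).sum()
        ss_tot = ((y - y.mean()) ** 2).sum()
        out[j] = ss_tot / ss_res          # equals 1 / (1 - R^2)
    return out

# Synthetic factors: column 2 is nearly collinear with column 0.
rng = np.random.default_rng(1)
a = rng.normal(size=500)
b = rng.normal(size=500)
c = a + 0.1 * rng.normal(size=500)
v = vif(np.column_stack([a, b, c]))
print(v)  # columns 0 and 2 show large VIFs; column 1 stays near 1
```

A common rule of thumb, also used in the paper, is that VIF values below 10 indicate no problematic collinearity.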
The relationship between five significant livability environmental factors and the distribution of floating population is as follows.
In PMHCOP, every 1% increase in the community's proportion of middle- and high-class occupation population will reduce the proportion of floating population by 0.1305%. There is a negative correlation between them. It shows that the level of professional class is generally low in areas where the community's floating population is concentrated. The floating population is at a disadvantage in the competition of professional class.
In PHEP, the community's proportion of highly educated population is negatively correlated with the proportion of floating population. Every 1% increase in the community's proportion of highly educated population will reduce the proportion of floating population in the community by 0.1408%. This shows that the education level of the floating population is generally lower than that of the registered population. It also shows that there are fewer highly educated talents in the floating population concentration areas.
In PRH, every 1% increase in community proportion of rental household will increase the proportion of floating population by 0.4379%. Renting a house is highly positively correlated with the floating population, which shows that the floating population mainly rents a house. Compared with the registered population, the floating population is at a disadvantage in obtaining housing property rights.
In terms of UR, the community's unemployment rate is negatively correlated with the proportion of floating population. For every 1% increase in the community unemployment rate, the proportion of floating population will decrease by 0.9470%. This shows that the employment rate is higher in communities where the floating population gathers. Guangzhou is an employment-oriented immigrant city. The primary reason why people immigrate to Guangzhou from other places and become members of the floating population is that Guangzhou has more employment opportunities. If members of the floating population are unemployed in Guangzhou, there is a high probability that they will move out of Guangzhou. Therefore, the employment rate of the floating population is higher than that of the registered population.
In WSC, the proportion of floating population in the community decreases by 1.0479% when the score of work and shopping convenience increases by one. This shows that the floating population often lives in communities with poor work and shopping convenience. Accessibility to office space, subway stations, and stores in communities with a large proportion of floating population is often poor. This may be because their income level is low and housing in convenient locations is more expensive, so they trade work and shopping convenience for affordability when choosing where to live.
Discussion
This study presents the conceptual framework of livability environment guided by the living choice of the floating population. Compared to previous livability environment frameworks, the framework of this paper considers social environment factors, which are more suitable for explaining and analyzing the residential location choice of the floating population in cities. At present, few studies have analyzed the living choices of the floating population from the perspective of social space. In fact, social space is an important factor that cannot be ignored in the process of housing choice [22]. To some extent, choices of housing location can be regarded as choices of social environment [26,27]. Residents tend to choose to live close to people with similar social attributes [28]. In the future, when studying the residential choice tendency of the floating population in cities, researchers can adopt the theoretical framework proposed in this study, with residential characteristics divided into "Physical Environment, Social Environment and Life Convenience", and select the corresponding indicators for research.
Based on the case study of Guangzhou's urban district, the relationship between the distribution of floating population and livability environment is described. The results show that the social environment is the most important factor in the floating population's choice of residence. That is, the distribution of floating population is most closely related to the social environment but not significantly related to the physical environment. These findings have rarely been mentioned in previous studies. The results of this case study verify the rationality of the theoretical framework constructed in this paper. Researchers of the Neo-Marxist and Structuralist schools hold that understanding of the social environment is an important factor in the process of residential decision-making [23,55]. Cassel and Mendelsohn (1985), who posit the social space unity theory, believe that the choice of residential location has an interactive relationship with social space. This case study verifies these viewpoints from the perspective of the residence choice of the floating population [24].
Notably, this article still has some limitations that need to be improved in the future. First, these livability environmental factors may have spatial heterogeneity effects on the distribution of the floating population. This study does not consider this effect. In the future, geographically weighted regression can be used to further analyze the spatial differences of livability environmental factors. Second, different types of the floating population have different needs for living choices, so in the future, questionnaires or interviews can be used to analyze the characteristics of population differences. Third, using distance from the CBD to represent location characteristics is a way of simplifying assumptions. In the future, more refined methods and more indicators can be used to evaluate location characteristics.
Conclusions
In this study, a conceptual framework is established to analyze the relationship between floating population distribution and livability environment. From the perspective of residential location choice of the floating population, livability environment is divided into three aspects: physical environment, social environment and life convenience. A dataset of influencing factors of floating population's living choice in 1054 communities in Guangzhou city is established. Specifically, we focus on the possible degree of influence, and the direction, of physical environment, social environment and life convenience on the distribution of the floating population.
The research shows that the distribution of the floating population in Guangzhou has obvious spatial differences and spatial agglomerations. In general, the proportion of the floating population in the marginal urban district is the highest, while the proportion of the floating population in core area and old city is generally low. The livability environment in Guangzhou city presents spatial heterogeneity. Altogether, the livability environment in the core area is better, while that in the marginal urban district is relatively poor. There are differences in the spatial distribution pattern of livability environmental factors for eight aspects: NPPE, ANPE, PMHCOP, PHEP, PRH, UR, WSC and SPSC.
In this study, SEM is used to test the relationship between the distribution of floating population and livability environment and the significance of this relationship. The results show that five factors of the livability environment have statistical significance for the distribution of floating population: four social environment factors (PMHCOP, PHEP, PRH and UR) and one life convenience factor (WSC). The relationships of PMHCOP, PHEP, PRH and WSC with the proportion of floating population show that the floating population is concentrated in communities with a poor livability environment, which is consistent with theoretical expectations. Only the relationship between UR and the distribution of floating population is inconsistent with the theoretical expectation. This can be explained by the employment-oriented inflow of the floating population.
Predictive deep learning models for cognitive risk using accessible data
Introduction
Globally, the population is aging, with the number of people age 65 and above reaching 727 million, representing 9.3% of the total population of 7.7 billion in 2020 (1). Japan has the world's highest rate of aging, with its elderly population accounting for 28.6% of its total population in 2020. Dementia, and especially Alzheimer's disease, is a significant challenge in such aging societies. The Cabinet Office predicts that by 2025, around 7 million elderly Japanese will have dementia, accounting for 20% of those age 65 and over (2). Globally, dementia cases are expected to rise to 152 million by 2050 (3). Early detection is crucial as many cases progress significantly before becoming apparent, particularly in the early stages of mild cognitive impairment (MCI), which often goes unnoticed due to its minimal impact on daily or social activities. Identifying MCI early is essential to preventing and halting the progression of dementia.
Over the past few years, various studies have been conducted to detect MCI early. A technology called deep learning has been particularly highlighted and utilized. Deep learning is one of the methods in the field known as machine learning. Essentially, machine learning techniques involve using an algorithm to discover features, rules, or patterns existing in the background of the data collected with regard to a certain event or task, and then using those features or rules to make inferences. Deep learning is an improved method of machine learning based on a technique called neural networks. A characteristic of deep learning is its ability to learn features, rules, or patterns from a large amount of data collected on complex problems, enabling high-performance inference. Conventional machine learning algorithms have difficulty dealing with such a large amount of input information, but one of the deep learning technologies, convolutional neural networks (CNNs) (4), can locally extract image information and convert it into data of a smaller size. For instance, in tasks where the goal is to discern whether an image contains a cat or a dog, a CNN learns to recognize essential patterns such as eyes, ears, and the mouth. This learning process involves repeatedly extracting relevant information, allowing the network to focus only on the data necessary for image recognition tasks. The CNN learns from a dataset designed for the specific task, including images of cats and dogs alongside the correct identification of each. Deep learning utilizes vast amounts of task-related data and correct answers to develop an algorithm capable of high-performance predictions and feature extraction.

The early detection of mild cognitive impairment (MCI) is crucial to preventing the progression of dementia. However, it necessitates that patients voluntarily undergo cognitive function tests, which may be too late if symptoms are only recognized once they become apparent. Recent advances in deep learning have improved model performance, leading to applied research in various predictive problems. Studies attempting to estimate dementia and the risk of MCI based on readily available data are being conducted, with the hope of facilitating the early detection of MCI. The data used for these predictions vary widely, including facial imagery, voice recordings, blood tests, and inertial information during walking. Deep learning models that make predictions based on these data sources have been proposed. This article summarizes recent research efforts to predict the risk of dementia using easily accessible data. As research progresses and more accurate predictions become feasible, simple tests could be incorporated into daily life to monitor one's personal health status and to facilitate an early intervention.
Review
deep learning has advanced significantly, demonstrating human-like or superior performance in areas such as image recognition, text generation, autonomous driving, facial recognition, and AI systems like ChatGPT.
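To make the CNN idea above concrete, here is a minimal, self-contained NumPy sketch (not any of the cited models) of how a small filter slides over an image to extract local features and how pooling shrinks the result into a smaller representation. The kernel values here are hypothetical placeholders; a real CNN learns its filters from training data.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a small kernel over the image, recording one response per position."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Downsample: keep only the strongest response in each size x size block."""
    h, w = fmap.shape[0] // size, fmap.shape[1] // size
    return fmap[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

image = np.random.rand(8, 8)                     # toy 8x8 grayscale "image"
edge_kernel = np.array([[1., -1.], [1., -1.]])   # hypothetical vertical-edge filter
features = max_pool(conv2d(image, edge_kernel))
print(features.shape)                            # (3, 3): smaller than the 8x8 input
```

Stacking many such convolution-and-pooling stages, with learned rather than hand-picked kernels, is what lets a CNN reduce a large image to the compact features relevant to a task.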
Deep learning is increasingly used in medical research, including predicting dementia. Here, studies that use deep learning from various perspectives to detect dementia early are described. Conventionally, dementia is assessed using the Mini-Mental State Examination (MMSE) to evaluate cognitive function (5). In addition, brain MRI scans and biomarker tests are used. However, markers like amyloid-beta require invasive procedures, making them impractical for widespread screening and early detection of dementia (6). This highlights the significant challenge of early detection, as opportunities for testing are limited unless patients proactively seek medical help. Moreover, administering the MMSE and performing an MRI scan are costly and time-consuming, making their use as screening tests impractical. Therefore, recent research has focused on developing more affordable and convenient methods of detecting dementia using deep learning. This approach differs from conventional testing methods by focusing on easily obtainable information, such as facial expressions, voice, basic blood tests, and gait data. The potential of these data types to detect dementia early will be detailed further. The key advantage of these sources is their ease of acquisition. If these prediction models evolve to offer a high level of accuracy, they could enable immediate on-site testing, known as point-of-care testing (PoCT), and these tests could be incorporated into daily life. Here, the potential to use deep learning-based methods of estimation for PoCT to detect dementia is summarized.
Estimation of MCI using facial images
Research has attempted to estimate dementia based on facial video. The field of image recognition, which has particularly advanced as a result of deep learning, encompasses object estimation, facial recognition, facial expression recognition, and detecting human figures in video. Predominantly developed through CNNs, models like AlexNet (7), ResNet (8), and VGG (9) have emerged to extract features from images, alongside object detection models such as Faster R-CNN (10) and YOLO (11) for real-time detection. These technologies are used in studies to estimate cognitive function based on facial videos (12). Prior studies reported younger-looking facial impressions in individuals without dementia (13), suggesting potential facial indicators of cognitive decline. This research focuses on estimating cognitive functions based on facial videos. For the study, videos ranging from 3 to 30 minutes in length were recorded of 34 elderly individuals age 65 and above, including 10 with MCI. Images were extracted from these videos at a rate of 5 frames per second, with 10 frames over 2 seconds forming one set for the model's input. A total of 3,822 data sets were created, with 3,058 sets used for training and 764 sets for evaluation, to solve a binary classification problem of distinguishing between MCI and healthy individuals using deep learning. The study used ResNet, which is based on a CNN, to extract facial structure and motion information from facial videos (Figure 1). ResNet, a deep learning model linking over 50 layers of CNNs, was developed for image recognition tasks and is highly effective at extracting features from two-dimensional spatial information. The model to estimate MCI was created using two instances of ResNet: one as a model to extract spatial features from the face to estimate MCI and the other as a model to extract dynamic features based on facial dynamics to predict MCI. The spatial model randomly selects one image from a set of 10 frames over 2 seconds for input, focusing on
static facial features. The dynamic model generates an optical flow from the same frame set, reflecting facial movements over 2 seconds, which ResNet then uses to extract features. Optical flow (14), represented by a three-dimensional vector for each pixel, is analogous to the RGB structure of images, making ResNet suitable for extracting features from these data. The model ultimately estimates whether an individual has MCI based on the two dynamically and spatially obtained features. The final model had a precision of 0.94, recall of 0.78, accuracy of 0.91, and an F1 score of 0.85. Despite the low recall and concerns over data on a small number of individuals and the balance between MCI and normal data, the ability to determine MCI at a certain level using deep learning represents a significant advance in early detection. Estimation of cognitive functions based on video data, such as this, is also being performed in another study (15) and is an area of growing interest. If MCI can be estimated based on approximately two seconds of video data, this could allow for testing without a significant burden in everyday life or visits to hospitals and care facilities, enabling immediate examinations on-site.
Estimating Alzheimer's disease using speech information
Estimating dementia based on speech information is one of the most extensively studied tasks in the field of deep learning-based estimation of dementia (16). Alzheimer's disease, a type of dementia, initially manifests as language impairments. Focusing on this characteristic, the goal is to estimate the presence of Alzheimer's disease using speech data. Previous studies have reported that Alzheimer's patients tend to pause more frequently between words and speak more slowly than healthy individuals (17). Moreover, Alzheimer's patients are reported to have difficulties in finding appropriate words or expressions to match a sentence (18,19). Deep learning models are used to extract various vocal features from speech data. In order to estimate Alzheimer's disease, two primary features are extracted: features from continuous speech signals and features from speech converted to text to analyze the context and content of conversations. These features are then used for the final estimation task.
In order to extract features from speech signals, studies have used deep learning algorithms that are effective at continuous signal processing (20), such as long short-term memory (21) and recurrent neural networks (RNNs) (22). These algorithms have the capability to internally retain a memory of past inputs, allowing the neural network to maintain information over a certain duration. This capacity enables the extraction of features needed to estimate Alzheimer's disease not just based on a single speech sample but also based on historical data. However, they have limitations in terms of storing information over extended periods, such as tens of minutes.
The second method involves converting speech into textual data and then extracting Alzheimer's disease characteristics from the context and content of the text. This approach estimates Alzheimer's based on the coherence and expressiveness of the text. A drawback is that features unique to speech might be missed. However, unlike with direct extraction of speech features, this approach allows for estimation based on lengthy dialogues that have been converted to text. Recent advances in deep learning for natural language processing, such as the use of the high-performance natural language model BERT (23), have led to proposed methods of estimating Alzheimer's using those technologies (24).
Data used to train and evaluate models come from tasks performed during studies. Primarily, tasks include semantic verbal fluency (25), where subjects list as many items as possible from a category like animals or vegetables within one minute (26,27), a natural speech task involving conversation without direct questions (28), and a picture description task where participants orally describe the content of a picture within a set time (29). Notably, the ADReSS database (30) offers open access to data from these tasks, including voice recordings, transcribed texts, and MMSE scores. Such databases are valuable for developing deep learning models to estimate Alzheimer's based on speech data.
Estimation of MCI using blood test information
One unique area of research to detect dementia early using deep learning involves blood test information (31). This research focuses on the relationship between systemic disorders like arteriosclerosis, which is the result of lifestyle diseases, and cognitive impairments, which include both MCI and severe dementia (32)(33)(34). It also considers other systemic disorders that might affect cognitive function, such as malnutrition (35), anemia (36), lipid metabolism (37), purine metabolism (38), and renal dysfunction (39). These can be detected via basic blood tests obtained during health check-ups. The research attempts to estimate MCI using blood test data, including 23 items such as red and white blood cell counts, hemoglobin levels, hematocrit, and albumin levels, plus age, using a feedforward neural network, a basic form of deep learning, to predict MMSE scores. The input items obtained from the blood tests used are shown in Table 1. This neural network consists of a four-layer structure with intermediate layers, as shown in Figure 2. Each intermediate layer is a neural network with 400 nodes, and the network solves a regression problem that estimates the MMSE in the range of 0 to 30 based on 24 numerical items. Data used to train and evaluate the model were collected from 202 patients (average age: 73.48 ± 13.1 years). All patients received inpatient treatment including rehabilitation and pharmacotherapy for lifestyle-related diseases, with 142 patients having cerebrovascular diseases and 174 patients having at least one lifestyle-related disease. The feedforward neural network was trained and evaluated using the leave-one-out method, which was applied to the blood test results and MMSE scores from the 202 patients. Actual MMSE scores and predicted MMSE scores were correlated (r = 0.85, p < 0.001). The mean absolute error was 2.02. Blood tests, primarily obtained during medical examinations and health check-ups, serve as the main data for this research. A cognitive function estimation model based on blood tests could effectively be utilized as a test to screen for dementia in medical facilities and during regular health check-ups. For instance, when elderly individuals undergo blood tests during a health examination or medical visit, their cognitive function can be estimated using deep learning in no time at all. If MCI or dementia is suspected, a medical facility could then encourage the individual to undergo a more detailed examination or visit an outpatient clinic. This estimation model could be an effective means for early detection of dementia, simply by undergoing a regular medical consultation or health check-up.
Estimation of MCI using inertial information during walking
Compared to the previously described models to estimate MCI, there is another approach that is closer to everyday life, and it has the potential to be used for the early detection of MCI by estimating cognitive decline on the spot in everyday situations. Studies have estimated MCI using inertial sensor data collected by a wearable device when the wearer walks (40,41). In those studies, a small inertial sensor was affixed to the shin of 30 cognitively normal individuals and 30 individuals with MCI, and they were asked to perform a simple task of walking 20 meters, as well as a complex task of walking 20 meters while simultaneously performing cognitive tasks such as subtracting numbers or naming animals. Moreover, subjects were asked to always keep walking while performing the task. The device used for measurement was the Shimmer3 GSR+ Unit (42), which is equipped with a 3-axis accelerometer and a 3-axis gyroscope. Eight pieces of information, including three forms of acceleration, three angular velocities, and the total magnitudes of the signal vectors of both the accelerometer and gyroscope sensors, were used to estimate MCI. A six-layer CNN and three types of RNN were used to estimate MCI, as shown in Figure 3. The eight input signals are time-series data, and the CNN extracts features from the segmented 8×T input information, which it handles in image format. The features extracted by this process are then input into the RNN. Since the RNN has the characteristic of retaining past input information, the features extracted by the CNN from information before the most recent time T are retained, and features that incorporate time-series information are ultimately extracted. Finally, a binary classification of MCI is performed by a feedforward neural network. The leave-one-out method was used for model training and evaluation, achieving an accuracy of 73.33%, a sensitivity of 83.33%, and a specificity of 63.3%. The walking information used in this study was obtained by attaching a measuring device to the shin, which differs from walking
data that can be easily obtained with commonly carried devices such as smartphones or smartwatches. Therefore, this method has not yet reached the point where it can be used for early detection in everyday life as it is.
However, as research and data collection progress, this method could be effectively utilized as a way to detect MCI early, since the sensor is easy to attach and measurement is performed simply by walking, potentially serving as a prompt before visiting a medical facility.
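The preprocessing this walking study implies, forming eight channels (three accelerations, three angular velocities, and the two signal-vector magnitudes) and cutting them into 8×T windows for the CNN, might be sketched as follows. The window length and step here are illustrative assumptions, not the study's actual parameters.

```python
import numpy as np

def signal_magnitude(xyz):
    """Total magnitude of a 3-axis signal vector at each time step."""
    return np.sqrt((xyz ** 2).sum(axis=0))

def make_channels(accel, gyro):
    """Stack 3 accelerations, 3 angular velocities, and both magnitudes -> 8 x N."""
    return np.vstack([accel, gyro, signal_magnitude(accel), signal_magnitude(gyro)])

def segment(channels, T, step):
    """Cut the 8 x N recording into overlapping 8 x T windows for the CNN."""
    n = channels.shape[1]
    return np.stack([channels[:, s:s + T] for s in range(0, n - T + 1, step)])

rng = np.random.default_rng(1)
accel = rng.normal(size=(3, 1000))   # toy 3-axis accelerometer recording
gyro = rng.normal(size=(3, 1000))    # toy 3-axis gyroscope recording
windows = segment(make_channels(accel, gyro), T=128, step=64)
print(windows.shape)                 # (14, 8, 128): fourteen 8 x T windows
```

Each 8×T window would then be treated in image format by the CNN, with the RNN consuming the resulting per-window features in sequence, as the text describes.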
Conclusion
PoCT refers to methods that allow for immediate testing on the spot at the appropriate time. Conventional tests for dementia primarily involve brain imaging with MRI, peripheral biomarkers like amyloid-beta, and the MMSE, which are used for a final diagnosis. These tests require a certain amount of time to conduct, and moreover, they are opportunities that will not arise unless individuals are aware of their symptoms and go to a hospital voluntarily. Due to the inconvenience of such tests, research has been conducted on methods that can estimate MCI using deep learning based on information that can be acquired more easily, without hassle, and without posing a burden. Deep learning has a high level of inferential performance and can learn from complex data, so data measured during events that indirectly reflect the impact of dementia could be used effectively, something that was difficult to achieve in the past. If high-precision estimation of MCI becomes possible based on information that can be obtained relatively easily, such as speech, facial expressions, blood, and gait, deep learning models will likely be incorporated into testing systems for PoCT. For example, estimation of MCI using facial recognition or blood tests could be combined with regular health checkups for the elderly, allowing for effortless estimation of MCI. Moreover, if estimation of MCI can be achieved using diverse data sources, this could lead to more accurate estimates. Furthermore, if estimation of MCI is widely adopted for PoCT, it could easily be configured into a smartphone or web app. Estimation could be performed at medical facilities and also in nursing homes and at home, so this test could be integrated into one's daily life. Including simple tests using deep learning in daily life could allow for immediate detection of abnormalities, leading to the discovery of cognitive decline at an earlier stage compared to conventional methods. If tests are conducted daily and the data collected and used
for research, this could lead to estimates of future changes in cognitive functions, such as one year or five years later, based on an analysis of daily data collected over time.
Figure 1. Deep learning structure to estimate MCI based on image and motion information. Facial videos are divided into still images and motion information using an optical flow. Deep learning models are created for each to extract spatial and dynamic features, which are then used to estimate MCI.
Figure 2. Structure of the deep learning used to estimate the MMSE score based on blood test data. It consists of a feedforward neural network with a four-layer structure.
Figure 3. The structure of the deep learning used to estimate MCI based on inertial information during walking. The CNN handles inertial information in image format and extracts features. The recurrent neural network subsequently extracts features based on those from the past and present, and these are used to perform the final estimation of MCI.
This review essay looks at the relationship between Deborah Tuerkheimer’s Credible (2021) and Meera Deo’s Unequal Profession (2019) in order to make a substantive point about inequality in legal institutions and the methods that are employed in dissecting them. At first glance, the connections between these projects might not seem apparent, although each deals with the inequalities in which its actors in focus are embedded. But both projects go deeper by unveiling institutional inequities that are often in plain sight when we investigate the background frameworks implicated in their production, and they reveal the problematic relationship between everyday discrimination and the systemic biases that justify them. Finally, reading these books together allows us to make an intervention about the methods and credibility of narratives within socio-legal scholarship more generally. In theorizing about legitimacy, we ask how the way in which we are told to look at structures of normativity changes the kinds of inequities we are able to see.
problematic relationship between everyday discrimination and the systemic biases that justify them. This becomes especially crucial as conversations about sexual assault and professional biases both become independently and connectedly crucial for our times.
We start by building a case for this joint reading on three main grounds. First, we motivate the comparison broadly by sketching out the similarities between the projects and highlight how using each project to inform the other in substantive ways could lend new light to understanding their resonance. Particularly, we engage Tuerkheimer's concepts of credibility discount and credibility inflation beyond the context of assault to other kinds of biases (including professional ones) to illustrate how they can be useful. We also suggest that Deo's focus on intersectionality could be central to understanding the narratives that Tuerkheimer presents. Tuerkheimer acknowledges the possibilities for the credibility discount becoming even more powerful when a person occupies more than one marginalized group, and Deo's interlaced "raceXgender" analysis offers a way to theorize that discount. Reading the two books together allows each to inform and extend the other. We suggest that this reading offers important extensions to law and inequality scholarship more broadly.
Further, we argue that Deo's book illustrates that the ideas of the credibility complex, in particular, and epistemic injustices, more generally, apply in the context of peripheral experiences within legal academia. In drawing this connection, we argue that people are trained to create, apply, and enforce the law in a space that itself enacts, reinforces, and replicates the credibility complex. Finally, we argue that reading these books together now could be crucial because of the centrality of these issues for our current times; they offer possibilities for method, theory building, and policy that new feminist futures could offer. Together, at the most abstract level, we hope to unpack larger questions of deliberative justice and the accounts that allow us to access it: what is believable, and from whom, and how can these narratives and their reception help us think differently about the persistence of structural inequity?
One final note: the decision in this review essay to not question the veracity of the stories that Tuerkheimer and Deo report on is intentional and analytically rooted. Deo's book is built on in-depth, systematic interviews with nearly one hundred law professors in the United States. 2 Tuerkheimer's book, which is intended for popular audiences, does not unpack her methodology in the same way, but it combines conventional legal research and scholarship with interviews with a range of respondents whom the book calls "survivors." As she notes, "[m]ost of the stories in this book belong to women whose abusers aren't famous and whose accounts never made headlines" (Tuerkheimer 7). Scholars of implicit bias warn us that the ways in which we observe events reveal something about how we were trained to look.
2. Using surveys and in-depth interviews, Meera Deo follows a targeted sample of ninety-three tenured or tenure-track law professors from member schools accredited by the American Bar Association and the Association of American Law Schools in the United States (excluding historically Black institutions that she found to be analytically distinct from predominantly white schools). Of these, the main sample consisted of sixty-three law professors who were women of color, and three comparison samples of men of color (n = 11), white women (n = 11), and white men (n = 8). A target seed group was varied based on race/ethnicity, gender, institutional ranking, geographic region of employment, tenure status, and employment title/position. Seed samples then resulted in nominations of other participants who were similarly selected based on a range of analytical variation. Although targeted snowball samples such as this are not "representative" in the same way as random samples can be, they are a crucial method to reach vulnerable populations who might not respond to more generic calls for participants.
There certainly exists a reading of this research-and it is a reading that we consistently encounter-where there is disbelief on the presumption of skewed or personal perspective. But how differently might we encounter the same findings were they to be based on frameworks that disciplines have socialized us to believe are "neutral" or "valid"?
We not only choose, as a personal decision, to trust what the women of color in the legal academy tell Deo about their experiences and what survivors tell Tuerkheimer; we argue that not regarding these data as compelling because they do not align with an internalized logic of ideal representative findings reveals the very structural violence that these data record. While we cannot compel the reader to make the same personal choice as we do, we feel inclined to reinforce this orientation for the purposes of our theoretical intervention. Contending with the possibility (or corollary) that these respondents are reliable witnesses of their own realities remains an analytical premise regardless of whether one chooses to agree with it. What would happen, for example, if one were to treat this as credible in the same way that inherent credibility is offered to non-narrative methods? If one were to set aside disbelief and imagine different priors of trust on behalf of these narratives, what follows? The theoretical core of this review essay rests on this subversion, and we urge the reader to reflect on these biases alongside us.
UN-CREDIBLE, UNEQUAL PROFESSION: READING DEO USING CREDIBILITY CONSTRUCTS
Unequal Profession uses the experiences of women law professors of color to make a layered and comparative argument about professional careers, spaces, and the inequalities embedded within them. We start by mentioning its empirical core because it is the tool that Deo wields most centrally to make a grander argument about structural racism and inequity. Her book is the first formal mixed-method study of the law faculty experience that focuses on women of color and includes professors from all stages of their careers. It investigates these empirics from a Critical Race Theory approach to amplify voices that are traditionally underrepresented and marginalized. The latter-that is, the predominance of her theoretical frame (rather than a choice to, say, include race as an additional level of analysis)-is a strategy worthy of attention. Starting with the focus on the peripheral actor, rather than needing to revisit the peripheral actors in her study from the starting point of the "normative" professor (cisgender white male) can be read as a crucial theoretical maneuver. The book does not spend much time discussing this decision, and while there is merit in "doing" rather than "telling," what is not "told" here feels like it would be an important contribution for those looking to read significance from the understated and use it for their own theoretical journeys.
Deo's findings of intersectional biases that "doubly marginalize" are troubling, and they are also revealing in the specific ways in which institutional intersectionality gets operationalized. But in starting from the vantage point of actors who are usually added to research in addition to the normative ideal actors, Deo offers us new and useful insights as we think about ways to change these models. For instance, she finds that classroom confrontations and biases in course evaluations have devastating effects on tenure and promotion for professors who are women of color-something that is often spoken about amongst themselves but seen as unsubstantiated because structures predominantly occupied by white men characterize these descriptions as "just complaining." In relaying these findings beyond these circuits of sight and care, Deo's work helps build structures of alternate possibility both for these women and their allies within law schools seeking to make amends to their structures. By offering systematic narratives that start from the perspective of those forced to the margins, Deo allows them to have new power against others who have traditionally refused to acknowledge their value. Her findings of how inequality is done in justified-credible, as we argue-ways are important because they reveal how women leaving or shadow exiting the legal academic workforce might be more about targeted institutional cruelty that strips them of viable recourse rather than individual choices or incapacity.
Credible, Tuerkheimer's book that draws on her research about cases of sexual abuse, describes the operation of the "credibility complex" in the context of assault, abuse, and harassment. Drawing from court observations and public narratives about high-profile assault cases, she argues that women's credibility, especially in the courtroom, is systematically discounted on account of their gender: "[C]redibility is meted out too sparingly to women, whether cis or trans, whatever their race or socioeconomic status, their sexual orientation or immigration status" (Tuerkheimer 9). She calls this the "credibility discount." Related, but distinct, is that men's credibility is systematically inflated due to their status as men. This idea of the same attribute being differently valorized for men and women is not in itself surprising. As Correll, Benard, and Paik (2007) show, for example, parenthood embroils women in an incessant "motherhood penalty," whereas it offers men a slight boost, possibly reinforcing the stereotypes that male breadwinners are more stable and better workers. But unlike the different assessment in a comparative context, abuse claims give us a chance to regard this differential valuation in a combative context where one's credibility in an adversarial system necessarily impacts the ways in which their opponent is regarded and reinforced.
Tuerkheimer's focus on female accusers allows us to regard the ways in which threats to patriarchy-an invisible norm that constructs legal systems-may seem potent where "male sexual prerogatives are at stake" (15), whereas their denial within patterned systems of inequity might attract less attention and almost feel "natural and intractable." In calling out this background framework of expectation, and specifically naming it, the book allows for new understandings to emerge about "rational" or "normal" events and facts. As an extension, the book suggests that any woman (and, by extension, any member of any marginalized group) alleging abuse makes three claims inherent in such accusation: "This happened. It was wrong. It matters." In response to this three-part suggestion, the credibility discount inflicts its "justified" violence in three parts: disbelief ("this did not happen"); blame ("this wasn't wrong because it was your, the accuser's, fault"); and disregard ("this doesn't matter"). On the flip side, "credibility inflation" gives a credibility "boost" to men, especially powerful men.
Deo's book is rife with examples of these compounded effects of the gendered and raced ways in which women of color have to navigate the academy. She argues that, since the "background framework" of an ideal law professor is one of a "white male professor who scares them" (Deo 60), not being in this position, or not being entitled to the privileges of the position interactionally, has left women of color at a mismatch that has resulted in terse classroom exchanges where they have had to both earn students' respect and feel like their performance at the job was persistently under review (74). As one of Deo's respondents, a female black law professor, explained, she did not have the "privilege" to be able to say to a student: "I don't know, I'm going to have to get back to you on that." Unlike her (white, male) colleagues who were finding ways to "master the material," she had to spend her time doing what they could take for granted-that is, earn her student's respect. Still, this interactional discrediting in the classroom is only one of the threads that Tuerkheimer's theory helps elevate from Deo's data. Across the book, Deo highlights other forms of credibility negotiations and valuations that disadvantage women: from evaluations after the semester, to peers who hepeat, whitesplain, and mansplain in ways that threaten the space that women of color can claim as their own in the academy.
An interconnected second thread in Tuerkheimer's book is also a grander moral and jurisprudential claim that follows this negotiation of credibility and power in interactions. Tuerkheimer builds on the work of Miranda Fricker (2007) and other philosophers on epistemic injustice and argues implicitly that credibility discounts and inflation are distinct moral wrongs. This implicit moral claim draws from a more explicit one made in an earlier article by Tuerkheimer (2017), which draws on Fricker to argue that the credibility complex in the law is an example of epistemic injustice, as we have suggested above. 3 Epistemic injustice is, most broadly understood, a wrong to someone in their capacity as a knower. To be a giver of knowledge, or to be one who knows, is "a capacity essential to human value" (Fricker 2007, 5). As an extension, to refuse to accept someone as a person who is capable of knowing things and of communicating that knowledge is to dehumanize them. Someone who experiences epistemic injustice is "degraded qua knower, and they are symbolically degraded qua human. ... [W]hat a person suffers from is not simply the epistemic wrong in itself, but the meaning of being treated like that ... [T]he dimension of degradation qua human being is not simply symbolic: rather, it is a literal part of the core epistemic insult" (44-45).
The epistemic injustice that Tuerkheimer recounts is primarily what Fricker (2007) describes as "testimonial injustice": the injustice that occurs when a member of a group that is structurally less powerful has her testimony devalued due to her membership in that group. For instance, statements made by those seen as having less legitimacy across contexts-for example, those with assumed incapacities or children-may be less likely to be found credible, and the gauging of credibility itself might rest on prejudiced grounds (for example, in the evaluation of a junior colleague or of someone whose political position runs contrary to one's priors). Fricker calls this the "identity-prejudicial credibility deficit": "The speaker sustains such a testimonial injustice if and only if she receives a credibility deficit owing to identity prejudice in the hearer" (28). Fricker (2007) also discusses hermeneutical injustice, which is predicated on unequal participation in the construction of meaning. 4 Hermeneutical injustice, as defined by Fricker, occurs when a person cannot understand their own experience because they have been excluded from the societal process that creates the categories, ideas, and concepts that would seem to govern it. As she describes it, hermeneutical injustice is "the injustice of having some significant area of one's social experience obscured from collective understanding owing to hermeneutical marginalization" (158). Fricker's example is of women who experienced sexual harassment before there was a name or concept for it as a legal category. 5 But it can as easily be extended to other kinds of categories of exclusion that preceded the social movements that might have offered frameworks of reference within which to claim rights (for example, invisible levies of violence against queer bodies before an articulated movement for the rights of LGBTQIA+ persons came under the direct purview of mainstream equality jurisprudence).
Here again, the extension of credibility deficiency and the resultant systemic stratification has resonance in Deo's description of the legal academy. For instance, Deo's respondents relay accounts of the tenure and promotion process that reveal the ways in which institutions consistently disregard women of color as producers of knowledge. Some of this has to do with devaluing priorities that might be important to these women-for example, when faculty members do not treat scholarship "involving the interaction of law with race, gender, sexual orientation, socio-economic status, and other identity related areas" as "real scholarship" (Deo 89). One respondent, Armida, for example, reveals having to "fight the perception of 'assumed incompetence' not only from students, but also from colleagues who discount her work, announcing her belief that 'because I write on diversity issues, somehow it's not scholarly.'" In a similar vein, Deo's examples show that, across contexts, "normative legal scholarship . . . tends to be valued above identity-based work" (89). But these examples, and, in particular, Armida's account that this attitude results in her faculty "diminishing the work that [she can] do" (89), unearth not just a personal narrative of disadvantage but also a more robust insight into the academic institutions that normalize the rejection of anything that threatens their own viability.
The way in which this situation too may be read as epistemic injustice is an interesting question. It might be that the faculty members who discount this scholarship do so because they do not believe what the scholarship asserts, but this is not necessarily the case. It might also be that they discount the work as scholarship-that is, they do not accept it as knowledge itself. Put another way, testimonial injustice usually takes the form of something like this: a speaker could say "X," and the hearer could internalize it as "I do not believe X, because you are a member of a group about which I have prejudicial views." This is straightforward testimonial injustice, as defined by Fricker (2007), and it could be what Deo is describing. Or (and this seems more likely) Deo could be describing a situation in which the hearer does not necessarily disbelieve the statement itself, but does not believe that the statement is worthwhile. This is an example of discounting speech, but not because of reduced credibility about the content of the claim. Rather, this is discounting speech because of the reduced credibility of a claim implicit in the statement "X"-a speaker, for example, could say "X, and X is valuable knowledge," and the hearer could internalize it (even if they do not say it) as "I believe X, but I do not believe that X is valuable knowledge." A claim about a statement's value as knowledge is not necessarily implicit in all speech, but it is implicit in an article that a scholar chooses to write and publish and that the scholar uses as part of her tenure file. It is the valorization of this statement, and the ways in which it is ceded as work of worth, that deserves our deeper consideration.
4. Fricker's definition distinguishes hermeneutical injustice from other kinds of injustice by clarifying that those involved are not culpable despite the injustice being structural, which Mills expertly pushes back against within the context of race.
5. This connection is made implicitly in the text under study by Tuerkheimer and explicitly in Tuerkheimer 2017.
This framework also allows us to extend this line of thinking about the periphery to other kinds of inequalities embedded in legal education research and policy. Allen, Jackson, and Harris (2018), for example, argue that there is a "pink ghetto pipeline" that channels the expectations that circumscribe women's experiences in the academy, relegating them to essentially feminized labor and expectations of work. In a more immediate context, López (2021) shows how seemingly "legitimate" strains or tracks within the academy-a variation that Deo's book does not focus on because it is about inequalities in tenure-tracked careers-are still raced, classed, and gendered within law schools. López argues that, if legal institutions are committed to an anti-racist agenda-as more and more law schools say they are-they need to think more critically about the work of these tracks and their capacity to build inequalities within structures of hierarchy. Similarly, research that centers the perspectives of actors who experience being on the periphery-for example, international students (Ballakrishnen and Silver 2019) or students with mental health issues (Young 2021)-or during certain times of crisis-for example, the global pandemic (Deo 2020)-has illuminated the kinds of ways in which persistent inequalities are entrenched into what can otherwise seem neutral or fair.
When seen through the lens of the credibility of the actors, however, these data offer an even more nuanced perspective of what is at stake. What accounts for credible disadvantage in law school, and how that disadvantage is responded to by peers and institutions alike, can have important implications for those experiencing it as well as for those arguing for change to these models. Thinking of identity and solidarity building from the background framework of credibility offers us another chance to review these structures critically, and it might help with building a certain fabric of "credibility" with the audiences in question. Altogether, the important takeaway is that this construction is one that highlights epistemic injustice-because it is couched as necessary/useful/fair when seen through a certain lens-and, then, when argued to be unfair or discriminatory, the critique is dismissed as radical or not worth engagement, at least in part because of the identity of the person making the critique. This framework offers us a new way to think about these data and policy offerings with theoretical rigor that might ironically offer, for lack of a better term, a neutral-and, therefore, just-perspective.
LAW AND INTERSECTIONAL INEQUALITY: READING CREDIBLE THROUGH A RACEXGENDER LENS
Tuerkheimer's book provides numerous examples of credibility discounting and credibility inflation and shows how the law itself incorporates the credibility complex.
Sometimes that discounting is in the explicit substance of the law. For example, as Tuerkheimer explains, many states' rape laws still include either a physical or verbal "resistance requirement": absent physical or verbal resistance to sex, a woman is assumed to have consented. Some states, as well as the federal government, allow "cautionary" jury instructions in rape cases, in which judges may warn jurors to "evaluate the complainant's testimony with extra suspicion" (Tuerkheimer 2017, 93).
Credibility discounting and inflation come into play, even when they are not written into the law, when participants in the legal system exercise judgment. Drawing from her own courtroom observations as well as a range of secondary sources including news reports, law review articles, and empirical studies, Tuerkheimer shows that law enforcement officers designate sexual misconduct accusations as "unfounded" far out of proportion with the actual rate of false reports. Even when accusations are not deemed unfounded and some investigation is pursued, officers tend to overlook potential corroboration because they have already discounted the credibility of the accusers and elevated the credibility of the accused (81-82). Prosecutors refuse to prosecute because they do not believe the accusers (85). When prosecutors do prosecute, juries discount accusers' credibility and refuse to convict (74). And judges use their discretion to permit sexual history evidence (125-26) and to overturn convictions (115).
Credibility, according to Tuerkheimer's account, is "sparingly meted out" to all women, irrespective of class, sexual orientation, or immigration status (13), but this is not to club all kinds of navigation of this discount. She concedes that, just as there are no female prototypes, there is no singular experience of the credibility discount (13). In particular, she suggests that race and gender are not "additive" in simple or linear fashion: "Black women are not simply subordinated to a greater degree than white women; they are also differently subordinated" (18). The stories that she reports throughout the book support this observation. Yet the very language of "discounting" forces a linear notion of disbelief. She writes, for example, that "[w]hen a Black woman comes forward : : : the discount is at its steepest" (19). Implicit in Tuerkheimer's primarily qualitative account is a quantitative concept that seems to be on a single axis: whether someone is believed "more" or "less" and whether a "discount" is applied to a quantity of belief.
The idea of a "steeper" discount or "more" of a discount can be critically important. For example, quantity or magnitude of belief is relevant to evidentiary considerations, as Tuerkheimer (2017) discusses. But Deo's notion of raceXgender-in line with a larger tradition of intersectional scholarship-demands a move beyond the idea of a single value growing or shrinking or even a more rapid progression of discrimination or disbelief. Rather, as elaborated by Hutchinson (2001), intrinsic in the idea of intersectionality is the idea of more than two dimensions. Tuerkheimer begins to capture this when she writes that "the three discounting mechanisms-distrust, blame, disregard-[are] brought to bear with special vengeance on Black women" (19; emphasis added). The way in which women of color's knowledge is treated is different. This difference is described to some degree by both Deo's and Tuerkheimer's qualitative work, but to do full justice to the coordinates of this difference, understanding through an intersectional lens-such as Deo's raceXgender-offers crucial value.
Scholars have noted the ways in which invisibility of identity scholarship, especially for those who navigate multiple intersectional positionalities, can perform different kinds of structural violence. Collins (2017), for example, describes the devaluing of intersectionality, and, specifically, of identity-based scholarship, as "epistemic suppression." People who mischaracterize and misunderstand identity politics and intersectionality as "individualistic and nonstructural" and then "criticize[] intersectionality's emphasis on identity as problematic" are, as Collins argues, engaged in epistemic violence because they are refusing to engage in a linguistic exchange due to persistent and reliable ignorance (119-20). One example of such injustice that is becoming increasingly relevant in interactional contexts is pronoun usage. Knowing, for example, someone's preferred pronouns but not choosing to use them might feel like a language slip at one level, or an exaggerated identity consciousness not worthy of attention, but, to those whose preferences are not being valued, such an exchange could feel like violence. Collins describes this in the context of scholars who work on intersectionality; it equally applies to those who do not work directly in the area but who refuse to acknowledge identity-based scholarship as valuable. But these limitations on the inclusion of intersectionality into analysis might lie not so much with intersectionality as with "inclusion" (Ballakrishnen 2021). In other words, if we started from the periphery as a point of entry rather than comparison, we would have a mode of analysis where peripheral identities are not valued within theoretical infrastructures that code them as "problematic" or "individualistic" but, rather, within a framework in which their difference from normative models is expected and central to their valence.
In this way too, Deo's reading can offer theoretical extensions to many of the complicated narratives and priors inherent in Tuerkheimer's claims. What kinds of exclusion might be reinforced while doing certain kinds of inclusion? If we speak about race or class or gender as particular and think that inclusion is done by such acknowledgment, who does that inclusion serve and who does it sever? For instance, physical ability and affect are central to the thesis in Credible, but the theory in the book does not address the impact of these claims on, for example, the desirability paradox of the disabled. 6 Similarly, trans women are not mentioned except in passing, but given the ways in which bodies and their distinctions are primed within normative logics, this credibility matrix is likely to be further complicated for them. Without suggesting how a theory about assumptions might implicate choice making by those who are least likely to be seen as having agency, the full contours of how we understand the epistemic justice that Tuerkheimer sets up are hard to fully flesh out. Is this justice meted out by mentioning different identities, or does the theory itself need new points of intervention to be best served?
Similarly, readers of Credible are given traces of how race/class/sexual identity matters to interpersonal affect and interaction, but we are not shown how it matters in different ways. Assuming that all kinds of deviance from what is seen as "normal"-that is, the site from which epistemic injustice is produced and justified-are the same is problematic because it can club anormativity into a singular experience, thereby producing a different sort of injustice than the one the book illustrates. At the same time, not all deviations can be predicted or recorded. Again, this is where theory could help offer recourse. Instead of thinking of inclusion as an additive construct-where we extend normative understandings to new sites-if we were to start from the perspective of the actors who are most vulnerable in these encounters, as intersectionality urges us, it might subvert the power that normative epistemologies hold.
6. The disability studies literature has long reported the ways in which the normative logics of disability are grounded in an able-body bias that does not take into consideration the range of ways in which health, well-being, and satisfaction might be coded on scales of alterity. For the initial definition, see Albrecht and Devlieger 1999. For an elucidation on critical disability approaches, including their intersections with race and class, and their implications for legal systems, especially in the global North, see Morgan 2020.
Deo's lens of raceXgender could reinforce the theoretical core of Credible. As it stands now, even if raced and sexualized politics of the body are mentioned, Credible does not have a central theoretical logic that allows the reader to ascertain how and in what ways certain victims are seen as more deserving of blame than others. Women, for example, are assumed to invite whatever sexual advances come their way (Tuerkheimer 131). But while Tuerkheimer's book is consistently clear about the difference in credibility structures for men and women, it is less clear how distinctions between different kinds of women might matter and demarcate experience. The author, for instance, clarifies that there is a hierarchy in place for how victims are disregarded and that the "care gap" is a real disparity between "inadequate regard for survivors and excessive regard for offenders" (136). While this is certainly compelling, the reminder that the care gap "mirrors social hierarchies while covertly bolstering them" (137) is what deserves more than a passing mention, because it is true that "care is distributed along the lines of power and that marginalized accusers are the ones most readily to be dismissed"-if they can, at all, find the spaces to even be heard. Disgrace, disregard, and dignity, which are important themes in this book, are similarly raced and classed, both from the perspective of the person who has to endure and of the witness who has to judge the limits and capacities of such endurance. But without tools to analyze the caste-ist compartments within which the "care gap," as Tuerkheimer calls it, grows, it is harder to make sense of the impact of these inequities. In contrast, analyzing their lived embodiment from an intersectional raceXgender(Xability) lens might have further complicated the possibilities for the epistemic injustice inherent in these structures, which is what we turn to in the next section.
EPISTEMIC JUSTICE AND THE LEGAL ACADEMY: UNDERSTANDING THE PERIPHERY
Many of the stories that Tuerkheimer recounts in service of her argument are familiar: Harvey Weinstein's repeated assaults and his ultimate conviction and Brett Kavanaugh's Supreme Court hearing are striking case studies that demonstrate what the book refers to as the credibility complex. In showing how credibility discounting and inflation of narrative happens across audiences, Tuerkheimer goes beyond fact in recounting to make an implicit moral claim, set up in the first section of this review essay-namely, that the credibility discounting that women experience is a distinct moral wrong.
Tuerkheimer (2017), following Katharine Jenkins (2017), also identifies "rape myths" as an example of hermeneutical injustice and its sometimes complicated operationalization. Although one can know at a conscious level that violence is happening, reinforced invisibility may gaslight one into not having full clarity about that knowledge, even to oneself. Tuerkheimer gives the example of a woman who, after being raped by her boyfriend, was asked questions by police officers such as "what were you wearing?" and "how much did you have to drink?" One officer even chided her by saying: "don't mix alcohol with beauty." In this account, even "rationally" knowing that the rape was not her fault could not compensate for the violence of disorientation that she also had to endure. At some level, the failure of her boyfriend, the police, and many of her friends to understand the rape as a rape made her blame or question herself. These examples reveal both Fricker's (2007) characterization of hermeneutical injustice and the implications of not acknowledging knowledge of experience beyond accepted scripts.
As we started to explain in the previous section, this line of research showing how hard sexual misconduct is to prove makes it a frustrating paradigm within which to seek any kind of justice. But this is not to say that the injustice is not affectively experienced by the individual. As Mason (2011, 297) points out, women who experienced what is now called sexual harassment may not have had that name for it before the term was coined, but they did know that their experience was "harmful to their well-being," even if it was not seen that way by their environments. It is also this dissonance that perhaps produced the social movements that coded the valence of these terms within law: "[I]t was precisely women's interpretations of their treatment as wrongful and unjust that fueled the resistance movement that was responsible for naming sexual harassment" (298). We return to the idea of non-dominant hermeneutical communities later in this review essay.
The epistemic injustices that Tuerkheimer describes are injustices imposed in large part by actors within the legal system-lawmakers, prosecutors, defense lawyers, and judges. These people are disparate in many ways, but almost all share a common core training: they graduated from a US law school. While she does not identify it as such, Deo's work, similar to the research by Cooke (2019), Donnelly (2018), and Hänel (2020) in other contexts, documents epistemic injustice and a strain of hermeneutical violence against women of color within the US legal academy. These sites are important because law schools, beyond being academic institutions, are feeder sites for those who create and populate what becomes law in the lived world. Paying attention to this recursivity offers new dimensions of violence: the law schools in which Deo observes injustices are the very sites that socialize students to make the decisions that Tuerkheimer describes. Seen this way, law's inclination to systematically discount the credibility of those structurally less powerful has a very specific genesis. Thus, Deo's work is not just an account of individual experience of inequality, but it also casts light on the creation of larger institutional norms that cement epistemic injustice and violence within the law more generally.
We are not alone in drawing attention to the structural implications and possible extensions for epistemic justice. While Fricker's account of epistemic injustice has attained prominence and could in theory extend to different kinds of marginalization, it is other work (both prior and subsequent) that specifically focuses on the intersectional experiences produced by epistemic injustice. Black feminist thinkers and other feminists of color have long identified and addressed epistemic injustice and violence (McKinnon 2016, 438-39). Particularly key to this analysis is Kristie Dotson's (2011, 238) description of silencing as epistemic violence: "[A] refusal . . . of an audience to communicatively reciprocate a linguistic exchange owing to pernicious ignorance," where "pernicious ignorance" is a defined term that describes ignorance that is harmful and stems from a "predictable epistemic gap in cognitive resources." Dotson (2011) describes a particular type of silencing: testimonial quieting. Here, "an audience fails to identify a speaker as a knower." Dotson illustrates testimonial quieting with the work of Collins (2000): "[Collins] claims that by virtue of her being a US black woman she will systematically be undervalued as a knower. . . . To undervalue a black woman speaker is to take her status as a knower to be less than plausible. . . . [W]hat is important about Collins's analysis is her understanding of black women as belonging to an objectified social group, which hinders them from being perceived as knowers" (Dotson 2011, 242). Just as we take Deo's respondents and their accounts as a reflection of the structures within which they are embedded rather than just their perception, we are similarly inclined to use Dotson's experience to extrapolate to the reality of the structures she has had to navigate.
Similarly, Deo's accounts offer insights into how women of color in the legal academy are, in Dotson's words, "systemically undervalued as knowers" in all aspects of their academic positions. In this way too, the women with whom Deo speaks describe epistemic injustice and violence in their service, scholarship, and teaching.
For instance, with respect to service, Deo's interviewees describe how their contributions in faculty meetings were ignored or disregarded, but when men or white faculty made the same point, the comment was elevated and praised (44-46). The women of color were not treated as valid sources of knowledge or insight and thus were disrespected as knowers. And, as we argue over the course of this review essay, Deo's account suggests the ways in which this discounting or discrediting has been informed and reinforced by the prejudicial views of the hearer. 7 But beyond biased views that might have produced a straightforward epistemic injustice of discounting the views of minoritized actors, these structures might produce environments where their views, over time, are silenced altogether. In turn, this may be best understood as an example of Dotson's testimonial quieting, in which the audience-here, the other faculty members-fails to identify the woman of color as a knower because she is a woman of color. And since her "status as knower [is] less than plausible" from the biased perspective of her peers, there might be a quieting of her experience over time that reinforces her exclusion within these environments.
There are other ways in which structures reinforce individual experiences of inequality. In Deo's accounts, even as women of color have their views dismissed and ignored in faculty meetings, they are asked to take on disproportionate amounts of service work, including both formal and informal student mentoring and committee work. As Deo explains, this service work is often directly related to their identities, including "improving student or faculty diversity, or bringing a unique perspective to virtually any group, as one of few people of color at their schools" (88). The women whom Deo interviews often feel they cannot say no to these requests. These additional service activities are not rewarded; rather, they take time away from scholarship and can be emotionally draining. But what follows could also be that women might be called upon to do translation-as Deo does in her book-to tell audiences why the labor they perform is unjust. This kind of translation between one's experiences and the need to make visible the ways in which it is unjust is a kind of "epistemic exploitation," as Nora Berenstain (2016, 570) offers, "when privileged persons compel marginalized persons to produce an education or explanation about the nature of the oppression they face." But these kinds of exploitations and injustices are not only top-down accounts of credibility discounting; they are also reinforced by actors who structurally might be seen as having less power. Paying attention to these reinforcements allows us to see the ways in which exclusionary structures are reinforced by everyday logics of "legitimate" inequality. For instance, the professors whom Deo interviews seem to experience epistemic injustice not only from other faculty but also from students, who, as Deo shows, disrespect these faculty members as knowers because the faculty members are women of color.
Students "challeng[e] . . . knowledge" and "belie[ve] that the woman of color in front of the room is unqualified to teach them" (Deo 63, 66). Students ask tangential or irrelevant questions in the classroom (62). They take openly racist actions, such as producing a noose during a criminal law class taught by a Black woman (64). They openly challenge the professors' authority, asking, for example, "have you ever taught before?" (65). One professor whom Deo interviewed described being physically threatened by a student who came to class late. As Deo explains, "their disrespect stems from intersectional raceXgender bias: the students' belief that a woman of color in front of the room is unqualified to teach them" (66).
Teaching evaluations similarly reflect and enact epistemic injustice, as they reflect some students' views that women of color are not qualified to teach law classes. The evaluations are "blatantly discriminatory in an intersectional raceXgender way" (Deo 70). One woman of color described the evaluations as "more microaggressions than blatant racist comments" (70). Again, some students refuse to accept the woman of color who is teaching them as a person who is a "knower." Teaching evaluations are used (sometimes pretextually) as a basis for determining compensation, promotion, and tenure, so such evaluations can have serious professional consequences (86-87). But even if the evaluations were for nothing other than the professor's own personal use, these comments and views treat the professor as one who cannot know or impart knowledge because she is a woman of color, and, thus, the comments enact epistemic injustice. As Berenstain (2016, 574) explains, "the oppressed [are kept] busy doing the oppressor's work." Epistemic exploitation interacts with other forms of epistemic injustice to create additional costs for the people who experience it. But whether this is seen as problematic says at least as much about the priors of the viewer as it does about the site itself.
WHOSE STORY MATTERS? AUDIENCE, POSITIONALITY, METHODS
Viewership brings us to another important consideration as we deliberate on inequality: audience and voice. In recent research about the illusion of homophily, Douds (2021) argues that what looks, from the outside, like a harmonious environment might in fact be a co-constructed contract that selectively recognizes inequalities of choice. Douds's specific example of the "diversity contract"-which examines a racially diverse, but economically homogenous, suburb that seems very assimilated at first glance but which, on closer examination, reveals itself to be a local context where all actors have agreed to perform a certain representation of diversity to themselves and their viewers-has extensions for our argument here. On this powerful insight, material conditions are not absolute factors but are instead conditions that are co-created by actors with power in relation to their surroundings. Similar to the diversity contracts that Douds observes, one could argue that law schools and legal institutions have their own local negotiations that give an outward appearance of equity while housing more fraught internal inequities. Moreover, this appearance could benefit the internal actors-especially those who already feel legitimate in these surroundings-who want to trust that they have achieved cultures of equity.
The legitimacy of narrative is predicated on the power of individual actors (for example, often "ideal actors" who are responsible for constructing the very narrative to which they respond), their relationship to a given cause (for example, who speaks and on whose behalf), and their capacity for bias following such proximity (the more proximate, the less likely one is to have an "agenda"). Minorities, in contrast, are often seen as acting out of bias or agenda because they are representing views that threaten the status quo. This example of credibility questioning from the perspective of a trans woman is illustrative of the distinct "before and after" ways in which the reception of a person transforms when there is a change in identity:

When I presented myself as an educated straight white man, showing all indications of privilege, I could easily be accused of inexperience or just plain being wrong, but I don't recall ever being accused of having an agenda. Whenever I advocated for myself, I was generally received as a reasonable person making a reasonable request. Now, as a visibly trans and queer individual, I feel like my voice is inherently suspect when I talk about certain things that are important to me, particularly the inclusion of sex and gender minorities in the church. I feel like cisgender, heterosexual folks have an advantage over me in credibility, even when talking about the experiences of the LGBTQ+ community. At times, this means that other people are granted or assume they have more credibility than I do when talking about my own identity and experience. Fortunately, there are many allies who use their gift of credibility to amplify LGBTQ+ voices, but there are also people who use it against us. (Compton 2019; emphasis in original) 8

Compton's notes on positionality in flux, and the relative credibility given to the same actor in different roles, are striking, as are the implications this might have for allies.
Still, while transition might most effectively illustrate the fungibility of roles within institutions, it is certainly not the only case where this form of inequality can be observed. Normative or ideal actors acting out of ideas that support themselves or their homogenous peers typically are not viewed with similar suspicion because their ideas are often already reflected within the logics of the institutions they are challenging. Thus, they are seen as "neutral," whereas minority actors can rarely escape the identities on whose behalf they are speaking, which then consistently calls their motivations and positionality into question. The task before us is to deliberately unpack the normal, the mundane, and the everyday so that we can reflect on what underpins these ideas of reflective equity.

8. Regardless, it is important to note that, while there might be some experiential differences in credibility following transition, the experience of trans persons, even when they move into more "advantaged" gender identities, is rife with inequality. On the experience of trans men in the workplace, see Schilt 2006.
Deo and Tuerkheimer do a similar kind of unpacking; they lay out the ways in which, in plain sight, legal institutions, just by existing, confirm narratives of ideal actors with power and alienate actors at the periphery. Since no acts need to be performed in order to enact inequality, those who question it bear an additional burden of disproving what has been taken for granted and run the risk of being seen as having an agenda (which they do!) and therefore as biased. This commitment to neutrality of voice, without recognizing the ways in which neutrality is a racialized, gendered norm, is the most important takeaway from the two projects: it highlights the ways in which we think about "useful" data and the implicit biases embedded within the many audiences that might receive it over time.
Variations in research audiences (for example, academic, press), their disciplinary priors, and intent (for example, policy versus theory) could all have important implications for how we think about the usefulness of research and the viability of its method. For example, the reception that Deo's book has garnered over the last few years is telling of the very structures about which it tries to theorize. The many talks and symposia that have featured Deo's research have revealed the varied ways in which women of color (her primary audience) have found solace in the representation and its capacity for building community. Seen this way, the exchange that Lawsky recalls at the start of this review essay is telling: male actors with authority (and others) who reject this research, who "don't find it credible," or who even find it threatening are revealing, with their "non-buying," a certain proof of concept. Had Deo written a book that these men would have found credible, it would narratively have been a different book. But, in doing so, it would have also performed a different kind of work in the world.
Similarly, Tuerkheimer might have written differently and with a varied valence (as she has in other work facing other audiences-for example, Tuerkheimer 2017) had she written for academic exchange rather than public reach. We highlight this positionality to make the case that whom one's research is facing and what it can do while facing its audience is a choice every author makes, and it is one that can be agoraphobic in its decision making (Ballakrishnen 2021). Still, what these books reveal is that the choices that authors make may have different costs and valences at different stages. As we highlight here, books might have limitations in theory or empirics-the task of a book is not to be without blemish. But it is exactly the lacunae in literature that allow new work to take shape in response and reaction. There might be limits, but, alongside them, there might also be important triggers to bring about different kinds of conversations and tools to help build movements that are not easily revealed at an initial reading. Allowing for that generosity of engagement with research feels necessary to highlight as we read these books together and offer interventions of possible method.
Note too that the narrator also has a position. The idea of "unreliable narrators" and the ways in which these narrators conceptualize and "other" their surroundings is a foundational concept in critical fiction and postmodern theory (D'hoker and Martens 2018). Even so, the doctrine of positionality makes us consider whether an unreliable author might actually be a function of the audience and environment rather than the individual. That is, it is not that the author is unreliable but, rather, that the author may be speaking on behalf of an experience so peripheral that the reliability of the narrative is not easy to access by a majority of its audience of interest, especially when the author might be seen as representative of the very population they are attempting to study. Instead, the possibility that this critique might be just that-critical-might well be at the root of the hesitation to easily accept these narratives and to brand them unreliable instead. In fact, more recent theory makes the case that, especially as we think across new and cross-cultural contexts, we need to contend with the fact that there are spaces where unreliable narration is all that can be expected in an unreliable world (Jedličková 2018). Further, beyond individual identity in a particular environment interacting with a given audience, the reliability of the narrator could also be impacted by temporality (that is, time periods or events that we accord value to) and spatiality (that is, unreliability attributed as a function of a specific culture or nation).
Nonetheless, acknowledging positionality is not without its consequences or costs. Just as we know biography is crucial for theorizing, it also follows that theorizing on behalf of others who do not share one's positionality could have its own politics of representation (Ballakrishnen 2021). But we still maintain the importance that this methodological framework can have for theory building, especially as it applies to constructions of reliability and credibility. Although social scientific and legal methods call for reliability as the root of legitimate process, affective method that calls for positionality could complicate our understandings of what unreliability even is, especially as it pertains to the call for "objective" research that follows legitimate "process" (Nünning 2015;Ballakrishnen, forthcoming).
WHY NOW? CURRENT URGENCY AND LAW AND SOCIETY EXTENSIONS
The implicit relationships between the two central books in this review essay, as well as the strains of scholarship we suggest they implicate, are urgent to law and society scholars for several reasons. As a field rooted in the periphery, locating what we think of as valuable or credible has particular purpose. "Valuable" is a claim about what is worthy of being included in societal meaning, and both inclusion and disregard feel illustrative of Fricker's (2007) hermeneutical injustice. A linear operationalization of this injustice is that excluding people within institutions who, due to their marginality, are assumed to not have societal meaning (in the case of Deo's book, minority professors; in Tuerkheimer's research, respondents with less credibility because of their interactional status) loses important perspectives along with them. As Dotson would suggest, in not comprehensively engaging with peripheral actors, inequality is reinforced and amplified because only certain kinds of knowledge are seen as worthy of primacy.9 One could argue that the violence reported by Deo and Tuerkheimer is limited because, even if the women in these books' narratives are not seen within the institutions within which they experience a range of violence, there are other sites within which they are noticed and nourished and where there is an active valuation of the knowledge that they are creating.

9. Dotson's (2012) description of contributory injustice highlights this difference in primacy and the production of inequality. For example, she argues, contra Fricker's (2007) description of hermeneutical injustice: "We do not all depend on the same hermeneutical resources. Such an assumption fails to take into account alternative epistemologies, countermythologies, and hidden transcripts that exist in hermeneutically marginalized communities among themselves. The power relations that produce hermeneutically
To suggest that these women are excluded from creating meaning altogether because their knowledge and personal narrative is not seen as important within their institutions is to say, as Dotson (2012, 31) would, "that there is but one set of collective hermeneutical resources that we are all dependent upon." Regardless of the credibility afforded to them, the women whom Deo describes are creating social meaning, and regardless of institutional norms that might disregard their impact (for example, by denying them tenure or advancement at rates that they may deserve), these actors are producing knowledge that is lauded and impactful for the communities in which they seek membership. Mason (2011) similarly points out that, contra Fricker's (2007) account, marginalized and non-dominant groups have their own hermeneutical resources that they use to create meaning. Indeed, Deo's book itself creates and enacts, as well as describes, the ways in which "identity-based scholarship" is valued and creates societal meaning.
At the same time, this reframing also allows us to consider that the lack of sight that plagues these actors' experiences within hostile institutions-for example, a faculty member's refusal to accept identity-based scholarship as meaningful-might also be a "contributory injustice" across all of the sites they traverse in that it might be a "willful hermeneutical ignorance" that maintains and utilizes existing resources to thwart disadvantaged actors from having epistemic agency (Dotson 2012, 32). And in this thwarted epistemic agency there is also a kind of violence. When the contributions of women of color, for example, are assumed to be relevant only for members of their community, there is a resistance that implicates any possibility of true inclusion. In framing contributions as only central to a particular epistemic community-in this case, to think of Deo's book only as important as a solidarity tool for women of color or to see Tuerkheimer's research as only centrally useful to survivors-there is a certain institutional thwarting. Specifically, in assuming that certain writing will be "meaningful for specific others," there is an absolute attempt to exclude the scholarship from, for example, the impact it could have on a community of more central actors (in Deo's case, tenured law faculty members who do not always find the data "credible"; in Tuerkheimer's case, courts of law). Thus, the violence is not in the prevention or circumscription of membership in other communities (one kind of hermeneutical injustice, as described in Fricker [2007]) but, rather, in the exclusion that follows from being part of a community at the periphery. One might also say that the faculty who refuse to see identity-based scholarship as valuable are engaging in attempted-even if not intentional-hermeneutical injustice and that such an attempt-being agnostic to intention-is itself a moral wrong.
But regardless of how we name the injustice, these implications for theory are worth noting.
The urgency of considering these various strains of structural injustices might also help expose the ways in which institutions are implicated in creating actor responses that are poised to reinforce the very dynamics of the institutions that reject them. More specifically, actors who are peripheral-who feel like their voice or work is not valued within their communities-might, literally, speak less. This is precisely the situation described by Deo's interviewee "Patrice." Deo recounts that "[b]ecause Patrice's scholarship focuses on race and ethnicity, some white male colleagues have been hostile rather than supportive. . . . Patrice quickly decided never to present her work to her faculty, purposefully disengaging" (48). Patrice's inclination to do less is a kind of "intentional invisibility" that minority actors lean into because they are tired of encountering situations that are not receptive to them (Ballakrishnen, Fielding-Singh, and Magliozzi 2018). Her actions also offer an example of the ways in which minority actors might be forced to perform "testimonial quieting," which we discussed above, or feel the need to engage in a self-censored form of "testimonial smothering," where, anticipating structural hostility, these actors choose to limit their own testimony because what they would say is "unsafe and risky" (Dotson 2011, 244). It is worth noting that, although this quieting, smothering, or invisibility might be a function of the environment-for example, one might smother one's testimony when one knows that the audience is "testimonially incompetent" and perniciously ignorant and therefore unlikely to understand or support the speaker-its implications fall on the individual actor at least as much as on the system. Women, for example, might perform invisibility with agency, but they also have to bear the costs of being housed within environments where such invisibility is punished.
We highlight these distinctions between individual- and institutional-level factors because the chasm between them has important implications for inequality and inclusion. Beyond these explanations for actor subjectivities, what is at stake is a larger call to recognize the structures that hold these actors. In Patrice's case, she might have been smothered or performing intentional invisibility because of the hostile structure within which she felt stuck. But, in performing that invisibility, she is also reinforcing the very ways in which institutions that were never made to accommodate her can further alienate her. For instance, one can imagine a faculty using entirely "legitimate" reasons to justify how her not presenting work is a sign of her inability to be a "good" scholar. Letting these neutral structural reasons go unchecked while focusing only on individual narratives shifts the onus unfairly onto the actors with the least power to fix them.
Paying attention to what Tuerkheimer calls-and Deo illustrates as-epistemic injustice sheds light on the consequentialist, direct, and intrinsically moral harms of not treating someone as a full epistemic subject. This framing certainly has implications for institutional policy in that it offers new ways of considering how legal institutions can create and support meaningful inclusion. But reading these books in synergistic fashion also helps illuminate the ways in which inclusion without thoughtful equity benefits organizations more than individual actors or sustainable institutional cultures. Particularly, this process helps unveil the ways in which legal or institutional legitimacy is fraught with systemic biases and imagined neutralities. These lines of scholarship urge us to regard our positionalities as important coordinates for our site. They suggest that, if we were to peer more critically and if we allowed ourselves to look beyond the limited historically normative actors that these institutions were set up to serve, we would see the ways in which the power wielded by institutions leads to anything but objective appraisal. From this more variegated point of view, perhaps there will be space made for new actor and audience categories and, in turn, methods and production of what we may come to consider as "valuable" knowledge. How we look, and where we look from, can indeed change what we see. And nowhere is this sight more crucial than while considering the experience of those who have been systemically left invisible and whose narratives might not yet be part of our canons of understanding.
Protectivity of COVID-19 Vaccines and Its Relationship with Humoral Immune Response and Vaccination Strategy: A One-Year Cohort Study
This prospective cohort study aimed to evaluate the efficacy of COVID-19 vaccine schemes, homologous versus heterologous vaccine strategies, and vaccine-induced anti-S-RBD-IgG antibody response in preventing COVID-19 among 942 healthcare workers 1 year after vaccination with the inactivated and/or mRNA vaccines. All participants received the first two primary doses of vaccines; 13.6% of them lacked dose 3, 50.5% dose 4, and 90.3% dose 5. Antibody levels increased with the number of vaccine doses and were higher in heterologous vaccine regimens. With inactivated vaccines, mRNA vaccines, and mixed vaccination alike, infection rates were significantly higher in two-dose-receivers but lower in four- or five-dose-receivers, and increasing the total number of vaccine doses resulted in more protection against infection: compared to any two-dose regimen, the three-dose regimen yielded 3.67 times, the four-dose regimen 8 times, and the five-dose regimen 27.77 times more protection from COVID-19 infection. Antibody levels at the end of the first year were significantly higher in four- or five-dose-receivers than in two- or three-dose-receivers. To conclude, an increased number of total vaccine doses and higher anti-S-RBD antibody levels increased the protection from COVID-19 infection. Therefore, four or more doses within 1 year are recommended for effective protection, especially in risk groups.
Introduction
COVID-19 (COronaVIrusDisease-19) vaccines emerged as a hope to control and end the pandemic caused by SARS-CoV-2 (severe acute respiratory syndrome coronavirus 2); they were awarded emergency usage licenses (EUL) without waiting for clinical trials to be completed. Both logistical issues (such as production, global/local delivery, and fair distribution) and scientific questions (such as vaccine efficacy, safety, optimization of vaccine regimens, booster dosing, and protection relevance) were the main concerns of researchers and decision-makers [1]. The vaccination in Turkey started with the inactivated vaccine CoronaVac TM [2]. In late December 2021, the sharp increase in the number of infected cases all over the world, and in Turkey of course, was related to the emergence of the SARS-CoV-2 variant B.1.1.529 (Omicron), which was more capable of evading the immune system, and to the fact that the immunity of individuals had waned since more than 4 months had passed since the last shot [3]. Israel, Chile, Denmark, and Turkey were countries that adopted the three-dose strategy over the four-dose strategy. We aimed to determine the effectiveness and protectivity from COVID-19 infection of different COVID-19 vaccines in terms of schedules (two-/three-/four-/five-dose schemes), new strategies (homologous versus heterologous vaccination), and vaccine-induced humoral antibody (anti-S-RBD-IgG) levels in a group of health care workers, as well as the incidence of adverse events, at the end of a 1-year follow-up.
Study Design and Participants
This prospective cohort study was carried out at Cukurova University (Adana, Turkey) between February 2021 and February 2022 (a 1-year follow-up since the initiation of vaccination of health care workers in Turkey) and included health care workers who had been vaccinated with the inactivated SARS-CoV-2 and BNT162b2 mRNA vaccines in the context of a public vaccination program by the Turkish MoH. The minimum sample size was calculated as 945 participants by assuming a type-1 error of 0.05, a type-2 error of 0.1, and an effect size of 0.02 (η² = 0.02, small effect). The participants were randomly selected from a list of 3000 health care workers with substitution lists. A total of 1000 health care workers participated in the first step of the study, decreasing to 942 due to missing or incomplete results. All participants signed an informed consent form after being given the required information.
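The effect-size input above can be made concrete. A minimal sketch, assuming the authors used the standard conversion from η² to Cohen's f that ANOVA-style power calculators (e.g., G*Power) expect; the source does not name the software used, so this only illustrates the effect-size arithmetic, not the exact calculation that produced n = 945:

```python
import math

# Effect size reported in the study: eta-squared = 0.02 (a small effect).
eta_sq = 0.02

# Standard conversion to Cohen's f, the input most ANOVA power
# calculators expect: f = sqrt(eta^2 / (1 - eta^2)).
f = math.sqrt(eta_sq / (1 - eta_sq))

print(round(f, 4))  # 0.1429
```

Together with alpha = 0.05 and power = 0.9, this f value is what would be fed into a power routine to obtain the required sample size.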
Inactivated SARS-CoV-2 Vaccine by Sinovac (CoronaVac TM )
The vaccine administered to health care workers by the Turkish MoH was the "inactivated SARS-CoV-2 vaccine (CoronaVac TM)", adjuvanted with aluminium hydroxide, developed by Sinovac Biotech Ltd., Life Sciences Lab., Beijing, China. The vaccine (hereafter CV) was administered intramuscularly in the deltoid region of the upper arm at a dosage of 3 µg/0.5 mL.
BNT162b2 mRNA Vaccine by Pfizer & BioNTech (Comirnaty ® )
The vaccine BNT162b2 (Comirnaty ® ) produced by BioNTech Manufacturing GmbH Germany is a nucleoside-modified messenger-RNA (mRNA) encapsulated in lipid nanoparticles (LNP), which enables the delivery of the RNA into host cells to allow expression of the SARS-CoV-2 spike (S) antigen. This vaccine (that will be named shortly as BNT) is a white to off-white frozen suspension provided as a multiple-dose vial and must be diluted before use. One vial (0.45 mL) contains six doses of 0.3 mL after dilution. One dose (0.3 mL) contains 30 micrograms of COVID-19 mRNA vaccine (embedded in lipid nanoparticles) [4].
Mixed (Heterologous) Vaccine Administration
The Turkish Republic MoH declared the introduction of additional third and fourth doses, in June and August 2021, respectively, to be administered to health care workers and the elderly, who had previously received two doses of CV and to the individuals who wished to be vaccinated due to some international travel requirements. All individuals were given the right to choose between CV and BNT vaccines of their free will.
Immune Response Assessments
Our project aimed to determine the seroconversion in the context of anti-SARS-CoV-2 S-RBD (anti-S-RBD) immunoglobulin G (IgG) antibodies in 195 health care workers at 1, 3, and 6 months following the initial two doses of COVID-19 vaccines. In the present fourth step of the project, anti-S-RBD IgG antibodies were measured in 942 participants who completed 12 months after the initial administration of two doses of CV. Among the 942 people, 195 belonged to the 1-year follow-up group included in the first three steps of the project (i.e., post-initial two doses at the 1st, 3rd, and 6th months), while 747 were recruited into the project at the end of the first year; their antibody analysis therefore consisted only of the 12th-month measurements. As mentioned in the previous paragraph, different cohorts were formed according to the vaccine type preference (CV and/or BNT) of the individuals:
- The vaccine cohorts-A, classified according to the vaccine dosing-scheme subgroups
- The vaccine cohorts-B, classified according to the vaccine types (homologous or heterologous):
  - Homologous CV (only CV-receivers)
  - Homologous BNT (only BNT-receivers)
  - Heterologous (both CV and BNT-receivers)
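The cohort-B grouping rule can be sketched as a small classification helper. The function name `vaccine_cohort` and the dose-history encoding are illustrative assumptions, not part of the study's analysis code; only the grouping logic comes from the text:

```python
def vaccine_cohort(doses):
    """Assign a participant to a cohort-B group from their dose history.

    `doses` is a hypothetical encoding of the dose history as a list of
    vaccine types, e.g. ["CV", "CV", "BNT"]; the grouping rule follows
    the cohort-B definitions in the text.
    """
    kinds = set(doses)
    if kinds == {"CV"}:
        return "Homologous CV"
    if kinds == {"BNT"}:
        return "Homologous BNT"
    if kinds == {"CV", "BNT"}:
        return "Heterologous"
    raise ValueError(f"unexpected vaccine types: {kinds}")

print(vaccine_cohort(["CV", "CV"]))                # Homologous CV
print(vaccine_cohort(["CV", "CV", "BNT", "BNT"]))  # Heterologous
```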
Laboratory Procedure
About 5 mL of blood was collected into biochemistry tubes with vacuum gel. The sera were extracted by centrifugation at 3000× g for 10 min and kept at 2-8 °C for 1-3 days. Test calibrators and controls were run first. After the control results were observed to be within the expected ranges, the samples were tested by trained experts in the Central Laboratory of Cukurova University Balcali Hospital, Adana, Turkey (accredited by the Joint Commission International (JCI) since 2006) with the MAGLUMI 2000 series fully automated chemiluminescence immunoassay analyzer (CLIA) (Snibe Diagnostics, Shenzhen New Industries Biomedical Engineering Co. Ltd., Shenzhen, China). The test kit for the determination of antibodies was MAGLUMI® SARS-CoV-2 S-RBD IgG (CLIA) (Cat.#130219017M) (Snibe Diagnostics, Shenzhen New Industries Biomedical Engineering Co. Ltd., Shenzhen, China). The SARS-CoV-2 S-RBD IgG (CLIA) assay is an indirect chemiluminescence immunoassay. The analyzer automatically calculates the numerical output for each sample using a calibration curve, which is generated by a two-point calibration master curve procedure. The results are expressed in absorbance units (AU/mL) and reported to the end-user as "Reactive" or "Non-Reactive", where "Non-Reactive" indicates a result less than 1.00 AU/mL (<1.00 AU/mL) and "Reactive" indicates a result greater than or equal to 1.00 AU/mL (≥1.00 AU/mL) [5]. The test is only for use under the Food and Drug Administration's Emergency Use Authorization [6]. The SARS-CoV-2 S-RBD IgG test is an indirect CLIA and correlates well with VNT50 titres (R = 0.712), where VNT stands for "Virus Neutralization Test", the gold standard for quantifying the titer of neutralizing antibodies (nAbs) for a virus [7].
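The kit's reporting rule (Non-Reactive below 1.00 AU/mL, Reactive at or above it) reduces to a one-line threshold check. The function name `interpret_s_rbd` is illustrative, not part of the analyzer's software:

```python
def interpret_s_rbd(result_au_per_ml: float) -> str:
    """Classify an anti-S-RBD IgG result (AU/mL) per the kit's cutoff:
    >= 1.00 AU/mL is "Reactive", < 1.00 AU/mL is "Non-Reactive"."""
    return "Reactive" if result_au_per_ml >= 1.00 else "Non-Reactive"

print(interpret_s_rbd(0.42))  # Non-Reactive
print(interpret_s_rbd(1.00))  # Reactive
```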
Statistical Analyses
Data were examined using the SPSS 22 statistical analysis package (2013, IBM, New York, NY, USA). Non-parametric tests were used in the analysis of categorical and non-normally distributed data, while parametric tests were used in the analysis of normally distributed data. Dunn's test was used in post-hoc analyses. The enter model was used in the regression analyses, and the models were univariate (number of doses in Cox regression, and anti-S-RBD level in logistic regression). Data were analyzed with the Mann-Whitney U, Kruskal-Wallis, Friedman, and chi-square tests, and with logistic and Cox regression. A value of p < 0.05 was considered significant.
Results
The mean age of the 942 participants in the study was 41.17 ± 11.28 years (range 17-72). The distribution of the participants according to work positions was 195 physicians (20.7%), 179 nurses (19%), and 568 other positions (60.3%). Recalling that the vaccination in Turkey started on 15 February 2021, 303 (32.2%) participants reported having been infected with COVID-19 before (199 individuals) or within 1 year from (104 individuals) the start of vaccination. Reinfection was observed in seven participants (five between the second and third doses, one between the third and fourth doses, and one after the fourth dose). Hospitalization was required in 21 patients, of whom 18 were infected in the pre-vaccination period and 3 in the post-vaccination period. At the end of the first year, only six participants had non-reactive antibody levels. The distribution of anti-S-RBD IgG levels of individuals and the rates of non-reactive ones according to demographic characteristics and vaccine cohorts are given in Table 1. It was found that antibody levels increased significantly in correlation with the increase in the number of vaccine doses, and the increase in antibody levels was significantly higher in heterologous vaccine regimens. All of the participants were administered the initial two doses of vaccines, but 13.6% of them did not receive dose 3, while 50.5% did not receive dose 4, and 90.3% did not receive dose 5. The mean interval between doses 1 and 2 was 38 days, that between doses 2 and 3 averaged 130-169 days, that between doses 3 and 4 averaged 55-167 days, and that between doses 4 and 5 averaged 128 days. The intervals between doses according to the vaccine schemes and the follow-up times from the administration of the first vaccine dose in each of the vaccine cohorts are given in Table S1 (Supplementary Materials).
A statistically significant difference was found when the status of being infected with COVID-19 was compared by vaccine scheme subgroups (vaccine cohorts-A). The rate of being infected with COVID-19 was found to be significantly higher both in two-dose-CV-receivers and in two-dose-BNT-receivers. In contrast, the rate of being infected with COVID-19 was found to be significantly lower in two-dose-CV+two-dose-BNT-receivers and in two-dose-CV+three-dose-BNT-receivers.
When the infection rates were compared by total vaccine doses, infection rates were found to be significantly higher in two-dose-receivers but lower in those who received four or five doses of vaccines. No difference in infection rates was observed in three-dose-receivers (Table 2). The Cox regression model formulated to estimate the risk of being infected with COVID-19 based on the total number of vaccine doses, regardless of vaccine types, was found to be predictive. The risk of infection decreased as the number of doses increased in all three vaccine cohorts (Figure 1). In the model, vaccine doses were stratified by vaccine cohort B. The dependent variable of the model was "being infected with COVID-19", and the independent variable was the total number of vaccine doses (with reference = 2-dose-receivers). The increase in the number of doses was found to be more protective against COVID-19 infection: compared to two-dose administration, three-dose administration was found to be 3.67 times, four-dose administration 8 times, and five-dose administration 27.77 times more protective. The logistic regression model, including participants infected in the post-vaccination period, established to predict the effect of anti-S-RBD-IgG levels (independent) on protection from COVID-19 infection (dependent), was shown to be significant (p < 0.001). Each 0.008-unit increase in the anti-S-RBD-IgG levels was observed to increase the protectivity from being infected with COVID-19 by 1.008-fold, with an odds ratio of 0.992 (95% confidence interval 0.989-0.996).
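The reported odds ratio and the "1.008-fold" protectivity figure are two views of the same logistic regression coefficient. A sketch assuming the usual OR = exp(β) relationship; the per-unit scaling of antibody levels in the source is ambiguous, so this only mirrors the reported numbers rather than reproducing the fitted model:

```python
import math

odds_ratio = 0.992                # reported OR for anti-S-RBD-IgG (95% CI 0.989-0.996)
beta = math.log(odds_ratio)       # implied logistic regression coefficient
protective_fold = 1 / odds_ratio  # fold-increase in protection per unit increase

print(round(beta, 4))             # -0.008
print(round(protective_fold, 3))  # 1.008
```

An OR below 1 for infection means the predictor is protective, and its reciprocal is the "fold" protectivity quoted in the text.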
Regardless of the vaccine type, in months 1, 3, 6, and 12, anti-S-RBD-IgG levels were compared between (inter) and within (intra) vaccine-dose subgroups. After month 6, intragroup antibody levels continued to increase in the four-or five-dose-receivers, but decreased in two-or three-dose-receivers. At the end of year-1, inter-group antibody levels were found to be higher in four-or five-dose-receivers than two-or three-dose-receivers (Table 4). In our study, 35.9% of all participants did not declare any adverse event. The most common adverse events observed after any of the doses were pain at the injection site, malaise, fatigue, myalgia, backache, and fever. The rate of adverse events was observed to increase after dose 3, but no serious events were detected. The table of adverse events was presented as Table S2 (Supplementary Materials).
Discussion
The key to controlling the COVID-19 pandemic is vaccinating the entire population at full schedule including boosters. The success of this policy is hampered by the occurrence of infection and disease in fully vaccinated persons. The potential primary cause of infection despite vaccination is the emergence of new variants that evade immunity, thereby reducing the efficacy of the vaccine. Another potential cause of infection is a decrease in the immunity provided by the vaccine or disease itself because of time or other factors [8].
To start with immunity: regardless of vaccine type, we found a continuing increase of antibody levels after month 6 in four-/five-dose-receivers, but a decrease in two-/three-dose-receivers. At the end of year 1, this difference was still significant. Similar to our findings, following BNT-dose-2, Mizrahi et al. [9], Puranik et al. [10], and Khoury et al. [11] reported a decrease in vaccine-derived neutralizing antibody titres at month 6; Goldberg et al. [8] in all age groups after a few months; Levin et al. [12] in male, immunosuppressed, and 65-years-old and over individuals at month 6; and Thomas et al. [13] (in a longer follow-up of the phase 2-3 randomized trial of BNT) a reduction in vaccine efficacy from 96% to 84% between months 4 and 7. Regarding CV, Demirhindi et al. [14] reported a 60% decrease in indirect neutralizing antibody concentrations at month 6 compared to month 3 in two-dose-CV-receivers, but a 5-20 times increase in three-dose-receivers (CV and/or BNT).
Obviously, increased antibody responses or serostability indicate efficacy in terms of humoral immunity, but this does not guarantee protection. One year of use and follow-up gave us the chance to evaluate the protectivity of the vaccines against COVID-19 infection, in addition to vaccine efficacy.
We evaluated the relationship between protectivity and vaccination schedule and found the number of vaccine doses to be inversely proportional to infection rates regardless of vaccine type: an infection rate of 32.6% in two-dose receivers, 16.0% in three-dose receivers, 8.8% in four-dose receivers, and 4.0% in five-dose receivers. Regardless of vaccine type, compared with two doses of any COVID-19 vaccine, three-dose receivers were protected approximately 3.67 times more, four-dose receivers 8 times more, and five-dose receivers 27.77 times more from COVID-19 infection. Similar proportionality was observed by other researchers. Spitzer et al. reported an infection incidence rate of 12.8 per 100,000 person-days in three-dose BNT receivers, in contrast to 116 in unvaccinated individuals [15], while Bar-On et al. found rates of 1.5 in four-dose BNT receivers and 3.9 in three-dose BNT receivers for severe disease, and 4.2 in the control group. At week 4, four-dose BNT receivers showed a lower rate of confirmed infection than three-dose BNT receivers by a factor of 2.0 (3.5 for severe infection), compared to a factor of 1.8 observed in the control group (2.3 for severe infection). The protection was reported to wane in the following weeks, but not against severe infection for at least 6 weeks after dose 4. Magen et al. showed that BNT dose 4 was effective in reducing the short-term risk of COVID-19-related outcomes in people who had received BNT dose 3 at least 4 months earlier [16]. On days 7-30 after dose 4, vaccine efficacy was estimated as 45% against SARS-CoV-2 infection confirmed by polymerase chain reaction, 55% against symptomatic COVID-19, 68% against COVID-19-related hospitalization, 62% against severe COVID-19, and 74% against COVID-19-related death.
On days 7-30 after BNT dose 4, the absolute risk difference (BNT dose 3 versus BNT dose 4) was 180.1 cases per 100,000 for COVID-19-related hospitalization and 68.8 per 100,000 for severe COVID-19 [17].
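As a rough sanity check on the dose-response pattern above, the crude risk ratios implied by the quoted infection percentages can be computed directly. These unadjusted ratios are illustrative only; they differ from the adjusted protection factors (3.67, 8, 27.77) reported in the text, which come from the study's statistical model.

```python
# Crude (unadjusted) risk ratios from the infection percentages quoted in
# the text; these are not the study's adjusted protection estimates.
rates = {2: 32.6, 3: 16.0, 4: 8.8, 5: 4.0}  # % infected, by dose count

baseline = rates[2]  # two-dose receivers as the reference group
risk_ratios = {doses: baseline / rate for doses, rate in rates.items()}

for doses, rr in sorted(risk_ratios.items()):
    print(f"{doses} doses: {rr:.2f}x lower infection risk than 2 doses")
```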
When we evaluated the odds of being infected with COVID-19 as a function of vaccine-induced antibody levels, protectivity could be expressed as follows: each one-unit increase in antibody concentration decreased the odds of infection by a factor of 1.008 (odds ratio = 0.992), and as antibody levels decreased over time, the effectiveness of prevention from COVID-19 also decreased. We calculated the hazard ratio (HR) as 0.272 for dose 3 regardless of vaccine type, whereas Spitzer et al. reported an HR of 0.07 [15]. From at least day 12, Bar-On et al. reported the confirmed infection rate to be 11.3 times lower in three-dose BNT receivers (19.5 times for severe disease) than in the no-booster group, and 5.4 times lower than the rate observed on days 4-6 [1].
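The per-unit odds ratio above can be translated into a fold-change in infection odds. A small sketch, taking only the odds ratio from the text; the compounding over 100 units is our own illustration, not a result from the study:

```python
# Convert the reported per-unit odds ratio into a fold-decrease in the
# odds of infection, and compound it over a larger antibody difference.
OR_PER_UNIT = 0.992  # odds ratio per one-unit antibody increase (from the text)

fold_per_unit = 1 / OR_PER_UNIT            # matches the 1.008x in the text
fold_per_100_units = fold_per_unit ** 100  # illustrative compounding

print(f"per unit: {fold_per_unit:.3f}x")         # per unit: 1.008x
print(f"per 100 units: {fold_per_100_units:.2f}x")
```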
At this point, even though the increased efficacy of booster doses for protection from severe COVID-19 and the reduced risk of contagion are evident, uncertainties regarding the efficacy and safety of vaccines reduce the population's motivation to take a booster dose. Hesitancy generally stems from adverse events encountered in previous vaccination schedules, the belief that the booster dose was administered too early, and uncertainty about the added efficacy of booster doses [17]. As reported above, 35.9% of our participants did not declare any adverse event; the rate of adverse events increased somewhat after dose 3, but no serious events were detected.
The limitations of the study are: (1) the study group consisted only of healthcare professionals; (2) the sample was relatively small; (3) participant numbers were low in some subgroup analyses; and (4) analyses of protectivity from severe disease were not possible because of these low numbers.
The strengths of the study are: (1) it is one of the few studies that evaluate the protection from COVID-19 infection conferred by four or five vaccine doses, regardless of vaccine type; (2) it evaluates the effect of antibody levels on protection; (3) it includes long-term results (i.e., one year); and (4) it is one of the few studies examining the heterologous administration of an inactivated vaccine with an mRNA vaccine.
Conclusions
Higher antibody levels and administration of four and/or five doses of vaccine are more protective against COVID-19 than two or three doses. The same holds for the heterologous vaccination strategy, with a stronger antibody response observed in cohorts containing the BNT vaccine. For high-risk groups such as healthcare workers, the elderly, and immunocompromised individuals, at least four vaccine doses are recommended regardless of vaccine type, yielding a "0-1-5-9-month scheme" within one year.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/vaccines10081177/s1, Table S1: Inter-dose intervals by vaccine cohorts; Table S2: Incidence of adverse events in vaccine-dose subgroups. The funding source provided financial support for the purchase of the materials to perform the tests, but had no involvement in the study design, collection/analysis/interpretation of data, writing of the report, or the decision to submit the article for publication.
Informed Consent Statement:
The official invitation letters with the list of the randomly selected participants and substitutions were sent to the department headships in order to let them invite the selected staff to participate in the study. Informed written consents were obtained from all participants after required acknowledgement for participation in the study and the publishing of this paper.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to personal data protection regulations.
"year": 2022,
"sha1": "23cac252cf510c8593c658e8699f952b58a35a07",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-393X/10/8/1177/pdf?version=1658744844",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f8047825fe7ce7b58c2e4925727a4eca15399ccd",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Role of JNK and ERK1/2 MAPK signaling pathway in testicular injury of rats induced by di-N-butyl-phthalate (DBP)
Background Di-N-butyl-phthalate (DBP) is an endocrine-disrupting substance. We investigated the adverse effect of DBP on the testis of male rats and the potential involvement of the MAPK signaling pathway in this effect, in vivo and in vitro. Gonadal hormones, sperm quality, morphological changes and the activation status of JNK, ERK1/2 and p38 were determined in vivo. Primary Sertoli cell cultures were established and treated with JNK and ERK1/2 inhibitors, after which cell viability, apoptosis and the expression of p-JNK and p-ERK1/2 were determined. Data in this study are presented as mean ± SD and were analyzed by one-way analysis of variance (ANOVA) followed by Bonferroni's test. Differences were considered statistically significant at P < 0.05. Results In the in vivo experiment, DBP impaired the normal structure of testicular tissue, reduced testosterone levels in blood serum, decreased sperm count and increased sperm abnormality; p-ERK1/2 and p-JNK in rat testicular tissue increased in a dose-dependent manner. In the in vitro studies, DBP decreased the viability of Sertoli cells and increased p-ERK1/2 and p-JNK. Cell apoptosis in the SP600125 + DBP group was significantly lower than in the DBP group (P < 0.05). p-JNK was not significantly decreased in the SP600125 + DBP group, while p-ERK1/2 was significantly decreased in the U0126 + DBP group. Conclusions These results suggest that DBP can lead to testicular damage and the activation of the ERK1/2 and JNK pathways; the JNK signaling pathway may be primarily associated with this effect.
Background
Mounting evidence has implicated phthalic acid esters (PAEs), a class of widely used synthetic compounds, in carcinogenesis, inflammation, metabolic disorders and especially reproductive and developmental disease [1]. Dibutyl phthalate (DBP), one of the most widely used PAEs, has been shown to be an endocrine-disrupting chemical (EDC) that displays estrogenic or anti-androgenic activity in the male reproductive system, leading to testicular atrophy, seminiferous tubule degeneration, germ cell loss and infertility [2]. In 1983, animal experiments conducted by Oishi S showed that di-2-ethylhexyl phthalate (DEHP) can cause testicular atrophy in young male rats [3]. In vivo, DBP acts by reducing cell proliferation and impairing differentiation through reduced expression of Pou5f1 and Mki67 in prepubertal and pubertal testes [4]. Compared with other EDCs, including other phthalates and mixtures of EDCs, the main effect of DBP was to decrease intratesticular testosterone and steroid hormone enzymes [5]. These findings show that the association of DBP exposure with male fertility problems is a central concern of toxicological research in this field.
Testis injury induced by DBP is characterized by testicular atrophy and seminiferous epithelium degeneration [6]. It has been reported that DBP can penetrate the testis despite the blood-testis barrier (BTB), one of the tightest physiological barriers in mammals [7]. Sertoli cells, which form the BTB, provide germ cells with nutrition, structural support and a protective shield [8]. Studies have shown that the distribution of the vimentin cytoskeleton and intercellular junction proteins in Sertoli cells is altered by DBP, inducing spermatogenic cells to detach from Sertoli cells and then undergo apoptosis [9]. These studies suggest that the Sertoli cell may be the primary target of DBP in the testis. However, the properties of Sertoli cells are modulated by various substances such as steroid hormones, cytokines, and protein kinases. Therefore, elucidating the mechanisms underlying the damaging effect of DBP on Sertoli cells would give hints toward the prevention of EDC-induced male reproductive dysfunction.
The MAPK signaling pathway is one of the important signal transduction systems, involved in cell proliferation, differentiation, apoptosis and responses to environmental stimuli [10]. In the mammalian testis, MAPK can regulate cell proliferation, differentiation and apoptosis, and is considered one of the important determinants of sperm development. MAPK can indirectly affect the development of animal germ cells by influencing the function of Sertoli cells [11]. There are three MAPK subfamilies in mammalian cells: c-Jun N-terminal kinase (JNK), p38, and extracellular signal-regulated kinase (ERK) [12]. Sertoli cell proliferation and differentiation are mainly regulated by follicle-stimulating hormone (FSH), whose activity is inseparable from the ERK signaling cascade [13]. Cells respond to many reproductive toxicants by activating the MAPK pathway; for example, bisphenol A induces apoptosis by activating the ERK and JNK signaling pathways [14], 1,3-dinitrobenzene can induce apoptosis in TM4 mouse Sertoli cells via the JNK-MAPK pathway [15], and 4-nonylphenol isomers can induce apoptosis of mouse TM4 Sertoli cells by activating the mitogen-activated protein kinase pathway [16]. Therefore, we speculated that the effect of DBP on Sertoli cells may also be related to the MAPK signaling pathway.
DBP decreases testosterone level in blood serum
As revealed in this study (Table 1), the level of testosterone in blood serum decreased whereas FSH and LH increased in the DBP-treated groups. Compared with the solvent control group, serum FSH increased in the middle- and high-dose DBP groups, and serum LH in the high-dose DBP group was significantly higher (P < 0.05). Serum testosterone levels in the middle- and high-dose DBP groups were significantly lower than those in the solvent control group (P < 0.05).
DBP disrupts spermatogenesis
DBP reduced the sperm count and sperm viability of rats and increased the malformation rate (see Table 2). Compared with the solvent control group and the low-dose group, the sperm count and sperm viability of rats in the high-dose DBP group were significantly lower (P < 0.05). The total sperm abnormality rate in the middle- and high-dose DBP groups was significantly higher than that in the solvent control group (P < 0.05). The differences in sperm count and total sperm abnormality rate between the medium- and high-dose DBP groups were also statistically significant (P < 0.05). A dose-response relationship was evident in the sperm malformation rate.
DBP induces seminiferous tubules degeneration
In the solvent control group, the testicular seminiferous tubules were regularly arranged and structurally intact; no shed cells were present in the lumen, and the number and structure of the Sertoli cells were normal (Fig. 1a). Compared with the control group, the testicular tissue structure of rats in the low-dose DBP group was not obviously changed (Fig. 1b). In the middle-dose DBP group, the seminiferous tubules were regularly arranged, but their diameter was reduced, the interstitium was widened, the spermatogenic epithelium was thinner, and shedding of spermatogenic cells was seen (Fig. 1c). In the high-dose DBP group, the seminiferous tubules were irregularly arranged, the spermatogenic epithelium was severely damaged, and Sertoli cells were shed and showed vacuolar changes (Fig. 1d).
DBP induces activation of JNK and ERK1/2 signaling pathway in the testes
Western blot results showed that DBP induced the activation of ERK1/2 and JNK in the MAPK signaling pathway in testicular tissue (see Fig. 2). The ratios of the optical density of each phosphorylated protein to the corresponding total protein in rat testes are given in Table 3. There was no statistical difference in p-P38/P38 in the testes between the DBP groups and the control group. The ratios of p-ERK/ERK and p-JNK/JNK in the testes of the middle- and high-dose DBP groups were significantly higher than those in the control and low-dose DBP groups (P < 0.05) (see Fig. 2).
Effects of DBP on Sertoli cell viability
The results of the MTT assay showed that DBP significantly reduced the proliferation of Sertoli cells (see Fig. 3).
Effects of DBP on Sertoli cell apoptosis rate
The results of flow cytometry (Fig. 4) showed that, compared with the control group (apoptotic rate 4.32 ± 0.98%), DBP significantly induced apoptosis of the Sertoli cells (apoptotic rate 17.5 ± 1.21%). After pretreatment of the cells with the ERK1/2 inhibitor (U0126) or the JNK inhibitor (SP600125), the apoptosis rate decreased: 13.4 ± 3.21% in the U0126 + DBP group and 6.23 ± 1.08% in the SP600125 + DBP group. Compared with the DBP group, the decrease in apoptotic rate in the SP600125 + DBP group was statistically significant (P < 0.05).
DBP induces activation of JNK and ERK1/2 signaling pathway in the Sertoli cells
Primary Sertoli cells were pretreated with U0126 or SP600125 for 2 h and then exposed to 100 μg/mL DBP. Western blots (Figs. 5, 6) showed that, compared with the DBP-only group, phosphorylated JNK was not significantly decreased in the SP600125 + DBP group, whereas phosphorylated ERK1/2 was significantly decreased in the U0126 + DBP group. The ratios of the optical density of each phosphorylated protein to the corresponding total protein in Sertoli cells are given in Table 4. Compared with the solvent control group, the ratios of p-ERK/ERK and p-JNK/JNK in the DBP group increased significantly (P < 0.05). The ERK inhibitor (U0126) and the JNK inhibitor (SP600125) reduced the expression of phosphorylated ERK and JNK. Compared with the DBP group, the ratio of p-JNK/JNK was not significantly decreased in the SP600125 + DBP group, while the ratio of p-ERK/ERK was significantly decreased in the U0126 + DBP group (P < 0.05). These findings indicated that DBP mainly activates the phosphorylation of JNK, which participates in the cell damage.
Discussion
As an environmental endocrine disruptor, DBP has well-documented male reproductive toxicity and can cause testicular atrophy, weight loss, decreased testicular enzyme activity, seminiferous tubule atrophy, spermatogenic cell loss, and genital malformations in male rodents [17]. In our in vivo study, testicular pathology showed that DBP damaged testicular tissue structure, manifested as disordered seminiferous tubules, Sertoli cell vacuolization, and germ cell shedding. Regarding the sperm analyses, we consider that differences among reported results are related to DBP dose, exposure duration, animal species and age, and other factors. These findings suggest not only that DBP damages the testis indirectly through endocrine disruption, but also that Sertoli cells may be the direct target of DBP's toxic effects [21]. Sertoli cells reside in the testicular spermatogenic epithelium, and their structural and functional integrity is critical to the proliferation and maturation of spermatogenic cells. In this study, we used the MTT assay to assess the inhibitory effect of DBP on the proliferation of Sertoli cells and found that DBP significantly inhibited cell proliferation compared with the control group. Flow cytometry showed that the apoptotic rate of DBP-treated cells was significantly higher than that of the control group. These results confirm that DBP is cytotoxic to Sertoli cells, causing apoptosis in Sertoli cells cultured in vitro. Many studies have found a relationship between male reproductive health and the MAPK pathways: MAPKs are involved in spermatogenesis [22], germ cell development and maturation [23], and germ cell apoptosis [24]. We therefore examined the phosphorylation of ERK1/2, JNK and p38 by Western blot to determine whether DBP interferes with MAPK signaling.
In the in vivo experiments, the ratios of p-ERK/ERK and p-JNK/JNK among MAPK-related proteins increased significantly compared with the solvent control group (P < 0.05), whereas the change in p-P38/P38 was not significant, indicating that JNK and ERK phosphorylation levels were significantly increased while P38 phosphorylation was not. This suggests that the JNK and ERK pathways of MAPK signaling are involved in the testicular damage. This result is not completely consistent with previous reports: Qi et al. propose that JNK/p38 MAPK is involved in the apoptosis of Sertoli cells [25], Song et al. that p38 MAPK plays the major role in Sertoli cell injury [26], and Choi et al. that the ERK pathway plays the major role in Sertoli cell damage [27]. These differences may be due to the different reproductive toxicants studied and the different animal species and cell lines used in the in vivo and in vitro studies.
In addition, to further confirm the mechanism of the JNK and ERK MAPK signaling pathways in testicular injury, we performed in vitro experiments with primary testicular Sertoli cells. The results showed that, compared with the solvent control group, DBP increased the phosphorylation of the MAPK-related proteins ERK and JNK: the ratios of p-ERK/ERK and p-JNK/JNK increased significantly (P < 0.05), indicating that DBP can activate the JNK and ERK MAPK signaling pathways of testicular Sertoli cells. To further confirm whether the phosphorylation of JNK and ERK1/2 is involved in apoptosis, and whether the two effects are equal or one dominates, we pretreated primary Sertoli cells with the ERK1/2 inhibitor (U0126) or the JNK inhibitor (SP600125) for 2 h and then incubated them with 100 μg/mL DBP. The inhibitors reduced the expression of phosphorylated ERK1/2 and JNK; compared with the DBP group, the apoptosis rates of the U0126 + DBP and SP600125 + DBP groups decreased, cell proliferation increased, and the ratios of p-ERK/ERK and p-JNK/JNK decreased, indicating that both inhibitors can attenuate DBP-induced apoptosis, increase cell survival, and reduce ERK1/2 and JNK phosphorylation. Furthermore, the increase in cell viability and the decrease in apoptotic rate were more pronounced in the SP600125 + DBP group (P < 0.05), indicating that the anti-apoptotic effect of the JNK inhibitor is more prominent. Western blot showed that, compared with the DBP group, the ratio of p-JNK/JNK was not significantly decreased in the SP600125 + DBP group, whereas the ratio of p-ERK/ERK was significantly decreased in the U0126 + DBP group.
These findings illustrate that SP600125, a selective inhibitor of JNK, repressed DBP-induced JNK phosphorylation/activation in Sertoli cells, indicating that DBP mainly activates JNK phosphorylation. Collectively, the in vivo and in vitro results indicate that DBP mediates its disruptive effects on Sertoli cells via the JNK-MAPK signaling pathway. These findings provide useful information for a therapeutic approach to DBP-induced male infertility, such as the use of a specific JNK-MAPK inhibitor.
In summary, the results of this experiment indicate that activation of the MAPK signaling pathway, especially JNK, may participate in the damage caused by DBP to testicular Sertoli cells. In addition, our previous in vitro results showed that the PTEN/PI3K/AKT/mTOR signaling pathway plays an important role in DBP-induced apoptosis of testicular Sertoli cells [28]. Together, these findings suggest that both the MAPK and Akt pathways can mediate Sertoli cell death caused by reproductive toxicants, consistent with the finding that nonylphenol-induced apoptosis in the mouse Sertoli cell line TM4 is mediated by the MAPK and Akt pathways [29]. As noted herein, multiple pathways, including the Akt and MAPK signaling pathways, are involved in chemical toxicant-induced testicular injury, and further research is needed to delineate the molecular mechanism(s) regulating crosstalk between these pathways, which will help in developing interventions against toxicant-induced testicular injury.
Conclusions
In conclusion, DBP can lead to testicular toxicity: it decreased testosterone in blood serum, led to sperm reduction and malformation, and even damaged the normal structure of the seminiferous tubules. Activation of the MAPK signaling pathway, especially JNK, may participate in the damage caused by DBP to testicular Sertoli cells. Deeper study of the molecular mechanism(s) of these signaling pathways is needed and will help in developing interventions against the reproductive toxicity of DBP.
Animals and treatments
Male clean Sprague-Dawley rats (4 weeks old) were obtained from the Laboratory Animal Center of Jilin University (Jilin, China). All experimental protocols were conducted in accordance with the principles and procedures outlined in the "Guide for Care and Use of Laboratory animals" and approved by the Ethics Committee for the Use of Experimental Animals of Beihua University.
Rats were housed under a 12:12 h light-dark cycle with good ventilation and constant temperature (26 ± 1 °C). Animals were adapted to laboratory lighting and feeding conditions for 1 week before the experiment and were allowed free access to food and drinking water. After adaptation, male SD rats were randomly assigned to four groups (n = 8 each) and administered corn oil (vehicle control) or DBP (analytical grade, purity 99.5%) (Sigma, USA) at doses of 50, 500 and 1000 mg/kg/day by gavage for 35 days. Animals were sacrificed by decapitation after the last treatment, and the blood, testes and epididymides were immediately isolated for the following analyses.
Measurement of reproductive hormone
The blood samples were centrifuged at 2000 rpm, 4 °C for 10 min and testosterone (T), follicle-stimulating hormone (FSH) and luteinizing hormone (LH) in serum were detected using ELISA kits according to the manufacturer's instructions (Shanghai Jiang Lai biological company, China).
Sperm analysis
The fresh epididymides were weighed, cut longitudinally, and placed in 2 mL of 0.9% sodium chloride solution for 10 min at 35 °C to release the sperm into the medium. The suspension was filtered through nylon mesh and adjusted to an adequate concentration. Sperm count, viability rate and malformation rate were determined using a WLJY-9000 color sperm quality detection system (Beijing Weili Inc., China).
Testis histological examination
Hematoxylin-eosin (HE) staining was applied to the testes. Tissues were immersed in Bouin's solution (picric acid aqueous solution:formaldehyde:glacial acetic acid, 15:5:1) for 24 h at room temperature, embedded in paraffin wax and sectioned into 5 μm thick slices. After dewaxing in xylol, passage through an ethanol series and washing in water, slices were stained with hematoxylin and eosin following standard HE staining procedures. Images were taken with a digital camera (DP20, Olympus, Tokyo, Japan) to show the extent of testicular injury.
Isolation of Sertoli cells
Primary Sertoli cells were isolated from 18-day-old male rats. Briefly, animals were euthanized by CO₂ asphyxiation, and the testes were isolated, decapsulated, and cut into small pieces. After being washed twice in DMEM-F12 (Hyclone, USA), the fragments were digested with 0.1% (w/v) trypsin (Difco, USA) for 30 min and then centrifuged twice at 800 rpm for 2 min. The cell pellet was resuspended in 0.05% (w/v) collagenase I (Sigma, USA) with gentle pipetting using a Pasteur pipette and digested for about 10 min, until the seminiferous tubules were nearly invisible. The suspension was filtered through nylon mesh and centrifuged 5 times. Cells were resuspended in DMEM-F12 containing 15% fetal bovine serum (Hangzhou Sijiqing, China) and plated on 100 mm dishes at a density of 0.5 × 10⁵ cells/cm². High-purity Sertoli cells were obtained after incubation for 36 h at 35 °C in 5% CO₂, followed by treatment with 20 mM Tris-HCl buffer (pH 7.4) for 2.5 min to remove residual germ cells. The purity of the Sertoli cells was routinely more than 90% [30]. On the 4th day of in vitro culture, the Sertoli cells were ready for the subsequent treatments.
MTT assay
Sertoli cells were transferred into a 96-well plate at a density of 1 × 10⁵ cells/mL. Cell viability was estimated by MTT assay after 24 h of treatment. Cells were incubated in serum-free medium containing 20 μL of MTT (5 mg/mL) for 4 h; 150 μL of DMSO was added after discarding the culture medium. The plate was shaken for 10 min, and absorbance was read at 490 nm with a microplate reader (Tecan, Switzerland).
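The absorbances read above are conventionally converted to percent viability relative to the untreated control. The helper below uses the standard MTT convention (including blank subtraction), which is not spelled out in this paper; the function name and numbers are our own illustration.

```python
# Standard percent-viability calculation for an MTT assay (a common
# convention, not quoted from this paper). A blank well (medium + MTT,
# no cells) is subtracted before forming the ratio.
def viability_percent(abs_treated, abs_control, abs_blank=0.0):
    return 100.0 * (abs_treated - abs_blank) / (abs_control - abs_blank)

# Hypothetical absorbances at 490 nm:
print(viability_percent(abs_treated=0.45, abs_control=0.90, abs_blank=0.05))
```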
Sertoli apoptosis rate detection
After treatment for 24 h, the cells were trypsinized and harvested, then processed according to the Annexin V-FITC/PI kit instructions (Biovision, USA). The cells were mixed with 400 μL of binding buffer, and then 4 μL of PI and 4 μL of Annexin V were added. The cells were incubated at room temperature for 15-20 min in the dark. A FACS420 flow cytometer (Becton-Dickinson, USA) was used to detect cell apoptosis.
Western blot analysis
For the in vivo experiments, testicular tissue from each group of rats was removed and immediately placed in liquid nitrogen for storage until analysis. Samples were placed in pre-cooled (4 °C) lysis buffer, sonicated at 4 °C (6 s × 6 times), incubated at 4 °C for 30 min, and centrifuged at 25,000 rpm for 10 min × 3 times, and the supernatant was aliquoted. Protein content was determined by the Bradford method using bovine serum albumin as the standard, and proteins were separated by 10% SDS-PAGE. Each well was loaded with 60 μg of protein, and electrophoresis was performed at a constant voltage of 160 V until the dye front reached the bottom of the gel. After blocking with 5% non-fat dry milk for 2 h, rabbit anti-rat ERK1/2, p-ERK1/2, JNK, p-JNK, P38, p-P38 and tubulin monoclonal antibodies (1:100; Beijing Solarbio, China) were added and shaken at 37 °C for 1 h, followed by three 10-min washes with phosphate-buffered saline-Tween 20. Horseradish peroxidase-labeled secondary antibody (1:300) was then added, shaken at 37 °C for 1 h, washed three times in PBST, and developed with a chemiluminescence reagent.
In vitro experiments, Sertoli cells were homogenized in lysis buffer on ice for 30 min and the homogenate was collected followed by centrifugation at 1000 rpm for 10 min at 4 °C. The protein concentration of supernatant was measured. Equal amounts of proteins (40 μg) were loaded on 10% polyacrylamide gel with 4% stacking gel to apply SDS-PAGE and then transferred to nitrocellulose membranes. After blocked with 5% nonfat milk for 1 h at room temperature, the membranes were incubated with primary antibodies (JNK, p-JNK, ERK1/2, p-ERK1/2, and tubulin, 1:1000 dilution) overnight at 4 °C. Protein blots were revealed by horseradish peroxidase-conjugated secondary antibody (1:3000 dilution) and visualized by enhanced chemiluminescent kit.
Proteins were quantified by densitometry with the ImageJ software (National Institutes of Health, USA). Data were normalized against tubulin in each group.
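The normalization described above can be sketched as a small helper: each band's density is divided by its lane's tubulin signal before the phospho/total ratio is formed. The function name and band values are our own illustration, not the study's measurements.

```python
# Hypothetical densitometry post-processing: normalize each band to its
# lane's tubulin loading control, then form the phospho/total ratio
# (e.g. p-JNK/JNK) of the kind reported in Tables 3 and 4.
def phospho_total_ratio(p_band, p_tubulin, total_band, total_tubulin):
    return (p_band / p_tubulin) / (total_band / total_tubulin)

# Made-up band densities from ImageJ:
print(phospho_total_ratio(1.2, 1.0, 0.8, 1.0))  # 1.5
```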
Statistical analysis
Data in this study were presented as mean ± SD and determined by one-way analysis of variance (ANOVA) followed by Bonferroni's test. Difference was considered statistically significant at P < 0.05.
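The analysis described above can be sketched in pure Python: a one-way ANOVA F statistic followed by a Bonferroni-adjusted significance threshold for pairwise comparisons. The group values are invented for illustration and are not the study's data.

```python
# One-way ANOVA F statistic computed by hand, plus a Bonferroni-adjusted
# alpha, mirroring the "ANOVA followed by Bonferroni's test" analysis.
def one_way_anova_f(groups):
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Invented apoptosis-rate-like values for three groups:
groups = [[4.1, 4.5, 4.3], [17.2, 17.8, 17.5], [6.0, 6.5, 6.2]]
f_stat = one_way_anova_f(groups)
# Bonferroni: with k(k-1)/2 = 3 pairwise tests, test each at alpha / 3
alpha_adjusted = 0.05 / 3
print(f_stat, alpha_adjusted)
```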
"year": 2019,
"sha1": "0b19f203e1b42040f9d7cc688b2fb941689943c8",
"oa_license": "CCBY",
"oa_url": "https://biolres.biomedcentral.com/track/pdf/10.1186/s40659-019-0248-1",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7449980d30886832c0f5f9d3e0de84d91cdcbb03",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
The Plausibility Paradox for Resized Users in Virtual Environments
This paper identifies and confirms a perceptual phenomenon: when users interact with simulated objects in a virtual environment in which the users' scale deviates greatly from normal, there is a mismatch between the object physics they consider realistic and the object physics that would be correct at that scale. We report the findings of two studies investigating the relationship between perceived realism and a physically accurate approximation of reality in a virtual reality experience in which the user has been scaled by a factor of ten. Study 1 investigated the perception of physics when scaled down by a factor of ten, whereas Study 2 focused on enlargement by a similar amount. The studies were carried out as within-subjects experiments in which a total of 84 subjects performed simple interaction tasks with objects under two different physics simulation conditions. In the true physics condition, the objects, when dropped and thrown, behaved accurately according to the physics that would be correct at the reduced or enlarged scale in the real world. In the movie physics condition, the objects behaved in a similar manner as they would if no scaling of the user had occurred. We found that a significant majority of the users considered the movie physics condition to be the more realistic one. However, at enlarged scale, many users considered true physics to match their expectations even if they ultimately believed movie physics to be the realistic condition. We argue that our findings have implications for many virtual reality and telepresence applications involving operation with simulated or physical objects at abnormal, and especially small, scales.
INTRODUCTION
Many studies have confirmed the so-called "body-scaling effect": if presented with mismatching size cues, humans tend to use their visible body as the dominant cue when perceiving sizes and distances (Banakou et al. 2013; Langbehn et al. 2016a; Linkenauger et al. 2013; van der Hoort et al. 2011; Ogawa et al. 2017). For example, if a person were somehow shrunk to the size of a doll, the person would be inclined to regard the world as scaled up and him/herself as normal-sized (van der Hoort et al. 2011). In this paper, we investigate the human perception of physics, specifically when subjects have been either scaled down or up by a significant amount. We believe this relatively underrepresented topic has implications for various virtual reality (VR) and telepresence applications. More specifically, we focus on the subjective credibility of rigid body dynamics when subjects are presented with realistic and unrealistic approximations of object motions while either scaled down or scaled up by a factor of ten. Previously, we investigated the perception of physics in VR when subjects were scaled down by a factor of ten (Pouke et al. 2020). We found that subjects considered a physics model of regular human scale to be more realistic than an accurate approximation of physics in the scaled-down environment. This offered additional evidence that humans are attuned to Newtonian physics at human scale, and that anything deviating much from that scale appears unnatural. In this paper, we extend our prior work by considering scaled-up subjects, allowing us to compare the perception of physics in VR at both small and large scales.
Currently, not much is known about how scaling a person would affect their perception of physical phenomena, such as accelerations. Interestingly, if we consider the interaction of scaled-down characters with their surroundings in many works of fiction, a tendency to represent the world as scaled up in comparison to normal-sized protagonists can be observed. Early examples can be seen in the classic film The Incredible Shrinking Man. When the main character throws grains of sand off the table while insect-sized, the grains accelerate and fall as if they were boulders, when they should fall almost instantly. Similarly, when the character is awash with rainwater holding onto a pencil, the water and the pencil act more akin to a river and a log, when the pencil should be bobbing with few waves and no visible whitewater should be apparent. Although the deficiencies in the realism of The Incredible Shrinking Man can be attributed to 1950s technologies, similar inaccuracies still remain in modern movies from Honey I Shrunk the Kids to Downsizing. These inaccuracies do not necessarily result from the directors' lack of understanding of physics; they might be conscious choices to represent what the viewers would expect.
VR and telepresence applications allow humans to live through experiences such as the Incredible Shrinking Man through the eyes of a scaled-down entity. A specific category of virtual environments (VEs) providing such experiences are multiscale collaborative virtual environments (mCVEs), in which multiple users can collaborate in, for example, architectural or medical visualizations across multiple, nested levels of scale (e.g., Kopper et al. (2006); Zhang and Furnas (2005)). In addition, the scaling of users has been utilized in several collaborative mixed reality (MR) systems (e.g., Billinghurst et al. (2001); Piumsomboon et al. (2018b,a)). Teleoperation of robots can allow humans to interact with the physical world at micro- and nanoscale. Similar to mCVEs, robotic teleoperation systems using multiple scales are beginning to emerge (Izumihara et al. 2019). Although teleoperation in the physical world can leverage stereoscopic camera systems resembling immersive VR applications (Hatamura and Morishita 1990), purely virtual representations leveraging computer graphics can be used in, for example, educational and training systems for micro- and nanoscale tasks (Bolopion and Régnier 2013; Millet et al. 2008). Robotic surgery systems can perform operations at a microscopic level (Hongo et al. 2002) whereas stereoscopic VR can be utilized in telesurgery (Shenai et al. 2014). The benefits of VEs have been identified in various design and prototyping processes (Mujber et al. 2004) that can be extended into small-scale VEs, as well. Already two decades ago, both the design (Li and Sitte 2001) and assembly (Alex et al. 1998) of microelectromechanical systems (MEMS) were prototyped through desktop VEs. Recent studies have also investigated self-scaling as a method to help with aspects related to architectural and interior design (Zhang et al. 2020a,b).
Understanding human perception of scale-varying phenomena will be useful for the future design of applications such as those listed above. Although existing research has addressed many perceptual questions, such as the perception of distance and dimensions after altering one's virtual size (e.g., van der Hoort et al. (2011); Banakou et al. (2013); Kim and Interrante (2017)), the perception of the behavior of physical objects has received relatively little attention. There are many potential future use cases for user scaling that might require interaction with physical or physically simulated objects. We argue that it is not intuitive for humans to correctly perceive physical phenomena, such as rigid body dynamics, in scales that differ greatly from a normal human scale. An object dropped from 20 cm takes significantly less time to fall than an object dropped from 2 m, and their perceived accelerations are different. Additional physical phenomena, such as fluid dynamics, frictions, and static electricity might affect interactions even further as the scale of the operations becomes smaller. For this reason, additional consideration is required when designing systems in which real or virtual interactions take place on atypical scales, and thus it is important to understand human perception of physical phenomena on those atypical scales.
In this paper, we present our results on human perception of physics at abnormal scales. First, we focus on the mismatch between perceived realism and a physically accurate approximation of reality when interacting in a VE while scaled down by a factor of ten. Then, we present the results of a similar study where subjects were scaled up by a factor of ten and compare the results between the two studies. Based on previous research, we believe that humans generally perceive themselves at the correct scale when presented with mismatching size cues, as long as visual body cues are present (Langbehn et al. 2016b). We also believe humans are generally accustomed to rigid body dynamics taking place at a human scale and under normal gravity conditions (McIntyre et al. 2001). Therefore, we hypothesize that subjects neither accept the realistic approximation of physics at an abnormal scale, nor are they blind to changes in scale. Instead, when presented with two different scale-dependent rigid body dynamics, they are more likely to consider the physically inaccurate one to be the more perceptually realistic one. This paper is structured as follows. Section 2 reviews previous research related to this work. Section 3 presents the research method, experimental setup and the results of Study 1. Section 4 similarly reports Study 2 and also compares the results of both studies. Section 5 discusses our findings and Section 6 concludes the paper.
Perspective
The manipulation of a user's scale can be accomplished by changing various properties of the virtual character the user is controlling in the VE. Changing these properties has various subjective effects. When scaling a user's virtual size, one of the most obvious properties to change is the viewpoint height, as it defines the virtual camera origin in relation to the VE, simulating a change in physical size. Viewpoint height affects egocentric distance perception (Leyrer et al. 2011; Zhang and Furnas 2005). Interestingly, minor changes in viewpoint height might go unnoticed by users (Leyrer et al. 2011; Deng and Interrante 2019). Users' interaction capabilities, such as locomotion speed and interaction distance, can be changed according to scale, depending on the purpose of the application (Zhang and Furnas 2005). When using a head mounted display (HMD), the scaling of the user can also affect the virtual interpupillary distance (IPD), which is the distance between the two virtual cameras that are used to render the environment for the user. Changing this distance can affect the user's sense of their own size relative to the VE (Piumsomboon et al. 2018a; Kim and Interrante 2017).
Body Scaling
As already mentioned, body scaling refers to humans utilizing their own body as a primary scale cue; hence, the virtual representation of the user's body greatly affects their perception of sizes and distances in a VE (Ogawa et al. 2019; Ogawa et al. 2017). Linkenauger et al. (2013) studied the role of one's hand as a metric for size perception; they conducted an experiment in which they scaled the users' virtual hand and found that hand size correlated strongly with perceived object size. Ogawa et al. (2019) studied the effect of hand visual fidelity on object size perception and found that the visual realism of the hand affects the extent of the body-scaling effect. van der Hoort et al. (2011) embodied the entire user in a doll's body as well as in a giant's body using a stereoscopic video camera system and an HMD. They found that the embodiment significantly affected the users' distance and size perceptions, especially if the user experienced a strong body ownership illusion with the virtual body. Banakou et al. (2013) compared the effects of embodying the user as a child versus as a scaled-down adult. They found that the effect of altered size and distance perceptions was even larger when embodied as a child, and it also made the users associate themselves with childlike personality traits.
Environmental Cues
The environment, whether real or virtual, affects the perception of scale. There is evidence of humans generally underestimating egocentric distances in VEs, except when the VE is faithfully modeled to represent a real environment (Renner et al. 2013). However, if a familiar room is scaled slightly up or down, underestimations are reintroduced (Interrante et al. 2008). Familiar size cues also affect the sensitivity to eye height manipulations (Deng and Interrante 2019). Langbehn et al. (2016a) studied the effect of body and environment representations, as well as the scale of external avatars, on users' perception of dominant scale in mCVEs (the dominant scale referring to the "true" scale in an mCVE system where users can coexist at multiple scales). They found that humans tended to use their body as the primary metric for judging their own size, and the environment if a representation of one's own body was not available. In addition, an environment with familiar size cues helps in the determination of scale, whereas an abstract environment does not. They also found that the majority of subjects tended to estimate external avatars to be at the dominant scale instead of themselves.
Perception of Physics
Previous research suggests that humans have an internal physics model according to which they expect the world to function. Studies in micro- and nanoscale teleoperation have revealed that, due to changes in physics, interactions at these scales can become difficult for the human operator, but education inside virtual reality environments has been found to alleviate this drawback (Millet et al. 2008; Sitti 2007). McIntyre et al. (2001) reported a study in which astronauts' movements to catch a vertically moving ball were more inaccurate in zero gravity (0g) than in earth gravity (1g); this was interpreted as evidence that the central nervous system utilizes an internal model of gravity in addition to visual judgement of acceleration to synchronize movements. Senot et al. (2005) used VR to study human capabilities to intercept moving balls and found further evidence that subjects are better at intercepting objects accelerating according to normal gravity. Yao and Hayward (2006) created a haptic illusion of an object rolling or sliding inside a cavity and studied the subjects' capability of estimating the lengths of virtual tubes. According to their results, the subjects performed better than chance in estimating the tube lengths even with reduced sensory cues, indicating a capability to estimate object movements under the influence of gravity. Ullman et al. (2017) compared humans' internal physics model to a contemporary game engine. Their findings suggest that although humans are not entirely capable of accurately predicting object motions, they make noisy, "good enough" approximations of Newtonian physics, comparable to the simulations produced by the physics engines integrated in contemporary game engines.
McCoy and Ullman (2019) asked more than a thousand subjects to rate the 'effort' required by various imaginary magical spells violating physics and found the subjects' responses to be strikingly consistent. Despite describing completely imaginary phenomena, the subjects were very consistent in defining relative efforts, which seemed to depend not only on the type of the spell (such as levitate or conjure) but also on the size of the target. Although this finding is not directly related to the perception of physical phenomena, it again speaks for an intuitive internal physics model that is consistent across humans.
Presence and Plausibility
The concepts of immersion (Slater and Wilbur 1997), presence and plausibility (Slater 2009) are relevant for this study. In Slater's classical definition, the level of immersion refers to the level of technical fidelity of the VR system (i.e., resolution, field of view, vividness of graphics; Slater and Wilbur 1997). In addition, the realism of the user's response to the VR system depends on two orthogonal components, presence or place illusion (PI) and the plausibility illusion (PSI; Slater 2009). PI refers to the sensation of being in another place, whereas PSI refers to the perceived credibility of the virtual scenario or experience (the illusion of being there versus the realness of what is happening; Rovira et al. 2009). PSI depends on the extent to which the VE can produce authentic responses for user actions. Rovira et al. (2009) argued that for PSI to occur, participants must perceive themselves as beings that exist in the VE; user actions must elicit actions in the VE and the VE must acknowledge the user (for example, virtual characters react to the user). In addition, the VE should match the users' prior knowledge and expectations. Skarbez et al. (2017b) used the term coherence to refer to the aspects of a VE that contribute to PSI, such as virtual humans and the behavior of virtual objects. They argued that although immersion is a technical attribute that affects PI, coherence is a similar technical attribute affecting PSI.
In Study 1, we used the concept of PSI to study human perception of the behavior of physical objects while the subject was scaled down and interacting in a normal-sized environment. In Study 2, we repeated the same procedure for scaled-up subjects. However, we left virtual characters out of the scope of both studies. Instead, we were interested in how subjects would perceive coherence in terms of the behavior of virtual objects, when it would be reasonable to expect a mismatch between expectations and correctly simulated reality. In addition, we investigated whether the extent of PI affected PSI in this particular context.
Building on the terminology discussed by Skarbez et al. (2017a, 2020), the phenomenon studied in this paper could also be referred to as a coherence-fidelity mismatch: the logic expected by the users mismatches with a more faithful representation of reality. It is expected that coherence differs from reality in, for example, fantasy games or other entertainment applications, where PSI is maintained even when unearthly phenomena are taking place. However, we consider the mismatch studied here to be specifically interesting due to its implications for VR and telepresence applications taking place at abnormal scales.
Physics Conditions
The specific objective of Study 1 was to investigate the PSI of subjects in two different physics conditions. The purpose of both conditions was to visually represent a scaled-down subject in a normal-sized environment, and the physics simulations differed between the conditions as follows. In the true physics condition, the rigid body dynamics affected virtual objects approximately as would be accurate at that scale. In the movie physics condition (named after the physical behavior typically seen in Hollywood movies in scenes depicting scaled-down characters), the rigid body dynamics approximated those of normal human scale.
Our assumption was that the users would be able to distinguish between true physics and movie physics, and we predicted that subjects would be more likely to expect the movie physics condition and to feel it to be the more perceptually realistic representation. This would suggest a Plausibility Paradox: a mismatch between perceived realism and the correct approximation of realism.
Hypotheses
We hypothesized that in the true physics condition, the behavior of physical objects would feel incorrect to subjects despite their knowledge of being virtually shrunk down. More specifically, our hypotheses were as follows:

H1: For a scaled-down user, movie physics is more likely to feel realistic than true physics.

H2: For a scaled-down user, movie physics is more likely to match the user's expectations than true physics.
Virtual Environment
We designed a VE for the two physics conditions described above using Unreal Engine 4.22 (UE). In both conditions, the scaling operations took place over one order of magnitude, giving the impression of a doll-sized perspective. We did not use full body tracking or attempt to induce a strong body ownership illusion, so there was no visualization of any body parts in the VE other than the subject's hands. We used the default UE VR hand visualization for interaction and to present a medium-fidelity body size cue (Ogawa et al. 2019). There was no difference between the conditions regarding how the hands functioned or how the user was able to move.
To help in providing accurate size cues, we modeled the VE to resemble a location in the main corridor of the campus in which the study took place. The dimensions and materials of the VE were modeled after the real environment. In addition, we took measurements of various real-world objects, such as chairs, tables, and leaflets, which we modeled and scaled accordingly and placed in the VE as static objects.
The scaling of the user in the true physics condition was achieved by shrinking the user with UE's built-in World to Meters parameter, which automatically scales the player character's height, virtual IPD and interaction distance. The skeletal meshes representing the player character's virtual hands were scaled down manually. In the movie physics condition, the player character properties were kept at their defaults and the VE was scaled up instead. The purpose of this approach was to give the visual illusion of a scaled-down user, while retaining physics conditions that correspond to the normal human scale. The sizes and relative distances of scene objects were increased by a factor of ten. In addition, the properties of lights and reflection capture objects were adjusted so that the overall visual appearance of the two conditions was kept as similar as possible.
Interaction Task
The interaction task consisted of the manipulation of virtual soda can pull tabs approximately 3 cm in length and 1.9 cm in width (as presented in Fig. 1). The tabs were chosen for the experiment both for their small, consistent mass as well as for being a reasonably authentic object that could be seen in the simulated VE. We considered a lightweight object to be most practical for simulating throwing in VR so that we would not have to simulate the decrease in hand acceleration due to increased inertia at the end of the arm or limitations due to arm strength (Cross 2004). In both conditions, the subjects would try dropping and throwing five tabs. Picking up and throwing the tabs took place utilizing the default mechanism in UE, similar to contemporary VR applications in general. The subjects simulated grabbing objects by squeezing the trigger of the motion controller and dropping them by releasing the trigger. Virtual throwing took place by swinging the motion controller and then releasing the trigger, and the object thrown retained its velocity at the moment of release, simulating throwing.
In the true physics condition, the tabs dropped quickly, as if they had been dropped from a height of 15-20 cm (simulated fall time approximately 0.175 s from 20 cm in UE). In addition, the throwing distances appeared short because of the limited velocity that can be actuated when real hand movements are scaled down by an order of magnitude. The movie physics condition, on the other hand, simulated the tabs as falling more slowly, similarly to an object dropped from human height (simulated fall time approximately 0.6375 s from 2 m in UE). In addition, the throwing distances were much larger in the movie physics condition due to the larger velocity that the subjects were able to actuate on the tabs by virtual throwing.
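The two fall times quoted here follow from elementary free-fall kinematics, t = sqrt(2h/g). The sketch below is an analytic cross-check, not the UE simulation itself, so its values differ slightly from the engine-measured 0.175 s:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def fall_time(height_m: float) -> float:
    """Time in seconds for an object dropped from rest to fall height_m
    meters, ignoring drag (drag was also disabled in the experiment)."""
    return math.sqrt(2.0 * height_m / G)

# True physics: a tab dropped from ~20 cm, the scaled-down reach height
print(round(fall_time(0.20), 3))  # 0.202

# Movie physics: behaves like a drop from ~2 m, normal human scale
print(round(fall_time(2.0), 3))   # 0.639
```

The two conditions differ by exactly a factor of sqrt(10) ≈ 3.16 in fall time, which is what makes the true physics condition read as "too fast" to a scaled-down observer.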
Due to the simulated size, the tabs also differed between conditions in terms of their bounciness, even though the physics simulation properties, such as restitution, were unchanged. In the movie physics condition, the tabs bounced visibly off surfaces, or jittered slightly after being dropped. In the true physics condition, however, there was little to no visible bounciness.
The tabs were placed on top of a large book so that the subjects would not have to pick them up from the floor. The book also provided an additional size cue. We gave the book a neutral, non-distracting appearance and a generic title so that it was recognizable as a book but would not otherwise draw too much attention. A Coca-Cola can was placed as a familiar size cue on the left side of the book. Fig. 1 shows the book and the tabs as seen at the beginning of the simulation. Fig. 2 A and B show the scene as seen at the beginning of the simulation when looking forward (A) and left (B).
The virtual mass of the tabs was set to 1 g in both conditions. Default physics settings in UE were used, with the exception of enabling physics sub-stepping (physics engine updates between frames) for additional physics accuracy. Drag by air resistance was set to zero in both conditions. The simulation itself ran at a stable 80 FPS, which is the maximum frame rate of the Oculus Rift S.
Participants
The experiment was carried out as a within-subjects experiment, in which 44 subjects (23 females and 21 males) performed both conditions during one session. Two participants were excluded due to issues with the functionality of the VR equipment or due to vision impairments. The order of the conditions was counterbalanced so that there was an equal number of male and female participants starting with each condition. The subjects' ages ranged from 19 to 66, mean and median ages being 30 and 26, respectively. The standard deviation for the ages was 10.4. The study was conducted either in English (12 females and 7 males) or in Finnish (11 females and 14 males), depending on the preference of the subject. Each participant was rewarded with a gift voucher of two euros.
Experimental Procedure
The experiment was set in a laboratory in which the subjects used the Oculus Rift S system with the provided Oculus Touch controllers. The Rift S has a variable IPD software setting, so the IPD was set to 62.5 mm for females and 64.5 mm for males, the closest approximations available to the adult averages reported by Dodgson (2004). At the beginning of a session, the subject read through a written Information for Subjects document and signed an informed consent sheet. The subject was then instructed on using the VR hardware, specifically how to use the Rift S Touch motion controllers for picking up and throwing objects. Next, the subject was instructed to stand on a particular starting spot in the laboratory marked with masking tape. When the subject was wearing the HMD and the motion controllers comfortably, an instruction script was read in English or Finnish. The script stated that the subjects were in the university central hallway, shrunk down 10-fold to the size of a doll, and were to drop and throw the tabs placed on top of a book in front of them.
Active noise-cancelling headphones were placed on the subject to block out any potential external noise from other rooms in the building, and then the experiment began. After performing both conditions, the headphones and the VR hardware were removed and the subject was asked to respond to a post-experiment questionnaire as well as a background questionnaire on a different laptop. The subject was asked for any additional comments or questions, and if he/she could be contacted for future studies, and then given her/his gift voucher. The average duration of the session was 20 minutes per subject.
Questionnaires
We collected plausibility-related data using two forced-choice questions (main questions 1 and 2), two open-ended questions (O1 and O2) and a 7-point Likert scale questionnaire regarding the behavior of the tabs (L1-L5). In addition, the subjects filled out the extended version of the Slater-Usoh-Steed (SUS) Presence questionnaire (Slater et al. 1994; Usoh et al. 2000), as well as a background information questionnaire. The main questions 1 and 2 were as follows:

1. Thinking back how the pull tabs were behaving in the experiment, which felt more realistic (like what would happen in the real world if you had been shrunk down), the first or the second time?

2. Thinking back how the pull tabs were behaving in the experiment, which matched your expectations (similar to what would happen in the real world if you had been shrunk down), the first or the second time?
The main questions were coupled with open-ended questions (O1 and O2), that were simply stated as "Why?". The purpose of the open-ended questions was to evaluate to what extent the subjects' responses were related to the physics or other reasons.
The forced-choice and open-ended questions were followed by a 7-point Likert scale questionnaire asking subjects to judge how they perceived various aspects of the behavior of the tabs. Each question was stated twice in the questionnaire, referring to the first and the second time the subject interacted with the tabs (either using the true physics and then the movie physics, or vice versa). The first three questions (L1-L3) were bipolar, whereas the last two (L4, L5) were unipolar. The Likert questions L1-L5 and their associated scales were as follows:

L1: The falling speed of pull tabs (too slow, too fast)
L2: The speed of pull tabs when thrown (too slow, too fast)
L3: The distance of pull tabs when thrown (too close, too far)
L4: The way the pull tabs were bouncing when thrown (incorrect, correct)
L5: The impact of gravity on the pull tabs (incorrect, correct)

Similarly to Study 1, we gathered qualitative data, subject background data as well as questionnaire data to better understand the responses given by the subjects.
All questions were presented in either English or Finnish, depending on which was chosen as the preferred language by the subject when signing up for the experiment.
Hypotheses
According to the responses to the main questions, the majority of the subjects considered the movie physics condition to be the more realistic one. Out of 44 subjects, 32 (73%) responded to the first question that they considered the movie physics condition more realistic, which confirms H1. For the second question, 40 out of 44 subjects (91%) responded that the movie physics matched their expectations better, which confirms H2. Furthermore, we analyzed the frequencies of responses to questions 1 and 2 with a binomial test and found the corresponding two-tailed p values to be p = 0.004 and p = 1.7051 × 10^-8, respectively. From this we can conclude that it is unlikely that the responses to questions 1 and 2 were due to chance. In addition, this indicates that the subjects were able to distinguish between the two physics conditions and more consistently selected the movie physics condition, which was the inaccurate one.
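The reported binomial p values can be reproduced exactly with a short standard-library sketch; under the symmetric null hypothesis p = 0.5, the two-tailed p value is simply twice the upper-tail probability:

```python
from math import comb

def binom_two_tailed(successes: int, n: int) -> float:
    """Exact two-tailed binomial test against the null p = 0.5.
    The null is symmetric, so fold onto the upper tail and double it."""
    k = max(successes, n - successes)
    upper_tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * upper_tail)

print(binom_two_tailed(32, 44))  # ~0.0037, reported as p = 0.004
print(binom_two_tailed(40, 44))  # ~1.7051e-08
```

In practice the same result is obtained from scipy.stats.binomtest; the hand-rolled version is shown only to make the computation transparent.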
Out of twelve respondents who considered true physics more realistic, nine responded that the movie physics matched their expectations more. Only one subject considered the movie physics more realistic while simultaneously stating that the true physics better matched her/his expectations.
Understanding Contributing Factors
We gathered supplementary data to further understand the results. These data include responses to open-ended questions O1 and O2, Likert-scale questions L1-L5, as well as subject background and self-reported level of presence.
The purpose of the open-ended questions was to evaluate to what extent the subjects' responses to the main questions 1 and 2 were related to the perceived realism of the physics. The responses consisted of one-sentence statements typed by the subjects. Thematic analysis with an inductive approach (e.g., Patton (2005)) was carried out independently by two researchers and used to identify codes in the response data. A summary of the codes can be viewed in Fig. 3 A and B. Examples of responses in O1 can be seen in Table 1, whereas examples of responses in O2 can be seen in Table 2.
In short, the responses to questions O1 and O2 indicate that the majority of users (38 out of 44) made their choices primarily according to reasons related to the behavior of the physically simulated tabs. Other primary reasons were related to general interaction and becoming accustomed to the controllers. Few references were made to visual details (appearance of tabs and colors) as secondary reasons or general remarks.

Table 1. Example responses to O1.

Codes | Preference | Response
Gravity, natural | Movie physics | "Gravity felt more natural"
Gravity, natural | Movie physics | "At the second time, the objects fell to the ground faster, which felt unreal"
Gravity | True physics | "I think when the height of the object is not that high, it should reach the ground faster."
Physics, visual | Movie physics | "movement in space felt more realistic, but the objects lacked 3D, ring pulls are not paper thin"
Ability, distance traveled, physics | Movie physics | "because I was more comfortable with the controllers after using them for some time, and i knew i could do more things now like throwing more far away after some time, and also they were moving more smoothly"
Bounciness, throwing, distance traveled, ability | Movie physics | "I am not sure but I think the second time they still moved a bit after I dropped them to the floor, before being completely still. I think I also managed to throw one of the pull tabs the second time, which felt more realistic than them dropping very quickly just right in front of me after I tried to throw them (but this could also just have been my inability to throw the first time)."
Weight | Movie physics | "Second time they felt too heavy"
Weight, strength, size | True physics | "Pull tabs are not heavy and when I'm small, I probably would not have the strength to throw them afar"
Ability | Movie physics | "I was able to act more normal in the second round. I had worked out the mechanics of the VR better and spent less time attempting to make the task work"
Likert Data
Inspecting the Likert responses for questions L1-L5, we found that the movie physics condition was closer to perceived realism (median responses closer to 4 in questions 1 and 3 and closer to 7 in questions 4 and 5) in all questions except L2, in which the median response was the same for both conditions. We analyzed the responses to questions L1-L5 with the Wilcoxon Signed Rank test and found that the responses were significantly different (p < 0.05) for all questions except L2 (p = 0.845). This provides additional confirmation that the subjects perceived the movie physics condition as more realistic due to differences in the behavior of the physically simulated tabs. A summary of responses, including median, mode and standard deviation for questions L1-L5, can be seen in Table 6. In addition, box plots visualizing the medians, interquartile ranges as well as minimum and maximum responses can be seen in Fig. 4 A and B.

Table 2. Examples of O2 responses (justification to main question 2) in Study 1

Codes | Preference | Response
(missing) | (missing) | "I didn't think at first (until I saw the previous question) shrinking down would also affect the time it takes for the objects to reach the ground. The physics first time behaved just like in normal life."
Size | Movie physics (different from O1) | "Even though I knew I was shrunk down, I still could not think that way when doing the experiment"
Natural, physics | Movie physics | "The behavior seemed more natural, although probably the laws of the physics tell otherwise"
Ability, throwing | Movie physics (different from O1) | "I thought throwing the pull tabs would be relatively easy, like in the second time."
Weight | Movie physics | "Intuitively I figured things would be light."
Size, novelty | True physics | "I felt that I was really small in that world for the first time."
Natural | Movie physics | "First time. Felt somehow more natural. They didn't have much difference, though."
Effect of Background and SUS scores
Furthermore, we used a binary logistic regression to analyze the effects of subject background and presence on their responses to main question 1. We used educational background, gender, age, VR experience, gaming experience, SUS average and SUS score as independent variables and the response to main question 1 as the dependent variable.
For analysis purposes, we transformed the Background Questionnaire responses to educational background into a binary variable consisting of roughly equal-sized groups of natural sciences and engineering (25 subjects) and social sciences (19 subjects). In addition, the open responses to VR experience and gaming experience were transformed into respective ordinal variables ranging from 0 (no experience) to 4 (plenty of experience). When interpreting the gaming experience responses, additional emphasis was given to recent experience as well as experience with PC- and console-based 3D gaming (such as first-person shooters and simulators) due to the tendency of such games to contain game physics simulations similar to those used in this experiment. The responses to SUS scores were transformed into two ordinal variables consisting of the average of responses as well as the computed SUS score. The logistic regression model was unable to predict the response using the independent variables. The model explained 17% of the variance (Nagelkerke's R²) in perceived realism. Although the overall classification rate was 72.7%, only 16.7% (two responses) of the true physics responses were correctly classified. None of the independent variables had a significant effect on the prediction of the response (p = 0.184 - 0.858). According to this analysis, the perception of realism was not significantly affected by the background, education or gaming experience of our subjects. The level of presence according to self-reported SUS score did not have any effect either.
Perception of Mass and Strength
Although we never queried subjects directly regarding the physical properties of the tabs themselves, several subjects commented on the weight of the tabs or their own strength when interacting with them. Five of the subjects who responded in English commented on the perceived heaviness of the tabs (see Table 1). It is interesting to consider these spontaneous responses regarding differences in the weight of the tabs given that there was no change in the controllers that the subjects used for each condition. This could be an indication of a pseudohaptic effect (Lécuyer 2009); for example, manipulating the control-to-display ratio of the visual feedback when lifting an object can give the user an illusion of increased weight (Samad et al. 2019). However, it is possible that the subjects were simply referring to the visible trajectories and falling speed of objects (as in the tabs seemed heavier instead of the tabs feeling heavier). Several of the responses in Finnish specifically contemplated the assumed weight of the tabs with regard to how much more power they would have needed to use to throw the tabs given their reduction in size. To investigate these findings further, we added additional pseudohaptic-related questions in Study 2.
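The control-to-display ratio manipulation mentioned above can be sketched minimally. This is a hypothetical illustration of the general technique (not the apparatus used in either study): rendering a lifted object with less vertical displacement than the real hand produces can induce an illusion of increased weight.

```python
# Hypothetical sketch of a pseudohaptic weight manipulation via the
# control-to-display (C/D) ratio. All names and values are illustrative.
def displayed_lift(real_hand_dy: float, cd_ratio: float) -> float:
    """Map real vertical hand motion to the displayed object motion.

    cd_ratio < 1.0 -> the object lags behind the hand -> illusion of
    increased weight; cd_ratio > 1.0 -> illusion of lightness.
    """
    return real_hand_dy * cd_ratio

# A 10 cm real lift rendered at C/D ratio 0.7 shows only a ~7 cm rise.
print(round(displayed_lift(0.10, 0.7), 3))
```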
STUDY 2

Study Design
In Study 2, we wanted to investigate the perception of rigid body dynamics while the user was enlarged by a factor of ten. We followed a methodology similar to Study 1 so that we could easily compare subjects' perceptions in small and large scales. We introduced minor methodological changes described below.
Hypotheses
Our hypotheses for Study 2 were similar to those of Study 1.
H3: For a scaled-up user, movie physics is more likely to feel realistic than true physics.
H4: For a scaled-up user, movie physics is more likely to match a user's expectations than true physics.
Virtual Environment
Study 2 portrayed the subject as a giant, 10 times larger than a regular human. Similarly to Study 1, the VE was also based on a real-world environment we expected to be familiar to most of our subjects. More specifically, the VE depicted a marketplace and its surroundings located in the center of the City of Oulu, Finland. The environment used 3D assets from the "Virtual Oulu" model described in Alatalo et al. (2016). The assets were imported into a UE 4.24 scene. Some of the original materials were remade to follow a contemporary physically-based rendering (PBR) workflow for improved aesthetics. To enrich the model with additional size cues, the marketplace area of the model was augmented with additional detail such as street furniture, trees and foliage that were placed using Google Maps photographs and satellite photos as reference. GIS data from the City of Oulu were used to generate non-textured faraway buildings seen in the background of the scene. Also, generic textured building blocks were used in some areas to generate buildings not present in the original Virtual Oulu model but close enough to the viewer that untextured models were not feasible. Although our aim was to make the scene appear realistic for the subjects, we took minor liberties in the placement of certain scene objects to make the scene more appropriate for the experiment. Namely, the immediate marketplace surroundings were left relatively empty to prevent the subjects from hitting random objects and generating unwanted plausibility noise. In addition, the positions of trees next to the shoreline were adjusted so that the logs would have a free passage to the water (see Fig. 5 A).
In addition to Virtual Oulu assets, GIS data and self-modeled assets, several commercial packages from the Unreal Marketplace were used in the VE. Animated seagulls and pigeons from the Birds package were scaled to correct size (approximated wingspans 70 cm and 140 cm, respectively) and deployed in the scene to provide animated size cues. The commercial packages "Nordic Harbour", "Country Side", "Vehicle Variety Pack", "Modern City Downtown", "Sky Pack" as well as "Trucks and Trailers" were also utilized for foliage, vehicles, street furniture and other minor details, such as traffic signs. The water shader and buoyancy for the logs were generated using the Waterline Pro package. Screenshots of the scene can be seen in Fig. 5 A and B.
Similarly to Study 1, the scale-changing effect was achieved by scaling the world-to-meters parameter of UE and player character properties, this time making the user appear 10 times larger instead of smaller. Similarly to Study 1, we defined the rigid body dynamics as simulated by the game engine to act as the true physics condition. For the movie physics condition, we upscaled the default gravity Z and bounce threshold (as instructed by UE when scaling gravity) properties by 10 to generate conditions similar to human scale. This approach was taken to avoid generating two different-sized versions of the level, so that we could eliminate visual differences. These approaches resulted in object free-fall times of 1.97 seconds and 0.68 seconds when an object is dropped from a height of 18 m in the true physics and movie physics conditions, respectively.
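The reported free-fall times can be sanity-checked against ideal drag-free kinematics, t = sqrt(2h/g). The analytic values come out slightly shorter than the engine-reported 1.97 s and 0.68 s; our assumption is that the gap stems from fixed-timestep integration and default damping in the engine.

```python
# Analytic free-fall times for the two conditions, assuming ideal
# drag-free kinematics: t = sqrt(2h / g).
import math

def free_fall_time(height_m: float, gravity: float) -> float:
    return math.sqrt(2.0 * height_m / gravity)

h = 18.0               # drop height in metres
g_true = 9.81          # true physics: default gravity
g_movie = 9.81 * 10.0  # movie physics: gravity upscaled by 10

print(round(free_fall_time(h, g_true), 2))   # analytic ~1.92 s (engine: 1.97 s)
print(round(free_fall_time(h, g_movie), 2))  # analytic ~0.61 s (engine: 0.68 s)
```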
Interaction Task
The interaction task in Study 2 resembled the task in Study 1, consisting of dropping and throwing objects. Since the subject was a giant instead of doll-sized, the objects used in the interaction task were larger as well. After considering a handful of alternatives, we chose wooden logs as suitable objects for interaction, since they are familiar-sized objects for most locals and frequently seen around town after being culled from local forests. The logs were approximately 2.9 m in length and 26.7 cm in diameter, matching the dimensions and mass of commercial pine logs. We placed the logs on top of a container so that the subjects would not need to reach all the way down to the ground to grab them. The container with the logs can be seen in Fig. 6 A. In Study 1, the subjects were allowed to drop and throw the pull tabs in any way they wished. However, in Study 2 we instructed the subjects to drop exactly three logs to their right and throw two logs into the sea visible in front of them (see Fig. 5 A). We also placed a "Drop Here" text on the ground to depict where exactly the logs should be dropped (see Fig. 6 B). There was a particle splashing effect when the logs hit the water surface, as well as a buoyancy effect. However, these effects were very subtle due to the distance to the water surface.
The specific instructions for interaction were included because in Study 1 we received feedback indicating that more specific instructions would have helped in observing the motions of the pull tabs. In addition, since the subjects were interacting in a large-scale urban environment, there were countless opportunities for "plausibility noise" which we wanted to avoid (such as subjects expecting logs to realistically break windows, dent cars, knock over tables, and so on). By giving specific instructions, we aimed to ensure that the subjects' responses were based on the motions of the logs only.
Participants
Similarly to Study 1, the experiment was carried out as a gender-counterbalanced within-subject experiment, this time with 40 participants (20 males and 20 females). Three participants were excluded due to failure to follow instructions or not giving their consent for data use. We did not allow people who had already participated in Study 1 to take part, to keep subjects naive to the purpose of the experiment. The subjects' ages ranged from 20 to 57, with mean and median ages being 26 and 25, respectively. The standard deviation of the ages was 6.0. Each participant was remunerated with a movie ticket worth 10 euros.
Experimental Procedure
Apart from COVID-19 related safety guidelines discussed below, the procedure was largely similar to Study 1. In Study 2, however, the subjects did not wear noise-cancelling headphones as there was no sound in the VE or from students in the laboratory hallways, and thus they were unnecessary.
In Study 1, a researcher checked the HMD before each participant to ensure that the Oculus main menu or other anomalies were not present when starting the experimental apparatus. In Study 2, however, we asked the subject to report what he/she saw at the beginning of the experiment, since we could not be in close proximity to the subjects to check ourselves. In addition to checking for anomalies, this also allowed us to check whether the subject recognized the VE as the Oulu marketplace. After this, an instructions script was read to the subject. The script confirmed that the subjects were at the Oulu marketplace, enlarged 10-fold to the size of a giant, and that they were to throw and drop the logs placed on a container in front of them. At the end of the experiment, the subject was given her/his movie ticket.
The experiment in Study 2 was conducted during the COVID-19 pandemic, hence additional safety precautions were taken. At the time of the experiment, the regional state of the epidemic was at the so-called "baseline level". Due to the relatively calm local state of the epidemic, it was possible to conduct temporary on-campus work as long as university safety guidelines were followed.
The research space allowed for a maximum of two researchers who kept within safety distance to the participant. The researchers were also separated from the subject with a see-through barrier. The researchers wore safety masks, which were also offered for subjects. The participants were instructed to operate the VR equipment by themselves during the experiment and a researcher intervened only if necessary (for example, in cases of Oculus room setup resetting).
Virtual reality equipment was sanitized between each participant using a "Cleanbox" device. In addition, all equipment and surfaces were wiped with alcohol disinfectants. Researchers wore rubber gloves during the cleaning process and the experiments. The default face padding of the Oculus Rift S was covered with a silicone hygiene cover for easier cleaning. In addition, the subjects were offered optional disposable paper face hygiene covers. The research space was air-conditioned and ventilated between subjects. Participants were also asked to use the hand disinfectant available in the research space. Participants were asked to join the experiment only when feeling completely healthy. The research space can be seen in Fig. 7.
Questionnaires
The questionnaires in Study 2 were kept mostly similar to Study 1, consisting of two forced-choice questions (main questions 1 and 2), two open-ended questions (O1 and O2), a 7-point Likert questionnaire concerning log physics (L1-L5), and the extended SUS questionnaire (Slater et al. 1994; Usoh et al. 2000).
In addition, we added extra 7-point Likert-scale questions L6-L8 concerning the experience of being large and pseudohaptics.
Main questions 1 and 2 were identical to Study 1, except replacing "pull tabs" with "logs". Similar to Study 1, both main questions were followed by open-ended questions O1 and O2 stating "Why?".
Questions L1, L3, L4 and L5 were kept similar to Study 1, so that only "pull tabs" were changed into "logs." Since the wording of L2 in Study 1 was found to be problematic, we paraphrased it from "the speed of pull tabs when thrown (slow - fast)" into "time of flight (slow - fast)".
The new questions L6-L8 assessed the feeling of size and the sensation of weight of the logs in both conditions. The questions were phrased as follows.
L6 During the experiences, did you feel more like a giant in a normal-sized city, or more like a normal-sized person in a miniature city? (normal-sized person, giant)

L7 When picking up or holding the logs, did you feel a sensation of actual weight? (not at all, very much so)

L8 The logs felt... (light, heavy)

Similarly to Study 1, all questions were presented either in English or Finnish, depending on the preference of the subject.
Main Questions 1 and 2
Again, the majority of the subjects considered the movie physics condition the realistic one, but the expectations of the subjects were more mixed. For main question 1, 28 out of 40 (70%) subjects chose movie physics. As for main question 2, 25 out of 40 subjects (63%) considered that movie physics matched their expectations better. Following the procedure in Study 1, we analyzed the frequencies of responses with a binomial test and found their corresponding two-tailed p-values as p = 0.017 and p = 0.154. This indicates that the responses to main question 1 were significantly biased towards movie physics, whereas responses to main question 2 were closer to a random distribution. These results confirm H3, but not H4. Most of the subjects clearly considered movie physics the more realistic condition, but their expectations were more evenly split between true physics and movie physics. Almost every subject mentioned recognizing the scene as the Oulu downtown marketplace.
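The two binomial tests above can be reproduced directly from the reported counts; this sketch uses SciPy's `binomtest` with a two-tailed 50/50 null.

```python
# Reproducing the Study 2 forced-choice binomial tests:
# 28/40 chose movie physics for main question 1, 25/40 for main question 2.
from scipy.stats import binomtest

p_q1 = binomtest(28, n=40, p=0.5).pvalue  # main question 1
p_q2 = binomtest(25, n=40, p=0.5).pvalue  # main question 2
print(round(p_q1, 3), round(p_q2, 3))  # -> 0.017 0.154
```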
Open-ended questions O1 and O2
Thematic analysis using the inductive approach (Patton 2005) was used to analyze the open-ended questions. The responses were first coded by two independent researchers, after which the final codes were agreed upon. One subject did not respond to the open-ended questions. A summary of the codes and their frequencies for each question can be seen in Fig. 8.
The responses indicate that the majority of the subjects considered the motions of the logs their primary reason for preference; the logs were either moving at a speed they did not feel was realistic, or were under the effect of abnormal gravity. This was especially true for the subjects who perceived movie physics as more realistic. For the subjects choosing true physics, the ability to throw logs especially far came up relatively more often than for movie physics respondents. This could mean that these subjects considered a giant capable of throwing the logs farther due to increased strength. However, similarly to Study 1, we did not simulate muscle strength per se; the ability to throw the logs far was due to the increased velocity the subjects were able to impart by being scaled 10-fold. There was only one response to O1 in which strength was specifically mentioned, whereas for O2, strength came up in four responses. Examples of responses for O1 can be seen in Table 4. For O2, examples can be seen in Table 5. Distributions of qualitative codes in Study 2 can be seen in Fig. 8.
Likert Data
Similarly to Study 1, we analyzed questions L1-L5 (falling speed, time of flight, distance when thrown, bounciness and gravity) to get additional insight into the subjects' perceptions of the motions of the logs. In all questions, the respondents favored movie physics, with the median and mode closer to 4 in L1-L3 and closer to 7 in L4 and L5. A Wilcoxon Signed Rank test showed that the responses were significantly different (p < 0.05) between the conditions for all of the questions except L4 (p = 0.4). A summary of the responses can be seen in Table 9. Box plots visualizing medians, interquartile ranges, as well as the minimum and maximum responses can be seen in Fig. 9.
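A minimal sketch of the paired Wilcoxon signed-rank comparison applied to each Likert item is below; the ratings are hypothetical stand-ins, not the collected data.

```python
# Illustrative paired Wilcoxon signed-rank test on one Likert item.
# The two lists are hypothetical per-subject ratings of the same item
# under the two conditions (within-subject pairing).
from scipy.stats import wilcoxon

true_physics_ratings  = [2, 3, 1, 2, 4, 2, 3, 1, 2, 3]  # hypothetical
movie_physics_ratings = [6, 5, 6, 7, 5, 6, 4, 6, 7, 5]  # same subjects

stat, p = wilcoxon(true_physics_ratings, movie_physics_ratings)
print(p < 0.05)  # a shift this consistent comes out significant
```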
Presence Data
Similarly to Study 1, we acquired self-reported presence data according to the SUS questionnaire. Thirty out of 40 subjects (75%) had an SUS score higher than 0, indicating at least some level of presence. The median SUS score was 1. Again, we divided the subjects into groups of high presence (SUS score > 2) and low presence (SUS score < 3). The high presence group comprised 16 out of 40 subjects (40%), whereas the low presence group consisted of 24 subjects (60%). Seven of the 16 subjects (44%) in the high presence group chose true physics, whereas only five of the 24 subjects (21%) in the low presence group did the same. However, according to Fisher's exact test, this difference was not significant (p > 0.05), which means we can assume that belonging to either the high or low presence group did not affect the response to main question 1.

Table 4. Examples of O1 responses (justification to main question 1) in Study 2

Codes | Preference | Response
Gravity | Movie physics | "me being big should not affect the gravity of other objects"
Speed of motion | True physics | "Not really sure, but when I picture a giant it feels like that way. Like things going on slow motion."
Speed of motion, gravity | Movie physics | "It happened with normal speed/gravity"
Physics, gravity | Movie physics | "Acceleration felt somewhat realistic, the latter felt like surface of the moon"
Throwing distance | True physics | "when using a strong force, the logs was thrown far away, matching my expectation"
Throwing distance | True physics | "If I were a giant, the logs would fly a little farther, which was highlighted in the second time"
Novelty, physics, bounciness, interaction | True physics | "Everything felt new, not only that you were in VR in the first place, but also the point of view, which was of course higher than normal. It also felt like, in terms of physics, the logs were behaving more realistically in the first time, because I was handling them more carefully. On the second time I just dropped the logs from high up, and they were bouncing any which way"
Gravity, weight, naturalness | Movie physics | "The gravity and motion of the logs felt more natural. In the other one, they were floating like feathers in space and were clearly lighter than real"
Gravity, speed of motion, strength | Movie physics | "According to my own assumptions, objects would feel like they were moving more slowly in relation to myself if I were a giant, but the first time felt more like I was underwater. In my opinion, the second time was more real, even if it was a little fast-ish. I did feel as if I was stronger in the second time, though."

Table 5. Examples of O2 responses (justification to main question 2) in Study 2

Codes | Preference | Response
(missing) | (missing) | "This bias might be partly because of movies, but also in many real life videos, big objects fall "more slowly" when seen from afar. In the first version the logs were much more slower, which matched my expectations more."
Naturalness, speed of motion | Movie physics (different from O1) | "Somehow faster motions felt more natural"
Naturalness | Movie physics | "Logs felt more credible in the second experiment"
Physics | True physics | "The second time matched my expectations more since the motions of the logs were more realistic"
Speed of motion | Movie physics | "Because the logs acted as they should. A log in the real would not fall slowly."
Interaction, bounciness, throwing | Movie physics | "I find it easy to grab and on throwing it was more realistic. When I drop the log it bounced back as well, making it more realistic. In second, I was also able to see the log clearly when it was in air during the throw."
Speed of motion | Movie physics | "Still the first one. I can not realistically think the world working in slow motion."
Speed of motion | True physics | "because of the speed when I drop the logs"
Strength, throwing distance | True physics | "As a giant I would expect to be stronger, therefore being able to throw the logs further."
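The Fisher's exact test on the presence-by-preference counts reported above can be reproduced directly from the 2x2 table:

```python
# Fisher's exact test on Study 2 presence group vs. physics preference:
# high presence: 7 true physics / 9 movie physics,
# low presence:  5 true physics / 19 movie physics.
from scipy.stats import fisher_exact

table = [[7, 9],    # high presence group
         [5, 19]]   # low presence group
odds_ratio, p = fisher_exact(table)
print(p > 0.05)  # -> True: the difference is not significant
```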
Own Size and Pseudohaptics Sensations
We added three new questions L6-L8 to investigate the subjects' perception of their own size, as well as pseudohaptic sensations. It appears that although the subjects generally considered movie physics the more realistic condition, true physics more successfully conveyed the sensation of being large. The median and mode responses to L6, feeling of own size, were 6 and 6, respectively, for the true physics condition. As for movie physics, these responses were 4 and 2, respectively. This is somewhat supported by the responses to open-ended questions O1 and O2 (for example, subjects considering true physics more natural, see Table 5). We found these differences to be statistically significant using the Wilcoxon Signed-Rank test (p = 0.002).
When inspecting the open-ended data from Study 1, we found a number of subjects mentioning the pull tabs feeling heavier in one condition or another. To investigate this further, we added new questions L7 and L8 to inquire about pseudohaptic sensations. However, these sensations were reported as very low in general. For L7, sense of actual weight, the median and mode responses were 1.5 and 1 for true physics and 2 and 1 for movie physics, respectively. As for L8, logs felt light/heavy, the median and mode were 2 and 1 for true physics and 3 and 1 for movie physics. Out of 40 subjects, 4 and 6 subjects reported pseudohaptic sensations stronger than 4 out of 7 in the true physics and movie physics conditions, respectively. However, Study 2 might have been less appropriate for studying the sensations of weight reported by subjects in Study 1, since in Study 2 the physics ranged from normal to perceptually slower, instead of vice versa. Again, using the Wilcoxon Signed-Rank test, we found the differences for L7 (physical sensation of weight) not significant (p > 0.05). However, there was a statistically significant difference (p = 0.026) in responses for L8 (the logs were light/heavy). These results could be interpreted to mean that the subjects generally considered the simulated logs heavier in the true physics condition, but did not notice any differences regarding pseudohaptic sensations.
Effect of Background, Presence and Perception of Own Size
Similarly to Study 1, we analyzed the effects of subject background and self-reported presence on the subjects' physics preference. This time, we also added responses to L6 as variables own size true and own size movie to estimate whether subjects' perception of their own size (in essence, the extent of feeling like a giant) affected responses. Using the same categories and the same coding mechanisms as in Study 1 (this time with 22 subjects categorized as having a background in Natural Sciences and Engineering and 17 subjects with a Social Science background), we performed a binary logistic regression analysis. The model explained 38% of the variance (Nagelkerke's R²) with 79.5% overall accuracy. Although we found, similarly to Study 1, that background, gaming or VR experience and self-reported presence did not affect responses, the variable own size movie had a significant effect (p = 0.041). This finding indicates that true physics respondents felt smaller specifically during the movie physics condition. However, since the distributions of both true physics and movie physics responses were quite large, but the number of true physics respondents was rather small, we would hesitate to put too much confidence in this implication until further evidence is found.
Comparing Studies 1 and 2
The percentage of subjects who chose movie physics for main question 1 was 73% in Study 1 and 70% in Study 2. As for main question 2, these percentages were 91% for Study 1 and 63% for Study 2. We compared the results for main questions 1 and 2 from Studies 1 and 2 with Fisher's exact test. We found that responses to main question 1 were statistically similar (p > 0.05), whereas responses to main question 2 were different (p = 0.003). This suggests that similar majorities considered movie physics more realistic in both studies, whereas the proportions differed considerably for main question 2. Although almost all subjects considered movie physics as better matching their expectations in Study 1, only a statistically insignificant majority did so in Study 2.
The results for Likert questions L1, L3, and L5 were very similar in Studies 1 and 2, consistently favoring movie physics. Responses to L2 were very mixed in Study 1, which we attribute to bad wording of the question. In Study 2, the responses were more consistent and clearly favored movie physics. In Study 2, the responses for L4 were mixed while in Study 1 movie physics was preferred.
In both studies, we examined the effect of various contributing factors in an effort to gather additional insights for interpreting the results. In Study 1, we used background data as well as self-reported presence as predictors to main question 1. In Study 2, we also added two new variables own size true and own size movie. In Study 1, however, we did not find any significant predictors. In Study 2, a new variable, own size movie came out as significant.
If we compare the presence scores to those of Study 1, we can see that subjects in Study 2 experienced somewhat less presence. In Study 1, some presence (SUS score > 0) was experienced by 82% of the participants with median SUS score being 3. In addition, the proportion of high and low presence groups were almost equal in Study 1 (53% experiencing high sense of presence). In Study 2, 75% responded with SUS score > 0 while the median SUS score was 1. The proportions of high and low presence groups were 40% and 60% respectively. However, despite these differences, the SUS scores for Study 1 (44 subjects) and Study 2 (40 subjects) were not statistically different (Mann-Whitney U test p > 0.05). Also, presence did not have a predictive capability on the preference of realism in either study.
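The Mann-Whitney U comparison of SUS scores between the two independent samples can be sketched as follows; the score lists are hypothetical stand-ins, not the collected data.

```python
# Illustrative Mann-Whitney U test comparing SUS scores between the two
# independent samples (Study 1 vs. Study 2). Scores below are placeholders.
from scipy.stats import mannwhitneyu

sus_study1 = [0, 1, 3, 3, 4, 5, 2, 3, 6, 1, 0, 4]  # hypothetical
sus_study2 = [0, 0, 1, 2, 1, 3, 0, 1, 4, 2, 1, 0]  # hypothetical

u_stat, p = mannwhitneyu(sus_study1, sus_study2, alternative='two-sided')
print(round(p, 3))
```

With the actual 44 and 40 self-reported scores, this test yielded p > 0.05, i.e., no significant difference in presence between the studies.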
DISCUSSION
Our results reveal a strong paradox concerning PSI in VEs in which the user has been scaled either up or down. However, this fits the definition of PSI: the plausibility illusion is more dependent on the expectations of the subjects than on objective reality (Slater et al. 1994; Skarbez et al. 2017a). We believe that this paradox has implications for VR and telepresence applications.
The proportion of the subjects who chose movie physics in main question 1 was almost identical in Study 1 and Study 2. Close to a three-quarters majority (73% in Study 1 and 70% in Study 2) chose movie physics as the realistic representation. As for main question 2, the responses were quite different. In Study 1, 91% of subjects considered movie physics as matching their expectations more, whereas in Study 2 only 63% of the subjects considered the same. It appears that realistically approximated physical phenomena at a small scale were surprising for almost all subjects. However, many subjects considered true physics to better match their expectations at a large scale, even if they actually regarded movie physics as the realistic one.
The purpose of open-ended questions regarding the reason why subjects rated one of the physics conditions being more realistic (O1) or matching their expectations better (O2), was first to confirm that the subjects gave their responses according to object motions and not other plausibility related factors, and second to give additional insights, for example regarding different responses to O1 and O2.
In Study 1, according to O1 and O2, almost all of the subjects considered their perception of realism to be related to the physics behavior of the tabs. In addition, a small number of subjects gave responses motivated by general interaction, including learning how to use the controllers correctly. A few secondary reasons or remarks referred to a scene object or other visual details. According to the responses to O2, most of the subjects preferring true physics as the realistic one stated that during the experiment it was difficult to understand why the physics functioned the way it did: the behavior of the tabs was still surprising even if they considered it realistic.
As for Study 2, the reasons given by the subjects were also most often related to the behavior of the logs. Some exceptions included interaction (learning to use controllers or other interaction related issues) and novelty (for example, the experience being more overwhelming in the first part of the experiment). In Study 2, no visual aspects came up in the open-ended responses.
Whereas in Study 1 only one subject responded with movie physics in O1 and true physics in O2, this was the case for five subjects in Study 2. The most popular reason in these cases was the ability to throw the logs farther (3 responses). The other reason was that the slower motions somehow seemed more natural, even if unrealistic, as a giant (2 responses). Another difference from Study 1 was that in Study 2, subjects choosing true physics in O1 usually gave the same response to O2; the behavior of the logs at large scale was not surprising to the same extent as the behavior of the tabs at small scale.
We used Likert-scale rating questionnaires to gather additional insight into our findings. The questions focused on various dynamic properties of the objects so that we could more specifically pinpoint the effects of physics simulations on perceived realism. These responses also indicated preferences towards movie physics, with significant differences regarding the perceived realism of the object behavior (with the exceptions of question L2, speed when thrown, in Study 1 and L4, bounciness, in Study 2). The Likert data further confirms that physically accurate representations of physics at abnormal scales are not inherently intuitive for VR users.
According to our results, accurate accelerations and falling speeds of objects were perceived as unrealistic. The distance that the subjects were able to throw the objects was seen mostly as too short in Study 1 and too long in Study 2. However, there were also responses in both studies that considered movie physics to be too extreme.
In Study 1, responses regarding the bounciness of the tabs indicated that subjects expected the tabs to behave as if they were enlarged 10-fold. In Study 2, however, the reactions to bounciness were much more mixed; even though the median and mode responses preferred movie physics, the responses were too scattered to cross the threshold of significance at p = 0.05. We believe the main reason for this difference is the scale: in Study 1, the tabs were barely bouncing at all in the true physics condition, whereas in Study 2 the logs were bouncing in both conditions.
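The comparisons above rest on paired, ordinal Likert responses. One way to test such data without assuming an interval scale is an exact paired sign test; the sketch below uses only the standard library, and all response values are made up for illustration (this is not a reproduction of the tests actually run in the studies).

```python
import math

def sign_test(diffs):
    """Exact two-sided sign test on paired differences (zeros dropped).

    Suitable for ordinal Likert data, where parametric tests are dubious:
    under H0, positive and negative differences are equally likely.
    """
    nonzero = [d for d in diffs if d != 0]
    n = len(nonzero)
    k = sum(d > 0 for d in nonzero)
    # Two-sided tail probability under Binomial(n, 0.5), capped at 1.
    tail = sum(math.comb(n, i) for i in range(min(k, n - k) + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical per-subject differences on one realism item:
# (movie-physics rating) - (true-physics rating); positive values mean
# movie physics was rated more realistic. Illustrative data only.
diffs = [1, 1, 2, 1, 1, 3, 1, 2, 1, 1, -1, 0]
p = sign_test(diffs)  # 10 of 11 non-zero differences favor movie physics
```

With 10 of 11 non-zero differences in one direction, the exact p-value falls well below 0.05, whereas a more evenly split pattern (as with the bounciness item in Study 2) would not reach significance.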
In Study 2, we inquired about the extent to which the subjects felt like a giant in a normal-sized city instead of a regular-sized person in a miniature city. We found a significant difference between the conditions, indicating the subjects in the true physics condition felt larger. This may mean that even if the subjects did not generally believe the true physics condition to be realistic, it succeeded better in providing the illusion of being large.
We inspected the effects of various aspects of the subjects' background on their responses to O1. It could be, for example, that subjects with knowledge of physics might prefer the true physics condition. However, we found no such effects in either of our subject groups. In addition, we did not find the self-reported level of presence (Slater et al. 1994), either as SUS scores or by dividing subjects into groups of high and low presence, to affect the response to O1 in either study. In Study 2, we found a significant effect for the variable L6 (feeling of own size, movie physics). This suggests that the extent to which the subjects experienced the illusion of being a giant in the movie physics condition had at least some effect on their perception of physics. However, the overall performance of the classifier was not very good, and the subjects' responses were widely spread. For this reason, we believe further investigation is necessary before we can claim whether or not the extent of the small-scale or large-scale illusion affects the perception of physics.
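An analysis of this kind can be framed as a classifier predicting the binary O1 choice from encoded background features. The minimal sketch below implements stdlib-only logistic regression on deliberately separable toy data; the feature names, values, and model choice are illustrative assumptions, not the classifier actually used in the studies.

```python
import math

def train_logreg(X, y, lr=0.5, epochs=2000):
    """Logistic regression fit by stochastic gradient descent on log-loss."""
    w = [0.0] * (len(X[0]) + 1)  # w[0] is the bias term
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1 / (1 + math.exp(-z))
            g = p - yi  # derivative of log-loss with respect to z
            w[0] -= lr * g
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * g * xj
    return w

def predict(w, xi):
    return 1 if w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)) > 0 else 0

# Hypothetical features per subject: [physics knowledge, gaming experience],
# label 1 = chose movie physics in O1. Toy data, deliberately separable
# on the first feature only.
X = [[0.9, 0.2], [0.8, 0.7], [0.7, 0.1], [0.9, 0.9],
     [0.1, 0.3], [0.2, 0.8], [0.1, 0.1], [0.3, 0.6]]
y = [1, 1, 1, 1, 0, 0, 0, 0]

w = train_logreg(X, y)
accuracy = sum(predict(w, xi) == yi for xi, yi in zip(X, y)) / len(y)
```

On real background data with little predictive signal, such a classifier would hover near chance accuracy, which is consistent with the weak classifier performance reported above.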
In Study 2, we also studied pseudohaptic sensations experienced by the subjects and found that the overall extent of the sensations was very low. A handful of subjects reported strong pseudohaptic sensations. There was, as expected, a perceived difference regarding the overall weight of the logs between conditions. We found that the level of presence experienced by subjects in Study 2 was somewhat lower. However, as of now, we do not have evidence to claim whether the illusions of being small or large affected self-reported presence or whether, for example, the properties of the VEs used in the studies would explain these differences.
Implications
Slater (2009) discussed the role of conformity to expectations, prior beliefs, and knowledge in causing and maintaining PSI. Skarbez et al. (2020) conceptualized the former as coherence, the reasonable behavior of the VE, which, according to Skarbez, is related to PSI in the same way that immersion is related to PI.
Looking at the results against this framework, we can see that in Study 1, movie physics was clearly the reasonable behavior for subjects. Even though 27% of subjects considered true physics real, only a handful of subjects considered it to match their expectations. Therefore, according to the results of Study 1, realistic object behavior at small scale clearly violated coherence.
According to the results of Study 2, it is somewhat unclear which behavior is the coherent one, even if the results point somewhat towards movie physics. Although a significant majority did consider movie physics the realistic behavior, the subjects' expectations were almost evenly split. Because of this mixed response regarding expectations, it is not straightforward to say which model would yield good coherence in VEs.
If one were to design a multiscale VR application aiming to maximize coherence instead of realism, it would make sense simply to match the physics with the scale of the user, at least in small-scale applications. If the user is allowed to change scale, the physics behavior would then follow the convention of Hollywood movies such as Honey, I Shrunk the Kids, where object motions constantly change in speed from scene to scene according to perspective changes. At large scales, however, this type of behavior might lead to bad coherence. In addition, in mCVEs this approach would break down, since the physics model could not feasibly accommodate all users' perspectives simultaneously during multi-user interaction.
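One way such scale-matched ("movie") physics could be implemented is by manipulating gravity, the mechanism used in Study 2. The sketch below assumes a simple square-root law for free-fall times; the scaling function and numbers are illustrative, not the studies' exact implementation.

```python
import math

G = 9.81  # m/s^2, gravity at human scale

def movie_gravity(user_scale: float) -> float:
    """Gravity that makes falls look familiar from a resized user's view.

    Free-fall time over height h is sqrt(2h/g). A user scaled by factor s
    perceives real heights divided by s, so multiplying g by s makes a real
    fall last as long as the perceived-height fall would at human scale.
    (An illustrative model, not the studies' implementation.)
    """
    return G * user_scale

def fall_time(height_m: float, g: float) -> float:
    return math.sqrt(2.0 * height_m / g)

# A 1:10-scale user watches a soda-can tab drop 0.02 m; they perceive a
# 0.2 m drop. With scaled gravity, the timings match:
t_scaled = fall_time(0.02, movie_gravity(0.1))  # real fall under g/10
t_familiar = fall_time(0.2, G)                  # perceived fall, normal g
```

Under this model a 1:10-scale user gets slowed-down object motion and a 10x giant gets sped-up motion, matching the direction of the preferences reported in both studies.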
If realistic physics are intended, then users' expectations would have to be modified by some type of training so that realistic behavior does not come across as surprising. According to Skarbez et al. (2020), bad coherence in VEs, especially in relation to an unexpectedly behaving environment, can lead to stress and discomfort. In addition, there might be cases where human interaction capabilities are reduced due to unexpected physics; micro- and nano-level robotics operations are an example of this (Sitti 2007). For this reason, we consider interaction at abnormal scales and perceptual training to be important future research directions; even if users expected realistic physics, their performance might still be affected.
Through recent advances in consumer VR hardware as well as sub-microscopic (Plisson and Zotkina 2015) and even atomic (Zheng et al. 2017) level imaging techniques, it is possible that we will witness increasing adoption of scaled-down VR applications in the future. These could include commercial systems such as teleoperated maintenance robots or virtual design solutions at a microscopic scale. However, at this stage, it is unclear whether it would be intuitive for humans to operate at small scales, especially if doing so involves operating in the real world or with realistically simulated physics. As can be seen from our results, the perception of physical phenomena as a scaled-down entity is likely to be unintuitive for most. It was nevertheless interesting to note that half of the subjects experienced a strong PI despite the apparent improbability of the experience of being doll-sized. As the scale of operation decreases, perceived frictions and accelerations increase, which has already been found problematic for humans in robotic micro- and nano-level operations (Sitti 2007). As the scale decreases further, these perceived distortions amplify, and additional phenomena, such as fluid dynamics and static electricity, come into play as well. Relative changes in the environment would also pose additional challenges in the physical domain. For example, a floor that is experienced as smooth at a regular scale might become bumpy and full of cracks, grit and dirt might become actual obstacles for navigation, and vibrations from passersby, otherwise indistinguishable, might feel like earthquakes. We also investigated the perception of physics at large scale: Study 2 enlarged the subjects 10-fold while giving them a similar interaction task. Although the subjects perceived movie physics as realistic, similarly to Study 1, their expectations were much more mixed.
We believe this finding might be useful for designing abnormal-scale VEs where PSI is more important than realism, such as games. Realistic physics greatly violated the expectations of users in small-scale interactions, while at large scale, slow, realistic motions sometimes seemed natural, even if ultimately unrealistic.
We argue that our study opens up interesting avenues for future VR research. VR education has already been seen as a potential remedy for some issues of small-scale activities in the field of teleoperation (Millet et al. 2008). Further research on the effect of perception-related mismatches on interaction and performance in various applications could yield interesting findings. Also, as of now, we do not know whether the body-scaling effect affects the perception of physics in the same way it affects the perception of sizes and distances (e.g., van der Hoort et al. 2011). In both studies, we used virtual hands to provide a body-based size cue, but we did not investigate the effect the absence of these cues would have had. Langbehn et al. (2016a) found that groups of human avatars can override the dominant scale otherwise dictated by body-based size cues. Theoretically, this could have implications for the perception of physics as well.
Challenges and Limitations
The outliers among the responses were L2 in Study 1 and L4 in Study 2. Inspecting the distribution of responses to question L2 in Study 1, we see that the true physics condition contains responses that are rather uniformly distributed in comparison to the movie physics condition; the STD in the true physics condition is twice as large as in the movie physics condition. Whereas in responses to L2 the movie physics condition was considered realistic (4, neither too fast nor too slow) by a vast majority, the true physics condition received an almost equal number of responses between 2 (too slow) and 6 (too fast). We suspect that the uncharacteristic distribution of the responses might be due to poor wording of L2 (The speed of pull tabs when thrown). Although we tried to ask how the subjects perceived the time of flight of the tabs, it could be that subjects had other interpretations of the question, resulting in inconsistent responses. Similar inconsistency was found in responses from both Finnish- and English-speaking subjects, so we do not think the confusion can be attributed to the specific wording in either language. Rather, we speculate that some subjects thought we meant the speed of the tab in leaving their hand upon throwing (resulting in a short flight distance), and others thought we meant the speed at which the tab moved through the air. Alternate interpretations could also have resulted from misinterpreting the action of the tabs as having been caused by the subjects' own inability to throw the tabs correctly. We changed the wording of this question in Study 2 to "Time of Flight".
In Study 2, L4 (the bounciness of the logs) received mixed responses. Although the mean and mode responses preferred movie physics similarly to the other questions, the responses were overall more scattered. We consider bounciness the most unrealistic aspect of Study 2, since we did not simulate splintering or other breakage of the logs due to impact. We considered these aspects confounding variables in a study that mainly focused on the perception of rigid-body dynamics.
In Study 1, according to both verbal comments during the experiment and responses to questions O1 and O2, some of the subjects starting with the true physics condition thought that their difficulty in throwing the tabs far was caused by their own inability to use the controllers rather than by any aspect of the VE. Although some subjects realized during the subsequent movie physics condition that the behavior of the tabs was an experimental manipulation and not due to their own failure, three subjects still stated that their main reason for preferring the movie physics condition was that they had learned how to use the controllers. For subjects experiencing movie physics first, there did not seem to be any ambiguity that the difference in the behavior of the tabs was related to the VE. Although a training session to help subjects learn the controllers might have been helpful, we believe that it could have introduced unwanted priming regarding the expected behavior of physics. We received these types of responses far less often in Study 2, which might be due to the opposite behavior of the objects when thrown.
Another obvious limitation is the fact that it is currently difficult to realistically simulate object mass in VR since subjects can feel only the weight of the controllers. Although we chose the soda can pull tabs for the task in Study 1 partly because of their light mass, there was some speculation among responses to O1-O2 on whether the weight of the object and/or simulated arm strength affected object manipulation. There were responses in Study 2 as well that considered throwing distance to be affected by the arm strength of the giant. However, simulating muscle strength in itself was not in the scope of either study. Human-scale arm motions were simply scaled either down or up by a factor of ten, which resulted in either very small or very large velocity imparted on the thrown object.
During a few experimental sessions, there were occurrences that could have broken presence or caused differences in the experiences of the participants. Two subjects in Study 1 became very active in the VE and accidentally bumped into furniture in the research space. In Study 2, one subject accidentally stepped on the HMD cord during the session. In Study 1, a physics engine bug caused a single tab to land in an unrealistic orientation during the true physics condition for two subjects, and for one subject trying to throw a tab with two hands, a bug caused the tab to catapult unrealistically far. We are not sure to what extent the subjects noticed these bugs or whether they affected their responses. In Study 2, we did not observe physics bugs as obvious as those seen in Study 1. This might be partly because the instructions for object manipulation were stricter and the scale of the objects was less prone to errors in the physics engine. Even so, we cannot guarantee that the bounciness of the logs was realistic at all times.
Additionally, although we tried to keep the visual appearances of the two conditions as similar as possible in Study 1, the different VE scales used in the UE to simulate the two types of physics led to very subtle differences in their brightness. This deficiency was fixed in Study 2 by simulating human-scale and giant-scale physics through manipulating gravity instead of scaling the scene objects.
Finally, there were subjects who were not always paying close attention to the flying or falling characteristics of the tabs, or did not wait until the instructions were read in their entirety. This limitation was somewhat alleviated in Study 2 due to the stricter instructions given to the subjects.
CONCLUSION AND FUTURE WORK
In this paper, we studied a phenomenon regarding the plausibility of physical interactions for scaled-down and scaled-up users in normal-sized VEs: when users interact with physically simulated objects in a VE in which they are scaled 10-fold smaller or larger than a regular human scale, there is a mismatch between expected physics and the accurate approximation of physics at that scale. In both studies, a similarly sized and significant majority of subjects judged rigid-body dynamics close to human scale as realistic, instead of what would be the correct approximation of realism at the resized scale the subjects were at. Almost all subjects at small scale considered rigid-body dynamics at that scale surprising, while the expectations of large-scale subjects were more mixed. We argue that these findings open many interesting avenues for future research regarding mCVEs, scaled-down user VR applications in general, and telepresence and teleoperation taking place at a modified scale. In addition, our findings can prove useful to designers of VR applications utilizing abnormal scales who wish to maintain PSI, or who are seeking a trade-off between PSI and realism.
In the future, we intend to study the body-scaling effect and its influence on interactions with physically simulated objects. In addition, we will investigate interaction, performance, and perceptual training at abnormal scales. We will consider scales smaller than one order of magnitude, since we expect them to produce even greater plausibility mismatches in physical interactions. We will also seek to confirm the existence of our finding outside VR, for example using robotic teleoperation or telepresence at small scale. Moreover,
Investigating the 'Short Pain' and 'Long Gain' Effect of Environmental Regulation on Financial Performance: Evidence from Chinese Listed Polluting Firms
Environmental regulation affects the financial performance of firms, but the findings on this effect are mixed. This paper quantitatively analyzes the current and lagged effects of environmental regulation (ER) on financial performance (FP), based on data from 361 highly polluting A-shares firms and 936 mildly polluting A-shares firms in China. It is shown that ER exerts a negative effect on the FP of polluting firms in the short term and a positive effect in the long term, which unifies the 'Porter Hypothesis' (PH) and the 'Costly Regulation Hypothesis' (CRH) on the temporal dimension. Mechanism analysis reveals that ER negatively affects the current FP of highly polluting firms by increasing their green innovation investment. In addition, ER has a significant positive lagged effect on the FP of polluting firms by improving their operating efficiency, rather than by reducing production costs. Furthermore, we find that ER significantly improves the FP of highly polluting firms, especially state-owned firms, as opposed to mildly polluting firms and privately-owned firms. These conclusions imply that governments should provide subsidies to green firms or firms going green, and that firms should pay more attention to green innovation investment and green development.
Introduction
Environmental regulation (ER) has contributed a lot to environmental protection [1], but has encountered obstacles from major polluters [2] because of the direct and opportunity costs it incurs [3][4][5]. Thus, it is an important task for governments to explore the effect of ER on the financial performance (FP) of highly polluting firms and then guide these firms to go green. ER has become a significant topic for many countries all over the world, especially for China, which has created a special green sector in the green-development era. This research is ultimately aimed at boosting the greenness of China and of other countries implementing ER for environmental protection.
There is a massive amount of research on the effect of ER on the FP of firms, but the findings are mixed. The Porter Hypothesis (PH) and its proponents argue that ER can increase the research and development (R&D) investment and innovation level of firms [6], and then improve their productivity [7][8][9][10] and FP [11][12][13][14][15]. However, the Costly Regulation Hypothesis (CRH) and its proponents argue that ERs are trying to reduce the negative external affect caused by business, which increases the costs of compliance for the firms and thus reduces their productivity and FP [16][17][18][19][20].
The differences between the two hypotheses stem from the different lengths of time studied. While ER increases the costs of environmental compliance and reduces the FP of major polluters in the short term, it is conducive to improving the FP of firms in the long term [21][22][23][24]. Therefore, this paper sheds light on the short-term and long-term impacts of ER on FP based on polluting A-shares firms, which are firms listed on either the Shanghai or Shenzhen Stock Exchange whose shares are traded in Renminbi by domestic investors in China. The results reveal that ER significantly reduces the FP of polluting firms in the current period, but improves FP in the two to three periods that follow, which unifies PH and CRH on the temporal dimension. (In China, the fiscal year is the same as the calendar year, running from 1 January to 31 December. We adopt 'current period' to refer to both and to avoid confusion.) The conclusions still hold when the model and core variables are changed and when an exogenous instrumental variable is introduced for IV (Instrumental Variables)-2SLS (Two-Stage Least Squares) analysis.
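A current-plus-lagged specification of this kind can be illustrated with a small OLS exercise on synthetic panel data built to exhibit the 'short pain, long gain' pattern. The sketch below uses only the standard library; the coefficients (-0.5 current, +0.3 two-period lag) and the data-generating process are arbitrary illustrative choices, not the paper's model or estimates.

```python
import random

random.seed(0)

# Synthetic firm-year panel: FP_t = 1.0 - 0.5*ER_t + 0.3*ER_{t-2} + noise.
T, N = 10, 200
rows = []  # each row: (1, ER_t, ER_{t-2}, FP_t)
for firm in range(N):
    er = [random.random() for _ in range(T)]
    for t in range(2, T):
        fp = 1.0 - 0.5 * er[t] + 0.3 * er[t - 2] + random.gauss(0, 0.05)
        rows.append((1.0, er[t], er[t - 2], fp))

def ols(rows):
    """OLS via the normal equations (X'X)b = X'y, Gaussian elimination."""
    k = len(rows[0]) - 1  # last element of each row is y
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * r[-1] for r in rows) for i in range(k)]
    for c in range(k):  # forward elimination with partial pivoting
        p = max(range(c, k), key=lambda i: abs(A[i][c]))
        A[c], A[p], b[c], b[p] = A[p], A[c], b[p], b[c]
        for i in range(c + 1, k):
            f = A[i][c] / A[c][c]
            A[i] = [a - f * ac for a, ac in zip(A[i], A[c])]
            b[i] -= f * b[c]
    x = [0.0] * k
    for c in reversed(range(k)):  # back substitution
        x[c] = (b[c] - sum(A[c][j] * x[j] for j in range(c + 1, k))) / A[c][c]
    return x

beta = ols(rows)  # [intercept, current ER effect, two-period lagged effect]
```

The fitted current-period coefficient comes out negative and the lagged coefficient positive, recovering the planted pattern; in the paper, of course, the estimation additionally involves controls, fixed effects, and the IV-2SLS robustness checks.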
In addition, we make mechanism analysis by exploring the current and lagged effect of ER on the firms' operating efficiency, production costs, and innovation investment. The results verify that ER exerts negative effects on the current FP of highly polluting firms by improving their green innovation investment. The positive impact of ER on FP in the two to three periods that follow is mainly realized by improving operational efficiency of pollutants.
Furthermore, given that the impact of ER varies with firms' pollution levels and ownership types [14], we explore the heterogeneity of the ER effects on FP of firms in highly polluting industries versus mildly polluting industries and FP of state-owned firms versus private firms. The results show that ER mainly affects state-owned and highly polluting firms, while the effects are nonsignificant for private and mildly polluting firms.
This paper contributes to the existing research in the following three ways. First, this paper explores the different impacts of ER on the FP of polluting firms on the temporal dimension. We empirically explore the negative and current impacts and positive but lagging impacts of ER on FP, which unify CRH and PH. Second, this paper contributes to the literature by investigating the mechanism and heterogeneity of the effects of ER on FP. We found ER significantly improves the green innovation investment of highly polluting firms while simultaneously reducing their FP. However, FP, in the following two to three periods, can be promoted by improvements in firms' operational efficiency. Furthermore, we found the impacts of ER on FP of mildly polluting and privately-owned firms are nonsignificant. Third, the results show the 'short pain' and 'long gain' effect of ER on the FP of highly polluting firms, which provides implications to governments and the firms, and helps to understand ER dialectically.
The rest of the paper proceeds as follows. Section 2 surveys the literature and proposes hypotheses. Section 3 introduces empirical strategies and data. Section 4 is the empirical results and analysis. Sections 5 and 6 further discuss the mechanism and heterogeneity of ER effect on FP. Section 7 concludes.
Literature and Hypotheses
During the 20th century, environmental pollution gradually became a global concern, marked by events such as the photochemical pollution incidents in the United States in the 1950s, the smog incident in London, and Minamata disease in Japan. Since the 1970s, many countries have implemented environmental regulations to alleviate environmental pollution, and the related research has become increasingly rich.
The Effect of ER on FP of Firms
There are two major hypotheses regarding the effect of ER on the FP of firms: Porter Hypothesis (PH) and Costly Regulation Hypothesis (CRH). PH holds that ER can improve FP. PH is named after M. E. Porter [6], who was the first to propose that a properly designed ER could stimulate innovation, thus making products more competitive. Porter and van der Linde [7] further examined the environment-competitiveness relationship through theoretical and case study approaches and argued that it did not have to involve a trade-off between regulation and competitiveness. Jaffe and Palmer [17] divided PH into the weak version, the strong version, and the narrow version and argued that only under the strong version can regulation induce innovation with benefits exceeding the compliance costs. Following these studies from the 1990s, Berman and Bui [8], Ambec and Barla [9], Rassier and Earnhart [25], and Jefferson et al. [10] verified PH by constructing theoretical and/or empirical models.
CRH, on the other hand, argues that ER increases the costs of compliance and thus negatively affects FP. Palmer et al. [3] argue that PH is premised on two hard-to-fulfill presumptions: first, the private sector systematically overlooks opportunities for profitable innovation; second, there is some regulatory authority that can make the private sector realize these opportunities for profit from innovation. Jaffe et al. [16], Wagner et al. [18], and Lanoie et al. [20] rejected the existence of PH based on data from the U.S., Europe, and OECD, respectively. The literature studying developing countries such as China has also verified the validity of CRH by exploring the negative effects of ER on innovation investment [26], total factor productivity (TFP) [14], and financial performance [27].
Some scholars have offered explanations for the different impacts of ER on the innovation or productivity of firms. One strand in the literature explores the heterogeneous effects of different types of ER. For example, Zhao et al. [28] argued that administrative ER had a more significant effect on the technological innovation of firms, while market-based ER was more conducive to green transformation. Xie et al. [5] reached the same conclusion by exploring the effects of command-and-control and market-based ER on productivity of firms. Another strand in the literature explains the difference from the firms' perspective. For example, Rassier and Earnhart [11] distinguished the ER effects on actual profitability and expected profitability of firms. In addition, some studies attributed the heterogeneity to the different types of firms, such as the highly polluting firms versus mildly polluting firms or state-owned firms versus private firms [14,29].
However, the above-mentioned studies barely considered the lagged effects of ER. Rassier and Earnhart [25] confirmed the short-run and long-run positive effect of Clean Water Act regulation on FP based on quarterly data. However, there may be heterogeneity in the effects of ER on FP of firms in different years. ER increases the compliance costs and reduces total factor productivity (TFP) as well as the book value of firms in the current period [23], but under the ER constraint, profit-maximizing firms are likely to adjust their production and operation strategies to reduce their costs, which will have a positive effect on the production efficiency, operating performance, and value of firms in future periods [21][22][23][24]. Based on the above analysis, Hypotheses 1a and 1b are proposed.
Hypothesis 1a. The effect of ER on the firm's current FP conforms with CRH, i.e., ER significantly reduces the firm's current FP.
Hypothesis 1b. The effect of ER on the firm's future FP conforms with PH, i.e., ER significantly increases the firm's future FP.
Mechanism Analysis
CRH can be used to summarize the effect of ER on current FP. Based on the findings of Rassier and Earnhart [11] and Tang et al. [14], ER increases the current costs of firms, including fines, green innovation investment, etc., and thus reduces current FP, which is only partially offset by incentives such as green loan schemes and tax breaks for companies that go green. In addition, increased ER may reduce financial institutions' expectations of polluters' future profitability, thus increasing their loan costs and reducing their FP. Furthermore, intensified ER may force polluters to shut down some polluting production lines and thus reduce their current operating efficiency. Therefore, the following Hypotheses 2a and 2b can be proposed.
Hypothesis 2a. ER increases the current production costs of polluting firms and decreases their current FP.
Hypothesis 2b. ER increases the current green inputs of polluting firms and decreases their current FP.
The positive impact of ER on the future FP of firms has been discussed from the perspective of reducing production costs and increasing sales profits. For instance, Porter and van der Linde [7] argued that ER can lead to technological innovations that improve production efficiency, thus reducing production costs while improving product quality, which will ultimately improve firms' profitability and competitiveness. Rassier and Earnhart [11] outlined three pathways through which ER affects firms' profitability: improving innovation efficiency and revenue, reducing costs, and improving operational efficiency. Hu et al. [30] and Xing et al. [31] confirmed that ER improved the FP of firms by increasing their green innovation capacity. Based on the above analysis, we propose the following three hypotheses.
Hypothesis 3a. ER improves the future FP of polluting firms by reducing their production costs.
Hypothesis 3b. ER improves the future FP of polluting firms by increasing their operational efficiency.
Hypothesis 3c. ER improves the future FP of polluting firms by improving their investment in green innovation.
Heterogeneity Analysis
Studies have been conducted to verify the heterogeneity of the effect of ER on the FP of different types of firms. For example, Hering and Poncet [29] and Tang et al. [14] found that the negative effects of ER on TFP and innovation are exacerbated for enterprises in more heavily polluting industries, those of smaller size, and those owned by foreign companies. This paper explores the heterogeneity of the effect of ER on FP from two perspectives: whether the firm is in a highly polluting industry and the ownership of the firm.
In 2008, China's Ministry of Environmental Protection (MEP) issued the Directory of Categorized Environment Inspection of Public Companies (Letter of the MEP General Office [2008] No. 373), which classified 14 industries as highly polluting industries. The Directory provides a standard and basis for regulation. Considering that the targets of environmental regulation are mainly heavy polluters [32], we therefore propose Hypothesis 4a.

Hypothesis 4a. ER has a more pronounced effect on the FP of firms in highly polluting industries than on those in mildly polluting industries.
State-owned firms are generally more likely to have access to resources and advice from officials, and they bear more social responsibilities than private firms. In addition, state-owned firms are more able than private firms to take risks on projects such as green innovation, because they can more easily receive support from the government when they lose money. Thus, the effect of ER on state-owned firms in highly polluting industries may be more significant. Therefore, Hypothesis 4b is proposed. The theoretical framework is shown in Figure 1.

Hypothesis 4b. The positive effect of ER on the future FP of state-owned firms in highly polluting industries is significantly greater than that on private firms.
To illustrate the above hypotheses, we answer the following three questions. Firstly, how does ER impact the FP of highly polluting firms in the short run and in the long run? Secondly, through which channel does ER impact the FP of highly polluting firms: green innovation, operational improvement, or production improvement? Thirdly, how do the impacts vary between highly polluting and mildly polluting firms, across firms with different ownership, and between actual profitability and expected profitability? The answers unify PH and CRH on the temporal dimension and provide targeted recommendations for highly polluting firms and governments.
Independent Variables
Existing research measures ER in the following three ways: first, input-based indicators, such as pollution abatement costs, pollution treatment investment, number of regulatory inspections, and government environmental protection expenditures [32,33], etc.; second, performance-based indicators, such as sewage tax and fees, emissions, or disposal rates of major pollutants [34,35], etc.; third, the environmental policies released [36,37]. In addition, some studies combine multiple indicators to construct a comprehensive system to evaluate ER.
In this paper, we mainly refer to the second way, since we are not concerned with how or by whom the ER is implemented, but instead focus on the efficacy of ER and measure ER in a performance-based way, which is also suggested in the study of Zhao and Sun [38] based on a comparison of the above-mentioned measurements of ER in terms of reasonability and data availability. Thus, following the method proposed by Zhao and Sun [38] and Shen et al. [39], we construct an indicator to describe the intensity of ER based on the removal rates of pollutants, i.e., SO2 and industrial smoke (dust), at the city level. The ER in this paper only ranges from 2011 to 2016 due to data availability, which does not affect the results and conclusions. The calculation process is divided into the following three steps.
Step 1: Normalization of variables
The normalization process can be described as follows:

Base_ij = Removal_ij / Generate_ij (1)

Base^S_ij = (Base_ij − min Base_ij) / (max Base_ij − min Base_ij) (2)

where Base_ij is the removal rate of pollutant j for city i; Removal_ij, Generate_ij, and Emission_ij are the removal amount, generated amount, and emission amount of pollutant j for city i; max Base_ij and min Base_ij are the maximum and minimum values of the removal rate of pollutant j across all cities, respectively; and Base^S_ij is the normalized removal rate of pollutant j, which includes the SO2 and industrial smoke (dust).
Step 2: Adjustment factor

Considering that the emission levels of different pollutants in the same city are different and that the emission levels of the same pollutant in different cities are also different, the adjustment factor of pollutant j in city i is calculated with Equation (3):

A_ij = (Emission_ij / Emission_Cj) / (GDP_i / GDP_C) (3)
where A_ij is the adjustment factor of city i corresponding to pollutant j, Emission_ij denotes the emission of pollutant j in city i, Emission_Cj is the total emission of pollutant j across all cities, and GDP_i and GDP_C denote the GDP of city i and the nation, respectively.
Step 3: ER

We calculated the ER of city i based on the normalized removal rates of the two pollutants and the adjustment factors with Equation (4):

ER_i = (1/2) × Σ_j A_ij × Base^S_ij (4)

where ER_i is the ER for city i, and the number 2 indicates that two pollutants are considered in our calculation process. The data on the removal amount, generated amount, and emission amount of SO2 and industrial smoke (dust) are obtained from the China City Statistical Yearbook.
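The three steps above can be sketched end-to-end. The following is a minimal illustration with hypothetical city-level numbers (not the actual Statistical Yearbook data), assuming the removal rate is the removal amount over the generated amount and the adjustment factor is each city's emission share over its GDP share:

```python
import numpy as np
import pandas as pd

# Hypothetical city-by-pollutant data (illustrative numbers only).
df = pd.DataFrame({
    "city":      ["A", "A", "B", "B", "C", "C"],
    "pollutant": ["SO2", "dust"] * 3,
    "removal":   [80.0, 60.0, 50.0, 90.0, 20.0, 30.0],
    "generate":  [100.0, 100.0, 100.0, 100.0, 100.0, 100.0],
    "emission":  [20.0, 40.0, 50.0, 10.0, 80.0, 70.0],
})
gdp = pd.Series({"A": 300.0, "B": 200.0, "C": 100.0})  # city GDP; national = sum

# Step 1: removal rate, then min-max normalization across cities per pollutant.
df["base"] = df["removal"] / df["generate"]
grp = df.groupby("pollutant")["base"]
df["base_s"] = (df["base"] - grp.transform("min")) / (
    grp.transform("max") - grp.transform("min"))

# Step 2: adjustment factor = (city emission share) / (city GDP share).
em_total = df.groupby("pollutant")["emission"].transform("sum")
df["adj"] = (df["emission"] / em_total) / (df["city"].map(gdp) / gdp.sum())

# Step 3: ER_i averages adj * normalized removal rate over the two pollutants.
er = (df["adj"] * df["base_s"]).groupby(df["city"]).sum() / 2
```

With these toy numbers, city C removes the smallest share of both pollutants, so its normalized removal rates, and hence its ER index, are zero.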
Dependent and Other Variables
Dependent variables: Return on Assets (ROA) and Return on Equity (ROE) are applied to measure FP of firms.
Mediating variables: The mediating variables are the firm's operating costs rate, total asset turnover rate, and the number of green patent applications.
Control variables: Referring to Rassier and Earnhart [11] and Tang et al. [14], we selected firm-level variables, including firm size, firm age, capital density, revenue and profit growth rate, asset-liability ratio, earned interest multiple ratio, and the proportion of researchers as control variables.
In addition, considering that factors particular to each city, such as pollution level [14,33], economic development, and fiscal decentralization degree, may affect both ER and FP, we added indicators of city pollution, per capita GDP, and fiscal decentralization degree to alleviate the endogeneity problem. (Referring to Liu and Lin [40], we constructed a total pollutant emission index based on SO2 emission, industrial smoke (dust) emission, and industrial wastewater discharge. The specific calculation process and results are kept for future reference.) Definitions and calculations of variables are shown in Table 1. Firm-level indicators of FP and control variables are obtained from the CSMAR database. The data on green innovation are obtained from the State Intellectual Property Office of China. We match the city-level data and firm-level data using the registered addresses of firms.
Identification of Highly Polluting Firms
According to the Directory of Categorized Environment Inspection of Public Companies (Letter of the MEP General Office (2008) No. 373) and the 2017 Industrial Classification for National Economic Activities (GB/T4754-2017), ER mainly targets highly polluting industries [32], such as the manufacturing and smelting industries. We define 143 four-digit SIC industries, including thermal power generation, cement manufacturing, crude oil processing, and petroleum products manufacturing, as highly polluting industries.
A total of 891 companies listed on the Shanghai Stock Exchange and/or Shenzhen Stock Exchange (excluding ST and *ST companies, classifications indicating abnormal performance or status) in the above industries were selected. After excluding firms with missing financial data in any year from 2009 to 2016, we obtain balanced panel data on 361 highly polluting firms over our research period, with a total of 2888 observations.
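Extracting a balanced panel of this kind amounts to keeping only firms observed with complete data in every year of the sample window. A sketch with a hypothetical toy panel (firm and value names are illustrative):

```python
import pandas as pd

# Hypothetical firm-year panel; firm f3 is missing 2010.
panel = pd.DataFrame({
    "firm": ["f1"] * 3 + ["f2"] * 3 + ["f3"] * 2,
    "year": [2009, 2010, 2011] * 2 + [2009, 2011],
    "roa":  [0.05, 0.06, 0.04, 0.02, 0.03, 0.01, 0.07, 0.08],
})

years = range(2009, 2012)
# Keep only firms with a non-missing observation in every year of the window.
obs_per_firm = panel.dropna(subset=["roa"]).groupby("firm")["year"].nunique()
keep = obs_per_firm[obs_per_firm == len(list(years))].index
balanced = panel[panel["firm"].isin(keep)]
```

Firm f3 is dropped because it lacks a 2010 observation, leaving a balanced panel of two firms over three years.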
Descriptive Statistics
To avoid problems with outliers, we winsorize these variables at the 1st and 99th percentiles. The descriptive statistics are shown in Table 2.
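Winsorizing at the 1st and 99th percentiles replaces values beyond those quantiles with the quantile values themselves, so extreme observations are capped rather than dropped. A minimal pandas sketch on illustrative data (not the paper's):

```python
import numpy as np
import pandas as pd

def winsorize(s: pd.Series, lower: float = 0.01, upper: float = 0.99) -> pd.Series:
    # Clip the series at its own lower/upper quantiles.
    lo, hi = s.quantile(lower), s.quantile(upper)
    return s.clip(lower=lo, upper=hi)

rng = np.random.default_rng(0)
x = pd.Series(rng.normal(size=1000))
x.iloc[0] = 100.0          # plant one extreme outlier
xw = winsorize(x)          # the outlier is capped at the 99th percentile
```

The clipped series keeps all 1000 observations; only the tails are pulled in to the 1st/99th percentile values.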
Empirical Models
We first explore the effect of ER on FP in the current period and lagged period using Equation (5).
Profit_ijt = α + β ER_i,t−n + γ X_ij,t−1 + v_i + θ_t + ε_ijt (5)

where Profit_ijt denotes the FP of firm j in region i in year t; specifically, it includes ROA and ROE. ER_i,t−n denotes the ER in region i in year t − n, where n = 0, 1, 2, 3. We also control for other factors by adding X_ij,t−1, which includes characteristics of the firm in year t − 1 as well as regional characteristics such as pollution level, per capita GDP, and local fiscal strength. v_i and θ_t are the firm fixed effect and year fixed effect, respectively. ε_ijt is the idiosyncratic error term. In addition, in order to explore the mechanism of ER on FP, we construct a mediating effect model to explore the mediating effects of production costs, operating efficiency, and green innovation, as shown in Equations (6) and (7):

Med_ijt = α + β ER_it + γ X_ij,t−1 + v_i + θ_t + ε_ijt (6)

Profit_ijt = α + β ER_it + δ Med_ijt + γ X_ij,t−1 + v_i + θ_t + ε_ijt (7)
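For a balanced panel, the firm and year fixed effects in Equation (5) can be eliminated with the two-way within (demeaning) transformation before estimating β by OLS. A sketch on simulated data (all numbers hypothetical) where the true coefficient is −0.3 and the regressor is deliberately correlated with the firm effect:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n_firms, n_years = 50, 6
i = np.repeat(np.arange(n_firms), n_years)   # firm index
t = np.tile(np.arange(n_years), n_firms)     # year index

fe_firm = rng.normal(size=n_firms)           # v_i
fe_year = rng.normal(size=n_years)           # theta_t
er = rng.normal(size=n_firms * n_years) + 0.5 * fe_firm[i]  # ER correlated with v_i
profit = -0.3 * er + fe_firm[i] + fe_year[t] + 0.01 * rng.normal(size=n_firms * n_years)

df = pd.DataFrame({"firm": i, "year": t, "er": er, "profit": profit})

def within(s: pd.Series) -> pd.Series:
    # Two-way within transformation (exact for balanced panels):
    # s_it - firm mean - year mean + grand mean removes both additive effects.
    return (s - s.groupby(df["firm"]).transform("mean")
              - s.groupby(df["year"]).transform("mean") + s.mean())

y, x = within(df["profit"]), within(df["er"])
beta_hat = (x * y).sum() / (x ** 2).sum()    # within estimator of beta
```

A pooled OLS on the raw data would be contaminated by the firm effects; the within estimator recovers the true −0.3.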
where Med_ijt is the mediating variable, including the firm's operating costs rate, total asset turnover rate, and the number of green patent applications.

We first explore the effect of ER on the FP of highly polluting firms in the current period, and the results are shown in Table 3. With ROE as the dependent variable, the negative effect of ER on FP in the current period is not significant when no control variables are added (columns 1 and 2). In addition, the negative effect stays nonsignificant when control variables are added but firm effects and year effects are not fixed (column 3). However, ER significantly reduces FP in the current period when control variables are added and firm effects (column 4) as well as year effects (column 5) are fixed. The conclusion still holds when the dependent variable is changed to ROA, which verifies Hypothesis 1a, that ER significantly reduces the FP of highly polluting firms in the current period; the results are in line with Liu et al. [27].

This table shows the effect of ER on the FP of highly polluting firms in the current period. In order to avoid the problem of reverse causality between firm-level control variables and FP, all firm-level control variables in this paper are at a one-period lag. The t-values are in parentheses, and *, **, and *** represent significance levels at 10%, 5%, and 1%, respectively.
Negative Effects of ER on Future FP
Next, we investigate whether there is a significant effect of ER on FP in future periods by including ER with a lag of one period, two periods, and three periods, respectively. As shown in Table 4, the effect of ER on FP with a lag of one period (columns 1 and 2) and two periods (columns 3 and 4) is not significant, but the signs of the two coefficients are opposite, while the effect of ER with a lag of three periods on FP is significantly positive (columns 5 and 6). This verifies Hypothesis 1b, that there is a lagged effect of ER on the FP of highly polluting firms, and the results here indicate that the lag is three periods, which is consistent with Chen and Ma [24].

This table shows the lagged effect of ER on the FP of highly polluting firms. In order to avoid the problem of reverse causality between firm-level control variables and FP, all firm-level control variables in this paper are at a one-period lag. L1.ER, L2.ER, and L3.ER indicate the environmental regulations with a lag of one period, two periods, and three periods, respectively. Control variables are the same as those in Table 3. The t-values are in parentheses, and *, **, and *** represent significance levels at 10%, 5%, and 1%, respectively.
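Lagged regressors such as L1.ER, L2.ER, and L3.ER must be constructed within each firm so that values never leak across firms. A pandas sketch with hypothetical data:

```python
import pandas as pd

# Hypothetical firm-year panel of ER values.
df = pd.DataFrame({
    "firm": ["f1"] * 4 + ["f2"] * 4,
    "year": [2011, 2012, 2013, 2014] * 2,
    "er":   [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8],
}).sort_values(["firm", "year"])

# L1.ER, L2.ER, L3.ER: shift within each firm; early years become NaN
# rather than picking up another firm's values.
for n in (1, 2, 3):
    df[f"er_l{n}"] = df.groupby("firm")["er"].shift(n)
```

The first year of each firm has no one-period lag (NaN), which is why including L3.ER shortens the usable sample by three years per firm.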
Alternative Models and Variables
The above empirical models neglect the continuous nature and volatility of financial performance. In addition, the interaction effects of environmental regulation in different periods are not considered. We therefore address these issues and verify the robustness of the above findings in the following three ways: (1) adding the dependent variables with a lag of one period, (2) taking the logarithms of the dependent variables, and (3) adding the independent variables of different periods at the same time [41].
The results in Table 5 show that when the dependent variables with a lag of one period are added and/or the logarithms of the dependent variables are taken, ER still has a significant negative effect on FP in the current period (columns 1, 2, and 3), but a significant positive effect on FP three periods into the future (columns 4, 5, and 6). However, when both current and lagged ER are included, the negative effect of current ER on FP is not significant, but the positive effect of ER with a three-period lag on FP is still significant (columns 7 and 8).

This table shows the robustness analysis of the effect of ER on the FP of highly polluting firms. In order to avoid the problem of reverse causality between firm-level control variables and FP, all firm-level control variables in this paper are at a one-period lag. L1.ROA/ROE indicates ROA or ROE lagged by one period. L1.ER, L2.ER, and L3.ER indicate the environmental regulations with a lag of one period, two periods, and three periods, respectively. Control variables are the same as those in Table 3. The t-values are in parentheses, and *, **, and *** represent significance levels at 10%, 5%, and 1%, respectively.
Endogeneity Test
The endogeneity problem should be taken into consideration for the following reasons. First, ER and the profitability of the firm may both be influenced by factors such as the region's status, financial resources, and economic development. Second, ER and the FP of highly polluting firms can influence each other, because taxes collected from highly polluting firms are closely related to their FP. Third, there may be some neglected factors that affect FP.
We address the endogeneity problem with the method proposed by Hering and Poncet [29]. Specifically, we select the air circulation coefficient, measured by the wind speed multiplied by the height of the boundary layer, as an Instrumental Variable (IV) for ER. (The calculation of the air circulation coefficient is based on the wind speed at a 10-m height and the boundary layer height data on the global 0.75° × 0.75° grid provided by the European Centre for Medium-Range Weather Forecasts (ECMWF), matched with latitude and longitude data for Chinese cities.) In general, the stronger the air circulation, the more significant the pollution dispersion is, and therefore a more stringent ER is needed.
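The logic of the IV strategy can be illustrated with simulated data: an unobserved confounder biases the OLS estimate, while two-stage least squares using an exogenous instrument (here a stand-in for the air circulation coefficient) recovers the true effect. All numbers are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20_000
z = rng.normal(size=n)   # instrument: exogenous, affects ER but not profit directly
u = rng.normal(size=n)   # unobserved confounder hitting both ER and profit
er = 0.7 * z + u + 0.1 * rng.normal(size=n)
profit = -0.3 * er + u + 0.1 * rng.normal(size=n)   # true effect: -0.3

def slope(x, y):
    # Simple bivariate OLS slope.
    xd, yd = x - x.mean(), y - y.mean()
    return (xd * yd).sum() / (xd * xd).sum()

beta_ols = slope(er, profit)                   # biased: the confounder flips the sign
pi_hat = slope(z, er)                          # first stage: ER on the instrument
er_hat = er.mean() + pi_hat * (z - z.mean())   # fitted ER from the instrument
beta_iv = slope(er_hat, profit)                # second stage: close to -0.3
```

Because the confounder u pushes both ER and profit up, the naive OLS slope is positive even though the true effect is negative; the 2SLS estimate corrects this.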
The results of the first-stage estimation show that the air circulation coefficient is significantly positively related to ER, and both the F-value and the minimum eigenvalue statistic indicate that there are no weak instrument concerns. (In verifying the ER effect on FP in the current period, the coefficient of air circulation in the current period is positively correlated with ER at a significance level of 1% with a coefficient of 0.729. In verifying the lagged ER effect on FP, the coefficient of air circulation with a two-period lag is positively correlated at a significance level of 1% with a correlation coefficient of 0.466.) The second-stage regression results in Table 6 show that although the negative effect of ER on the current ROE of polluting firms is nonsignificant, there is a significant negative effect on current ROA and lnROA. In addition, ER with a two-period lag significantly increases ROA and ROE. The effect with a three-period lag is no longer significant, which slightly deviates from the baseline results but basically verifies the lagged positive effect of ER on FP.

This table shows the regression results of IV-2SLS to check the robustness of the effect of ER on the FP of highly polluting firms. In order to avoid the problem of reverse causality between firm-level control variables and FP, all firm-level control variables in this paper are at a one-period lag. L2.ER indicates the environmental regulations with a lag of two periods. Control variables are the same as those in Table 3. The Z-values are in parentheses, and *, **, and *** represent significance levels at 10%, 5%, and 1%, respectively. The F-values and the minimum eigenvalue statistics are larger than 10, indicating that there are no weak instrument concerns.
Mechanism Analysis
We combine Equations (5)-(7) to explore the intrinsic mechanism of the effect of ER on FP from the perspectives of the production costs, operating efficiency, and green innovation of polluting firms.
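The mediating effect model in Equations (6) and (7) regresses the mediator on ER and then the outcome on both ER and the mediator, with the indirect effect given by the product of the two paths. A sketch on simulated data (hypothetical coefficients; fixed effects omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
er = rng.normal(size=n)
# Eq (6) analogue: ER raises the mediator (e.g. green patents), path a = 0.8.
med = 0.8 * er + 0.5 * rng.normal(size=n)
# Eq (7) analogue: direct effect c = -0.2, mediator effect b = -0.5.
profit = -0.2 * er - 0.5 * med + 0.1 * rng.normal(size=n)

ones = np.ones(n)
# Step 1: mediator on ER -> path a.
a_hat = np.linalg.lstsq(np.column_stack([ones, er]), med, rcond=None)[0][1]
# Step 2: outcome on ER and mediator -> direct path c and path b.
c_hat, b_hat = np.linalg.lstsq(np.column_stack([ones, er, med]), profit,
                               rcond=None)[0][1:]
indirect = a_hat * b_hat   # mediated (indirect) effect, about 0.8 * (-0.5)
```

The total effect of ER on profit decomposes into the direct path c and the mediated path a × b, which is the decomposition the mechanism analysis relies on.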
Mechanism Analysis of ER Effects on Current FP
Considering that ER generally increases the compliance costs or green innovation input of polluting firms, we add the operating costs rate and the number of green patent applications to the model to explore the mechanism of ER effects on current FP. (The reason for choosing the number of green patent applications instead of R&D expenditure is that ER directly affects firms' green innovation, and the number of green patent applications directly reflects firms' investment in green innovation, while R&D expenses cover not only green innovation but also non-green innovation.) The regression results in Table 7 cannot support a positive effect of ER on the operating costs rate (columns 1 and 2), but the positive effect of ER on the number of green patent applications is significant in both the OLS and IV-2SLS models, which verifies Hypothesis 2b.

Table 7. Mechanism analysis of the effects of ER on current FP of polluting firms.
This table shows the mechanism analysis of the effects of ER on the current FP of polluting firms. In order to avoid the problem of reverse causality between firm-level control variables and FP, all firm-level control variables in this paper are at a one-period lag. Control variables are the same as those in Table 3.

Specifically, results from Table 8 show that ER significantly increases the number of green invention and utility patent applications of polluting firms in the current period. The number of green invention patent applications has a significant negative effect on current FP, while the effect of green utility patent applications is nonsignificant.

This table shows the mechanism analysis of the effects of ER on the current FP of polluting firms. In order to avoid the problem of reverse causality between firm-level control variables and FP, all firm-level control variables in this paper are at a one-period lag. Control variables are the same as those in Table 3. The mediating variable in columns 2 and 3 is the number of invention patent applications, and the mediating variable in columns 5 and 6 is the number of utility patent applications. Columns 3 and 6 are the results of IV-2SLS, and the F-values (18.264 and 52.671) of the first-stage regression indicate that there is no weak IV issue. The Z-values are in parentheses, and *, **, and *** represent significance levels at 10%, 5%, and 1%, respectively.
Mechanism Analysis of ER Effects on Future FP
The results in Table 9 show that ER with a three-period lag significantly increases the total asset turnover rate, thus improving FP, which verifies Hypothesis 3b. In contrast, the effect of ER with a three-period lag on the operating costs rate is nonsignificant, so Hypothesis 3a is not verified. In addition, we conducted robustness tests with the IV-2SLS model and the results still hold.
In addition, we find no evidence that there is a positive effect of green investment on FP (Hypothesis 3c) (owing to space limitations, the results are omitted but retained for reference), which may be due to the fact that the effect of green innovation on FP is not yet shown in the short term.
From the mechanism analysis, we find that ER increases green innovation investment, especially invention investment, thereby decreasing financial performance in the current period. In addition, ER increases the firm's financial performance three periods later mainly by promoting its operating efficiency.

This table shows the mechanism analysis of the lagged effect of ER on the FP of polluting firms. In order to avoid the problem of reverse causality between firm-level control variables and FP, all firm-level control variables in this paper are at a one-period lag. The independent variable in columns (1)-(4) is L3.ER, estimated by OLS models; in columns (5)-(8), the independent variable is L2.ER and the results are estimated by IV-2SLS models. L2.ER and L3.ER indicate the environmental regulations with a lag of two periods and three periods, respectively. Control variables are the same as those in Table 3.
Effect of ER on FP of Mildly Polluting Firms
We also obtained data on 936 mildly polluting A-share firms to analyze the effect of ER on their FP and compare it with heavy polluters. The results in Table 10 show that the effect of ER on the FP of mildly polluting firms is not significant, either for the current period or one to three periods later.

This table shows the effect of ER on the FP of mildly polluting firms. In order to avoid the problem of reverse causality between firm-level control variables and FP, all firm-level control variables in this paper are at a one-period lag.
Control variables are the same as those in Table 3. L2.ER and L3.ER indicate the environmental regulations with a lag of two periods and three periods, respectively. The results in columns (1)-(2) are estimated by OLS models and t-values are in parentheses. In addition, the results in columns (3)-(4) are estimated by IV-2SLS models and the Z-values are in parentheses. *, **, and *** represent significance levels at 10%, 5%, and 1%, respectively.
The Effect of ER on FP of Highly Polluting Firms by Ownership
We grouped the highly polluting firms by their ownership into state-owned and private to make a heterogeneous analysis. The results from Table 11 show that ER mainly affects state-owned firms, while it has a limited effect on private firms. This is likely because state-owned firms have a greater responsibility to be 'leaders' in implementing government policy and they can bear more financial risks associated with green innovation. Therefore, the negative effect of ER on current FP (column 1) and the positive effect on the future FP (column 4) of state-owned firms are significant. The last two columns verify that ER with a three-period lag increases the total asset turnover rate, thus improving the FP of state-owned firms. The results are basically in line with those of Xu et al. [42].

This table shows the heterogeneous effect of ER on the FP of polluting firms. In order to avoid the problem of reverse causality between firm-level control variables and FP, all firm-level control variables in this paper are at a one-period lag. Control variables are the same as those in Table 3. L3.ER indicates the environmental regulations with a lag of three periods. The results in columns (1), (3), and (5) are for state-owned firms, and the results in columns (2), (4), and (6) are for privately owned firms. The t-values are in parentheses, and *, **, and *** represent significance levels at 10%, 5%, and 1%, respectively.
The Effect of ER on Expected FP of Highly Polluting Firms
There are differences in the effect of ER on actual profitability and expected profitability [11]. Therefore, we use Tobin's Q to characterize expected FP and explore the effect of ER on the expected FP of highly polluting firms. Table 12 shows that ER has a significant negative effect on the expected FP of highly polluting firms in the current period (column 1) and one period later (column 2), while the effect on expected FP two or three periods later is nonsignificant (columns 3 and 4). In addition, the results still hold when Tobin's Q is logarithmized (lnTobin'sQ = ln(Tobin's Q + 100)) (columns 5 and 6). Our results are consistent with the conclusions of Rassier and Earnhart [11].

This table shows the effect of ER on Tobin's Q of highly polluting firms. In order to avoid the problem of reverse causality between firm-level control variables and FP, all firm-level control variables in this paper are at a one-period lag. Control variables are the same as those in Table 3. L1.ER, L2.ER, and L3.ER indicate the environmental regulations with a lag of one period, two periods, and three periods, respectively. The dependent variable in columns (1)-(4) is Tobin's Q, while the dependent variable in columns (5) and (6) is logarithmic Tobin's Q. The t-values are in parentheses, and *, **, and *** represent significance levels at 10%, 5%, and 1%, respectively.
In the further discussion, we confirm that the effect of ER on the FP of mildly polluting firms is not significant. In addition, ER mainly affects state-owned firms, while it has a limited effect on private firms, owing to the greater responsibilities borne by state-owned firms. Furthermore, ER has a significant negative effect on expected profitability in the current period and one period later, because ER exerts a negative, short-run effect on market value.
Conclusions
This paper explores the effect of ER on the FP of polluting firms, on which there is no consensus among scholars. We conduct an empirical analysis of the current and lagged effects of ER on FP, verifying a negative effect of ER on FP in the current period and a lagged positive effect on FP, which unifies CRH and PH on the temporal dimension. The main findings of this paper are as follows.
First, ER significantly reduces the FP of polluting firms in the current period, and one of the channels is the increase in their green innovation investment. Second, ER compels polluting firms to improve their operating efficiency by changing their production and operation methods, which has a significant positive effect on their FP, but with a certain lag. The above two conclusions unify CRH and PH, which are mixed in the existing research. Third, ER mainly affects the FP of highly polluting firms and state-owned firms, but not that of mildly polluting firms or private firms; this result indicates that ER should be targeted. In addition, we test the conclusion of Rassier and Earnhart [11] that there is a negative effect of ER on expected FP using data from China.
The conclusions show the 'short pain' and 'long gain' effects of ER on the FP of highly polluting firms and provide implications for governments and highly polluting firms. For governments, more green innovation subsidies should be provided for firms to promote their green transformation. For highly polluting firms, they should change their development concepts to realize green and sustainable development. From a theoretical point of view, this paper illustrates the different impacts of ER on the FP of polluting firms on the temporal dimension and explores the influencing mechanism, implying the potential effects of green innovation investment and operating efficiency. Future research can compare the different impacts of command-and-control and market-based environmental regulations on the current as well as future FP of polluting firms. Furthermore, the research can be extended to other emerging markets and developed countries.
"year": 2022,
"sha1": "2716a7cc86f7d9fe3e1ae99ad27cf179c7927c34",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/14/4/2412/pdf?version=1645338278",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "a1173b0a4ee64a5a6feebd561f91060e1c806abc",
"s2fieldsofstudy": [
"Environmental Science",
"Economics",
"Business"
],
"extfieldsofstudy": []
} |
The Shortcomings of COVID-19 Testing in Ecuador: Time to Incentivize Research and Innovation
The COVID-19 pandemic hit Ecuador severely. The country caught the attention of international media due to its high death toll and overwhelmed healthcare system. The clinical diagnostics system was rapidly overloaded, and the import of PCR tests was delayed. The case of Ecuador illustrates how middle-income countries rely heavily on the importation of biotechnological products for their healthcare systems. The Ecuadorian experience during the COVID-19 pandemic serves as a call for the formation of policies for the development of the biotechnological industry.
Ecuador has been one of the hardest-hit Latin American countries during the COVID-19 pandemic [1]. The initial ballooning of cases in Guayaquil during the first weeks of the epidemic received international media attention due to the high death toll and the overwhelmed healthcare system [2]. Nationwide, the number of confirmed COVID-19 deaths per million people is high compared to other countries, but as these numbers have accumulated over time, the daily number of diagnostic tests has not scaled accordingly (Figure 1A). Numerous countries with developing economies have faced challenges establishing effective mass-testing strategies [3], and these tend to be countries with underdeveloped healthcare/biotechnology industries (Figure 1B). The experience from countries with solid testing regimes shows that widespread testing is an essential tool for identifying and containing pockets of transmission promptly [3] and for supporting surveillance efforts [4].
In March 2020, during the first week of the epidemic in Ecuador, the National Institute for Public Health Research (INSPI) received 6826 samples for COVID-19 diagnosis using RT-PCR in its main lab in the city of Guayaquil. However, the maximum testing capacity of the INSPI laboratory in Guayaquil-the only one in the country authorized to process those samples at the time-was 350 tests per day. By March 18th, the INSPI laboratories in Quito and Cuenca had also begun to process samples to assist with the high testing demand. However, at this point, the entire diagnostics system was quickly overwhelmed, so much so that the delayed sample processing would not have been brought up to date until the end of April 2020 [5]. At this time, several private universities and specialized diagnostics laboratories began processing samples, some with a cost of up to $120 per person. Notably, the limiting factor for diagnostic scaling was the lack of PCR testing kits rather than the unavailability of certified laboratories. The often inaccessible cost of taking a test in the private sector further deepened the testing problems in the country, an issue that was also exacerbated by severe delays in importation times of essential consumables and reagents.
The Ecuadorian experience during the COVID-19 pandemic illustrates critical shortcomings that other middle-income countries may be experiencing and serves as a call for the establishment of strategies for biotechnological development. An area to be prioritized is the development of products for genomic surveillance and clinical diagnostics, including but not limited to the synthesis of oligonucleotides and the production of antibodies. Such strategies will require participation and contributions from multiple fronts, including the public and private sectors, industry, and academia, and the establishment of favorable social and commercial conditions for research and innovation.

The diagnostic gold standard for COVID-19 is the polymerase chain reaction (PCR), which is a fundamental technique in diagnosing ongoing SARS-CoV-2 infection. The availability of equipment and reagents needed for real-time PCR diagnostics in many low- and middle-income countries depends entirely on importations, which can be lengthy, bureaucratic, and costly. Importation issues ultimately delay the implementation of widespread testing, increasing the turnover times of test results. In Ecuador, private businesses can require up to 7 weeks for their imports to arrive. This may take several months in the public sector due to the mandatory public procurement procedures and complex processes for selecting qualified local suppliers.
The financial burden of this reliance on importations is considerable, with final supply costs reaching as high as 45% above the original retail price. The Ecuadorian government has granted tariff relief to institutions of higher education, yet universities can still only reduce their importation costs to 25% above the actual retail price. Moreover, the successful application of this tariff reduction scheme is slow-it can take at least 12 weeks for supplies to arrive.
The application processes that public institutions need to go through to acquire and import tax-free goods are complex and time-consuming under normal conditions, let alone under the high global demand for COVID-19 testing reagents when expediency is essential in securing key assets to maintain a national surveillance program. These challenges call for local and regional capacity building to counteract the negative effect of productive and technological dependency for key reagents and supplies. Despite these challenges, tariff benefits have allowed several universities to provide critical support to testing programs in the country.
During the pandemic, academia has become a significant ally to the health sector by providing expertise and data analysis [6], building up complementary testing capacity, developing clinical solutions [7], and performing genomic surveillance [8]. Ecuadorian law exempts higher education institutions from paying importation tariffs for goods to be used in research or teaching. However, it was unclear whether these testing efforts qualified as academic research, posing the additional challenge of establishing whether the tax exemptions extended to resources destined for health services, such as diagnostics. Universities with more robust institutional structures were able to address these bureaucratic, legal, and financial challenges earlier in the health emergency, resulting in a predominant role for academic institutions in cities such as Quito (Ecuador's capital) in supporting the response to the pandemic. Institutions in other regions of the country were delayed or unable to support testing programs or actively engage in research related to COVID-19.
Unfortunately, the limitations mentioned above are not new for Ecuadorian scientists. Before the beginning of the COVID-19 pandemic, the elevated costs of laboratory supplies and the delays during importation processes were already a burden for academic research [4]. The current health emergency has brought home the urgency of addressing these challenges by improving the conditions in which research is performed and promoting the development of the local biotechnology industry for the medical and healthcare sectors to better handle current and future health challenges in the country [9].
Ecuador, over the course of the last decade, has made progress amassing highly skilled and specialized human resources, increasing its ability to develop innovative biotechnological solutions which would benefit from competitive incentive programs. The COVID-19 pandemic has shown how nations can generate inventive designs for facing mass diagnostic challenges [10]; however, performing laboratory-based diagnosis is impossible without the resources that the biotechnology industry provides [9]. This is particularly important given that SARS-CoV-2 features a remarkable adaptive capacity derived from its high evolutionary rate and transmissibility, highlighting the need for diagnostic capabilities that can match the dynamic nature of the pandemic [11][12][13]. Furthermore, the emergence of new variants such as Omicron (detected in Ecuador in December 2021 [14]) emphasizes the value of incorporating genomic surveillance to the diagnostic pipelines to identify the drivers of viral evolution and to respond with appropriate public policies that can be translated from international scenarios to national and local realities [11,14,15]. The Ecuadorian experience during the COVID-19 pandemic illustrates critical shortcomings that other middle-income countries may be experiencing and serves as a call for the establishment of strategies for biotechnological development. An area to be prioritized is the development of products for genomic surveillance and clinical diagnostics, including but not limited to the synthesis of oligonucleotides and the production of antibodies. Such strategies will require participation and contributions from multiple fronts including the public and private sectors, industry, and academia and the establishment of favorable social and commercial conditions for research and innovation.
Data Availability Statement:
The data that support the findings of this study are openly available in Our World in Data https://ourworldindata.org/coronavirus (accessed on 10 January 2022) and PATSTAT v.2.6.8 from the European Patent Office https://data.epo.org/expert-services/index.html (accessed on 10 January 2022).
Conflicts of Interest:
The authors declare no conflict of interest.
A case of congenital Rett variant in a Chinese patient caused by a FOXG1 mutation
ABSTRACT Rett syndrome (RTT) is a severe progressive neurodevelopmental disease characterized by psychomotor regression. The FOXG1 gene is one of the pathogenic genes associated with the congenital Rett variant, which is less studied. Only a few Chinese patients with FOXG1 mutations have been reported. In this study, we describe a Chinese female patient with congenital Rett variant who presented with psychomotor retardation, developmental regression, microcephaly, seizure, stereotypic hand movement and hypotonia. Targeted high-throughput sequencing was conducted, and a heterozygous FOXG1 mutation [NM_005249.4: c.506dupG (p.G169Gfs*286)] was identified. It was a frameshift mutation resulting in alteration of the reading frames downstream of the mutation. SIMILAR CASES PUBLISHED: 10. CONFLICT OF INTEREST: None.
Rett syndrome (RTT) is a serious neurodevelopmental disorder, predominantly affecting females, with an incidence of about 1 in 10 000 live births. 1 The MECP2 gene is the major pathogenic gene of RTT, accounting for 95-97% of typical RTT cases and 50-70% of atypical RTT cases. 2 Mutations in the CDKL5 gene are correlated with the early-onset seizure variant of RTT. Additionally, the FOXG1 gene is associated with the congenital Rett variant, which is less studied. In 2008, Ariani et al 3 first recognized that the FOXG1 gene was correlated with the congenital Rett variant. FOXG1 mutations are reported to account for 1.5-15% of cases, depending on the inclusion criteria. 2 More than 50 mutations have been reported in the literature. 4 The mutation types include missense mutations, deletions and duplications, as well as copy number variations involving the FOXG1 gene. Mutations of the FOXG1 gene associated with RTT are rarely found in Asians, 2 and the mutation rate in China is also low (0.7% in Chinese patients). 5 To our knowledge, only two reports with ten patients appear in the literature. 5,6 In this study, we present a Chinese patient with a congenital Rett variant caused by a mutation of FOXG1. Informed consent was obtained from all participants.
CASE
An 11-month-old female patient was admitted to our hospital due to psychomotor retardation after birth. She was a full-term spontaneous delivery after an uneventful pregnancy. Birthweight was 2900 g. The patient was the first child of healthy non-consanguineous parents. There was no family history of psychomotor retardation or relevant genetic diseases. She was unable to roll over, crawl or sit. Her head control was unsteady. She displayed repetitive thrusting of the tongue, stereotypical movement of the hands and finger sucking. She did not respond to others. She ate little, and only slowly gained weight. Sleep disturbance was also observed, with shortened sleep duration and easy waking. There was no history of seizure. On physical examination, her head circumference was 40.5 cm with a closed anterior fontanel. Facial signs consisted of synophridia, a slightly round nose, a high palatomaxillary arch and micromandible. Additionally, she had hypomyotonia of the upper limbs, dystonia of the lower limbs and hyperreflexia of the knees. Ocular investigation showed horizontal nystagmus, and she was unable to fix her gaze.
Gesell development scale evaluation showed a seriously delayed developmental quotient, with scores of 27 in adaptability, 19 in gross motor, 24 in fine motor, 22 in language, and 10 in personal-social behavior. The international scoring system score was 23.
Routine chromosome analysis showed 46, XX karyotype. On nerve electrophysiological examination, somatosensory evoked potential revealed abnormality of cortical segment in extremity. Visual evoked potential displayed an almost normal right latent period with decreased amplitude. Electroencephalogram (EEG) revealed middle amplitude sharp waves in the bilateral occipital region (Figure 1). Cerebral magnetic resonance imaging (MRI) displayed dysplasia of the corpus callosum, and the frontal and parietal lobes ( Figure 2).
During a follow-up period of 18 months, she suffered tonic-clonic seizures at the age of 16 months. The EEG result was the same as before. At the age of 29 months, she could only take a small amount of liquid diet, with severe salivation. Development was delayed, with a height of 90 cm (-2SD) and weight of 1250 g (-1SD). She was able to crawl a short distance, but could not sit alone. The bilateral ankle joints became contracted, with limitation of dorsiflexion. Eye contact, attention and purposeful hand use had improved, but at the age of 40 months she presented with regression in cognitive ability, language, motor skills and sociality. She could previously recognize family members, but at 40 months she could not distinguish a family member from a stranger. She could previously say mother and father, but could no longer speak any words. With respect to sociality, she could previously express emotion and play simple games with others, but could no longer respond to external stimuli. In motor skills, she even lost the ability to bring grasped objects to her mouth.
Genomic DNA extraction
This study was approved by the medical ethics committee. Informed consent was obtained from the guardians of the patient. Blood samples were collected from the peripheral venous blood of the patient and her parents. Genomic DNA was then extracted from the blood samples using a DNA Extraction Kit (Tiangen Biotech Co., Ltd., Beijing, China) according to the manufacturer's instructions. DNA quality and quantity were assessed with a Multiskan FC microplate reader (Thermo Labsystems, USA). Purity and concentration were determined from the absorbance at 260 nm and 280 nm; pure DNA has an A260/A280 ratio of 1.7-1.9.
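The purity criterion above is a simple ratio check. A minimal sketch (the helper name is hypothetical, not part of the study's workflow):

```python
def dna_purity(a260, a280):
    """Return the A260/A280 absorbance ratio and whether it falls in the
    1.7-1.9 window used above as the purity criterion for DNA."""
    if a280 <= 0:
        raise ValueError("A280 must be positive")
    ratio = a260 / a280
    return ratio, 1.7 <= ratio <= 1.9

# Example readings from a spectrophotometer: a ratio of 1.8 passes the check,
# while a ratio of 2.0 (suggestive of RNA contamination) does not.
ratio, is_pure = dna_purity(0.90, 0.50)
```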
Targeted high-throughput sequencing
The high-throughput sequencing was conducted on the Illumina HiSeq 2500 platform (Illumina, United States). Briefly, the library was constructed after the DNA was sheared, end-repaired, adaptor-ligated, size-selected, PCR-amplified and normalized with dedicated kits or devices. The library was prepared using the NimbleGen SeqCap EZ Choice Kit (Roche, Switzerland). Based on the relevant literature and the OMIM database (https://www.omim.org), approximately 4000 genes related to RTT and psychomotor retardation were targeted, including CDKL5, FOXG1, CNTNAP2 and FOLR1.
Candidate mutation confirmation by Sanger sequencing
The candidate variant was confirmed by Sanger sequencing in the patient and her parents. The specific primers were designed using Primer3 (http://primer3.ut.ee/). The primers were as follows: F: TACATGACTTGCCAGCGCCCGAGCC; R: CCCACATTGCACCTCGCTGACACTCC. The reaction conditions were as follows: pre-denaturation at 95 °C for 5 min, followed by 30 cycles of denaturation at 95 °C for 30 s, annealing at 65 °C for 30 s and extension at 72 °C for 10 s. The amplification products were sequenced on an ABI 3730 DNA Sequencer (Applied Biosystems, CA, USA). The Sanger sequencing data were analyzed with DNASTAR software.
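As a quick sanity check, the cycling program above implies roughly 40 minutes of thermocycler block time. A back-of-the-envelope sketch (ramp rates and any final hold are ignored, so this is a simplification, not an instrument-accurate estimate):

```python
# Rough runtime estimate for the confirmation-PCR program described above.
PRE_DENATURATION_S = 5 * 60      # pre-denaturation: 95 °C for 5 min
CYCLE_STEPS_S = (30, 30, 10)     # per cycle: denaturation, annealing, extension (s)
N_CYCLES = 30

total_s = PRE_DENATURATION_S + N_CYCLES * sum(CYCLE_STEPS_S)
print(f"~{total_s / 60:.0f} min of block time")  # ~40 min
```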
Chromosomal microarray analysis
The DNA sample was detected by Affymetrix CytoScan HD Array (Affymetrix, USA). Data analysis was performed using the software of Affymetrix ® Chromosome Analysis Suite 2.0.
RESULTS
A heterozygous mutation of the FOXG1 gene [NM_005249.4: c.506dupG (p.G169Gfs*286)] was detected in the patient, which has been reported previously. 4 It was a de novo mutation, not inherited from her parents (Figure 3). It results in a frameshift at position 169 of the protein, altering the reading frame downstream of the mutation. The detection of other genes was negative. The chromosomal microarray analysis showed no copy number variants of clinical significance.
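The effect of such a single-base duplication on the reading frame can be illustrated on a toy sequence (not the real FOXG1 sequence; purely illustrative):

```python
def codons(seq):
    """Split a coding sequence into complete codons, dropping any remainder."""
    return [seq[i:i + 3] for i in range(0, len(seq) - 2, 3)]

wild_type = "ATGGCAGATTTA"
# Duplicate the G at (1-based) position 5, analogous in spirit to c.506dupG
mutant = wild_type[:5] + "G" + wild_type[5:]

codons(wild_type)  # ['ATG', 'GCA', 'GAT', 'TTA']
codons(mutant)     # ['ATG', 'GCG', 'AGA', 'TTT'] -- every downstream codon shifts
```

Every codon after the duplicated base is read in a shifted frame, which is why the mutation alters the entire downstream protein sequence rather than a single residue.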
A literature search was performed in PubMed (https://www.ncbi.nlm.nih.gov/pubmed) from database inception to February 2018. The search terms were RTT, Rett syndrome and FOXG1 mutation. Studies were considered eligible if complete clinical data were available, including age of onset, clinical features and imaging data. Twenty relevant reports were retrieved, but only 10 were selected, due to incomplete or obscure clinical data in the others (Table 1). Among these, there were 18 patients with congenital Rett variant caused by FOXG1 mutation and 18 variants of FOXG1, including 8 frameshift, 5 nonsense and 5 missense mutations. Except for two cases, 5,7 no family history was reported.
DISCUSSION
RTT is a serious neurodevelopmental disorder involving defects in motor, cognitive and social ability. The FOXG1 gene is one of the pathogenic genes of the congenital Rett variant. In 2008, Ariani et al 3 first recognized that the FOXG1 gene was correlated with the congenital Rett variant. The mutation rate of FOXG1 is low: the reported incidence is as low as 1.5% or as high as 15%, depending on the inclusion criteria. 2 In China, the main pathogenic gene of RTT is the MECP2 gene, and mutation of FOXG1 is rarely reported, 5,6 with a mutation rate of 0.7% in Chinese patients, as reported by Zhang Q et al in 2017. 5 In this study, the patient presented with psychomotor retardation, microcephaly, seizure, stereotypic hand movement, hypotonia and developmental regression, which was consistent with the diagnostic criteria of the congenital Rett variant. 1 Compared with other types of RTT, the congenital Rett variant caused by FOXG1 mutation is characterized by congenital or early onset, acquired microcephaly, serious language defects, attention and social deficits, seizure, feeding difficulty, developmental delay, psychomotor regression and stereotypic hand movement. 15 Among the 19 cases (18 from the literature and one of our own), onset was after birth in most patients. The regression period was not identified in the majority, probably due to the severe developmental delay at early onset. 8 All patients presented with the shared characteristics of congenital or acquired microcephaly, movement retardation, and language and attention disorder. 2,5,[7][8][9][10][11][12][13][14] Several clinical features were also manifested in some patients: hypotonia, sleep disturbance, stereotypical upper limb movement, hand function disorder and mood abnormality. Microcephaly may be related to the role of FOXG1 in the development of the telencephalon.
14 The seizure types are nonspecific and include generalized tonic-clonic seizures, drop attacks, myoclonic seizures, atonic seizures and generalized tonic seizures. The seizures are relatively mild compared with CDKL5-related seizures and are easily controlled with fewer than three antiepileptic drugs. 2 Eleven patients, including our case, presented with dysphagia or feeding difficulty. 5,7,[9][10][11][12][13][14] The muscles involved in swallowing are mostly voluntary muscles and are prone to dysmyotonia, which may be the cause of dysphagia or feeding difficulty. Malnutrition is common: ten patients in this study had height and weight abnormalities, and several older cases were even treated by long-term nasal feeding or a feeding fistula in the digestive tract to improve nutrition. 11,13 This suggests that dysphagia is the predominant cause of physical development disorder; thus, therapy for dysphagia is a main part of clinical care. In addition, only a few reports described symptoms of scoliosis, strephenopodia and joint contracture in older patients, which may reflect the limited long-term follow-up. 7,11,13 We therefore think it is necessary to prevent bone malformation in the early stage of the disease. On imaging, cerebral MRI usually reveals delayed myelination or hypomyelination, atrophy of the frontal and temporal lobes with gyral simplification, as well as a hypoplastic corpus callosum. 2,5,7-8,10-13 A hypoplastic hippocampus has also been reported. 8 The management of the congenital Rett variant caused by FOXG1 remains challenging. There is no effective therapy at present; early comprehensive rehabilitation treatment is advised. Nervous system symptoms such as epilepsy and dystonia can be treated with medicine.
For patients with dysphagia or feeding difficulty, appropriate feeding methods should aim to improve nutritional status and prevent early skeletal deformity during clinical care.
Multiparameter analysis of homogeneously R-CHOP-treated diffuse large B cell lymphomas identifies CD5 and FOXP1 as relevant prognostic biomarkers: report of the prospective SAKK 38/07 study
Background The prognostic role of tumor-related parameters in diffuse large B cell lymphoma (DLBCL) is a matter of controversy. Methods We investigated the prognostic value of phenotypic and genotypic profiles in DLBCL in clinical trial (NCT00544219) patients homogenously treated with six cycles of rituximab, cyclophosphamide, hydroxydaunorubicin, vincristine, prednisone (R-CHOP), followed by two cycles of R (R-CHOP-14). The primary endpoint was event-free survival at 2 years (EFS). Secondary endpoints were progression-free (PFS) and overall survival (OS). Immunohistochemical (bcl2, bcl6, CD5, CD10, CD20, CD95, CD168, cyclin E, FOXP1, GCET, Ki-67, LMO2, MUM1p, pSTAT3) and in situ hybridization analyses (BCL2 break apart probe, C-MYC break apart probe and C-MYC/IGH double-fusion probe, and Epstein–Barr virus probe) were performed and correlated with the endpoints. Results One hundred twenty-three patients (median age 58 years) were evaluable. Immunohistochemical assessment succeeded in all cases. Fluorescence in situ hybridization was successful in 82 instances. According to the Tally algorithm, 81 cases (66 %) were classified as non-germinal center (GC) DLBCL, while 42 cases (34 %) were GC DLBCL. BCL2 gene breaks were observed in 7/82 cases (9 %) and C-MYC breaks in 6/82 cases (8 %). “Double-hit” cases with BCL2 and C-MYC rearrangements were not observed. Within the median follow-up of 53 months, there were 51 events, including 16 lethal events and 12 relapses. Factors able to predict worse EFS in univariable models were failure to achieve response according to international criteria, failure to achieve positron emission tomography response (p < 0.005), expression of CD5 (p = 0.02), and higher stage (p = 0.021). 
Factors predicting inferior PFS were failure to achieve response according to international criteria (p < 0.005), higher stage (p = 0.005), higher International Prognostic Index (IPI; p = 0.006), and presence of either C-MYC or BCL2 gene rearrangements (p = 0.033). Factors predicting inferior OS were failure to achieve response according to international criteria and expression of FOXP1 (p < 0.005), cyclin E, CD5, bcl2, CD95, and pSTAT3 (p = 0.005, 0.007, 0.016, and 0.025, respectively). Multivariable analyses revealed that expression of CD5 (p = 0.044) and FOXP1 (p = 0.004) are independent prognostic factors for EFS and OS, respectively. Conclusion Phenotypic studies with carefully selected biomarkers like CD5 and FOXP1 are able to prognosticate DLBCL course at diagnosis, independent of stage and IPI and independent of response to R-CHOP. Electronic supplementary material The online version of this article (doi:10.1186/s13045-015-0168-7) contains supplementary material, which is available to authorized users.
Background
Diffuse large B cell lymphoma (DLBCL) is the most common nodal lymphoid malignancy, comprising approximately 30 % of all adult lymphomas, with a rapidly rising incidence [1,2]. DLBCL demonstrates an aggressive clinical course, but potentially 60-70 % of patients can be cured with the established rituximab, cyclophosphamide, hydroxydaunorubicin, vincristine, prednisone (R-CHOP) treatment standard [3]. Prediction of survival and stratification of patients for risk-adjusted therapy is based on the International Prognostic Index (IPI) [4]. R-CHOP has not only led to a marked improvement of survival in DLBCL but has also called into question the significance of the IPI [5], leading to the introduction of the revised IPI (R-IPI) [6]. Recent data suggest that the IPI and R-IPI no longer reliably identify DLBCL risk groups with a <50 % chance of survival, even though about 30-40 % of patients still die of/with disease. Thus, there is a need for additional, particularly tumor-related, prognostic (and predictive) factors in DLBCL [7].
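The IPI mentioned above assigns one point for each of five adverse clinical factors. A minimal sketch of the standard scoring (function name and risk-group labels are illustrative, not taken from the study):

```python
def ipi(age, ann_arbor_stage, ldh_elevated, ecog, extranodal_sites):
    """Standard International Prognostic Index: one point per adverse factor."""
    score = sum([
        age > 60,
        ann_arbor_stage >= 3,    # Ann Arbor stage III/IV
        bool(ldh_elevated),      # serum LDH above the upper limit of normal
        ecog >= 2,               # ECOG performance status 2-4
        extranodal_sites > 1,    # more than one extranodal site
    ])
    group = ("low", "low", "low-intermediate",
             "high-intermediate", "high", "high")[score]
    return score, group

ipi(65, 4, True, 1, 0)   # -> (3, 'high-intermediate')
```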
To date, only a limited number of tumor-related prognostic parameters exist for DLBCL, such as the presence of C-MYC rearrangements or co-expression of bcl2 and c-myc. The morphological heterogeneity of DLBCL is reflected by significant molecular diversity at the genotypic, gene expression, and phenotypic levels [8,9]. Gene expression profiling data convincingly showed that DLBCLs are derived from germinal center B cells (GCB) or activated B cells (ABC) [9][10][11]. Although the scientific evidence is robust and prognostically relevant, its translation into daily practice remains impractical because of the required high standard of tissue preservation, procedure duration, and costs. This problem prompted the search for molecular prognostic markers applicable to routine biopsies from patients with DLBCL. As a result, a large body of surrogate (phenotypic) models and algorithms to identify GCB and non-GCB DLBCL has been proposed and linked to outcomes [12]. Unfortunately, the reliability and reproducibility of these models are often poor, impeding their translation into standard practice to predict survival and stratify patients for risk-adjusted therapy [12][13][14]. Technical issues, poor study designs, lack of standardization of evaluation procedures, and, particularly, lack of prospective trials all prevent efficient clinical translation. A PubMed search for "DLBCL," "R-CHOP," "prognostic," "marker," and "prospective" identifies only a few prospective studies in which biomarkers have been considered (e.g., [15][16][17][18][19][20][21][22][23][24]). Thus, there is an unmet requirement for further marker validation in prospective trials.
The translational study of the clinical trial "SAKK 38/07 Prospective evaluation of the prognostic value of positron emission tomography (PET) in patients with diffuse large B-cell-lymphoma under R-CHOP-14. A multicenter study" offered a unique opportunity to prospectively analyze the prognostic and predictive value of phenotypic and genotypic biomarkers suggested to play a prognostic role in DLBCL in a well-documented and homogeneously treated clinical trial collective.
Materials and methods
Patient recruitment, selection, and treatment

The recruitment of patients for the SAKK 38/07 study started in November 2007 and finished in June 2010. The main objective was to evaluate the prognostic value of metabolic response, as assessed by early PET after two cycles of R-CHOP-14, to identify a patient subgroup with poor outcome. PET was performed before therapy, after two cycles of therapy, and at the end of treatment, and was evaluated according to a 5-point scoring system with the cutoff determining positivity set at 4 points (moderately increased uptake compared with the liver) [25]. The primary endpoint was event-free survival (EFS) at 2 years, and the secondary endpoints were progression-free (PFS) and overall survival (OS) after 2 and 5 years as well as the objective responses according to international criteria [26]. In accordance with the statistical advice for reaching sufficient power to address the two endpoints, recruitment of 154 patients was aimed for. Because of concurrent registrations on the last recruitment day, 156 instead of 154 patients were recruited. Inclusion criteria were histologically proven diagnosis of CD20-positive DLBCL (no pretreatment revision of the slides by an expert hematopathologist was planned) including all Ann Arbor stages, tumor size >14 mm on CT or MRI (because lymph nodes ≥15 mm are considered "pathologic" on computerized imaging), PET positivity of the tumors (documented 2 weeks to 4 days prior to registration), performance status 0-2 on the ECOG scale, age >17, as well as no evidence of symptomatic central nervous system (CNS) disease, HIV, and/or hepatitis infection [27]. The study treatment consisted of R-CHOP given for six cycles followed by two additional applications of rituximab every 2 weeks (R-CHOP-14). Additionally, G-CSF support was given. The patients were asked to provide informed consent for the study and, separately, for the translational research.
The primary pathology institutions were asked to send representative paraffin blocks for translational research after accomplishing the in-house diagnostic procedures to the Institute of Pathology at the University Hospital Basel. The study was approved by the Ethics Committee Beider Basel. Details of the SAKK 38/07 study are reported elsewhere [28].
In situ biomarker analysis
Immunohistochemical (bcl2, bcl6, c-myc, CD5, CD10, CD95, CD168, cyclin E, FOXP1, GCET, LMO2, MUM1p, pSTAT3) and in situ hybridization analyses [BCL2 break apart probe (BAP), C-MYC BAP and C-MYC/IGH double-fusion probe (DFP), and Epstein-Barr virus probe (EBER)] were performed and correlated with clinicopathological parameters and clinical endpoints. Cell of origin (COO) was determined according to the Tally algorithm [29]. Additionally, selected cases were stained for CD23, CD30, cyclin D1, D2, D3, Ki-67, p27, p63, and SOX11 for specification of diagnosis. Reagent sources, pretreatment and incubation conditions, and cutoff scores are listed in Table 1. Immunohistochemical markers were assessed by microscopic counting of positive cells/tumor cells and were recorded in 5 % increments in the primary statistical table. All cases were scored after training by at least two observers (either AT, SM, or SD), and only markers for which Cronbach's alpha analysis suggested good agreement between observers (alpha >0.75) were considered for prognostic evaluation. Relevant cutoff scores were either taken from the literature [29,30] or calculated applying receiver operating characteristic (ROC) analysis [12]. Discrepancies in the results for evaluated markers, which were almost exclusively due to differential assessment of weak staining signals, were discussed at a double-headed microscope and the concordant result was considered. Fluorescence in situ hybridization (FISH) was performed exactly as described elsewhere [31]. All cases were FISH-scored twice (NL and AT) with an excellent agreement (alpha = 1) between both observers.
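The agreement statistic used above can be sketched in pure Python, treating the observers as the "items" of the standard Cronbach's alpha formula (population variances; a sketch, not the study's actual SPSS routine):

```python
def cronbach_alpha(ratings):
    """ratings: one list of scores per observer, same cases in the same order.
    Alpha > 0.75 was taken above to indicate good inter-observer agreement."""
    k = len(ratings)                          # number of observers ("items")
    n = len(ratings[0])                       # number of scored cases

    def pvar(xs):                             # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    sum_item_vars = sum(pvar(obs) for obs in ratings)
    case_totals = [sum(obs[i] for obs in ratings) for i in range(n)]
    return k / (k - 1) * (1 - sum_item_vars / pvar(case_totals))

# Two observers in near-perfect agreement on four cases (scores in 5 % increments)
cronbach_alpha([[10, 20, 30, 40], [10, 20, 35, 40]])  # close to 1.0
```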
Statistics
All statistical analyses were performed using the Statistical Package of Social Sciences (IBM SPSS version 19.0, Chicago, IL, USA) for Windows and reported applying the REMARK guidelines [32]. The inter-observer agreement was assessed using Cronbach's alpha reliability analysis; an alpha value of >0.75 indicates very good agreement. The Spearman rank correlation was used to analyze relationships between biomarkers and clinical and laboratory parameters; only correlations with |rho| ≥ 0.300 were considered. The Mann-Whitney U and Kruskal-Wallis tests were applied, where appropriate, to identify quantitative differences between groups. The prognostic performance of variables and determination of optimal cutoff values (except those extracted from the most recent literature) were assessed by ROC curves plotting sensitivity versus 1-specificity, with special consideration of the respective area under the ROC (AUROC). The optimal cutoff point was calculated using Youden's index (Y), denoting Y = sensitivity + specificity − 1, since this method can be applied to find the optimal unbiased cutoff value with the highest sensitivity and specificity [12]. OS was measured from registration to death or last follow-up; PFS from registration to relapse, death of any cause, or last follow-up; and EFS from registration to relapse, death of any cause, initiation of any nonprotocol anticancer treatment because of lymphoma symptoms or need of concomitant radiotherapy, or last follow-up. (Note to Table 1: for diagnostic purposes and to "subtract" CD3-positive T cells in CD5-positive DLBCL, CD3 and CD20 stainings were also performed, but these were not considered biomarkers sensu stricto.) The probabilities of survival were determined using the Kaplan-Meier method, and differences were compared using the log-rank test.
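The cutoff search described above (maximizing Youden's J over candidate thresholds) can be sketched as follows; variable names are illustrative, and the study's actual analysis was performed in SPSS:

```python
def youden_optimal_cutoff(values, events):
    """Scan candidate cutoffs for a biomarker and return the one maximizing
    Youden's J = sensitivity + specificity - 1.
    values: biomarker scores; events: 1 = outcome event, 0 = no event.
    Cases with value >= cutoff are called test-positive."""
    pos = sum(events)
    neg = len(events) - pos
    assert pos and neg, "need both events and non-events"
    best_cut, best_j = None, -1.0
    for cut in sorted(set(values)):
        tp = sum(v >= cut and e == 1 for v, e in zip(values, events))
        tn = sum(v < cut and e == 0 for v, e in zip(values, events))
        j = tp / pos + tn / neg - 1          # sensitivity + specificity - 1
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut, best_j

# Perfectly separable toy data: the cutoff lands at the lowest event value
youden_optimal_cutoff([5, 10, 20, 40, 60, 80], [0, 0, 0, 1, 1, 1])  # (40, 1.0)
```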
All biomarkers of prognostic significance in univariable models underwent multivariable analysis using the Cox proportional hazards model in a two-step manner: the response criterion (response according to international criteria, PET response, or combined PET/CT response) showing the highest relevance in an independent first-step Cox model run without biomarkers was carried forward and compared with the biomarkers in the second step.
All p values were two-sided and considered statistically significant if <0.05. No adjustment for multiple testing was applied for secondary analyses because they were considered hypothesis generating and exploratory.
Patients, case review, and clinico-pathologic characteristics
Nineteen patients refused participation in the translational research part of the project. In 11 cases, no material for translational research was available. Thus, 126 cases were further studied. DLBCL diagnosis could not be confirmed in three of these cases by conventional morphology and additional immunohistochemical evaluation (a final diagnosis of marginal zone lymphoma was established in two cases, and one turned out to be a blastoid mantle cell lymphoma). The analysis was therefore finally performed on 123 cases. Patient characteristics are given in Table 2. Survival data were complete for 116 patients. Eighty-nine lymphomas were primary nodal or of lymphoid tissue (including the mediastinum, the spleen, and Waldeyer's ring), while 34 were extranodal (most commonly soft tissue, gastrointestinal tract, and bones). Based on integrative analysis, 100 cases were shown to be centroblastic DLBCL, five were immunoblastic DLBCL, three were anaplastic DLBCL, six were unclassifiable, six were primary mediastinal large B cell lymphomas (PMBL; thereof, two were nodal DLBCL with morphologic and phenotypic features of PMBL), two were T cell- and histiocyte-rich B cell lymphomas (THRBCL), and one was a lymphomatoid granulomatosis (LG) grade 3.
The study material consisted of 66 (54 %) lymphadenectomy specimens that were studied on tissue microarrays (TMA) and 57 (46 %) cases with only small core needle biopsy material available, which were considered non-arrayable and were studied on conventional serial sections. Arrayable cases were brought into a TMA format applying the 1-mm core needle as described [33].
Outcome analysis
The primary study endpoint, i.e., EFS at 2 years, correlated with failure to achieve response according to international criteria and failure to achieve complete combined metabolic and morphologic response or metabolic response (rho values for all >0.470, p values for all <1e−5). The median follow-up period was 53 months (95 % CI 45-51). There were 48 events, including 16 lethal events and 12 relapses 3 months after achievement of CR, of which 6 occurred >12 months after initial diagnosis. The 16 lethal events encompassed 9 deaths with/of disease and 7 deaths unrelated to cancer. Mean OS was 68 months (95 % CI 64-71), mean PFS was 59 months (95 % CI 53-65), and mean EFS was 46 months (95 % CI 40-52); median OS, PFS, and EFS for the whole collective were not reached. All biomarkers were assessed for their prognostic importance after rational dichotomization (cutoffs listed in Table 1). Factors able to predict worse EFS in univariate Kaplan-Meier models were failure to achieve response according to international criteria, failure to achieve complete combined metabolic and morphologic response or metabolic response (p values for all <0.005), expression of CD5 (p = 0.02; Fig. 2a), and higher stage (p = 0.021). Factors predicting inferior PFS were failure to achieve response according to international criteria, failure to achieve complete combined metabolic and morphologic (but not only metabolic) response (p < 0.005), higher IPI (p = 0.006), higher stage (p = 0.005), presence of either C-MYC or BCL2 gene rearrangements (p = 0.033; Fig. 2b), and expression of cyclin E in >12 % of tumor cells (p = 0.046; Fig. 2c). Finally, factors predicting inferior OS were failure to achieve response according to international criteria, failure to achieve complete combined metabolic and morphologic (but not only metabolic) response (p values for all <0.005), and expression of FOXP1 in >50 % of tumor cells (p < 0.005; Fig. 2d; Table 4).
Subgroup analysis limited to the DLBCL, not otherwise specified (NOS) cohort (omitting PMBL, THRBCL, and LG because of their more specific biology) revealed that expression of CD5 (p = 0.044) retained its independent prognostic significance with respect to EFS (more sensitive for early events) and expression of FOXP1 (p = 0.004) with respect to OS (later events), while all other biomarkers failed to add prognostic information. In the case of CD5, because of the only weak correlation of CD5 with phenotypic bcl2/c-myc double hits, the limited number of CD5-positive cases, and the lacking prognostic significance of phenotypic bcl2/c-myc double hits in this series, multivariable analysis was not adjusted for phenotypic bcl2/c-myc double hits. Adjustment for phenotypic bcl2/c-myc double-hit scores in the case of FOXP1 showed that it retained its prognostic significance in those DLBCL, NOS cases scored 0 and 1 (and outperformed failure to achieve combined metabolic and morphologic remission in cases scored 0), but neither expression of FOXP1 nor failure to achieve complete combined metabolic and morphologic remission was of prognostic significance with respect to OS in phenotypic bcl2/c-myc double-hit score 2 DLBCL, NOS cases (data not shown in detail).
Since CD5 expression appeared to be of significant relevance, we thoroughly revised the four CD5-positive cases and evaluated multiple immunohistochemical markers to exclude blastoid mantle cell lymphomas (shown above). The four CD5-positive DLBCL were negative for cyclin D1 and SOX11 and expressed p27. These cases stained positively for CD5 in 50 to 100 % of tumor cells, did not show an intravascular component, and were negative for EBER; three were classified as non-GCB, while one was GCB; and three showed centroblastic morphology, while one was classified as centroblastic with increased immunoblasts. None of these four CD5-positive cases showed presence of either C-MYC or BCL2 gene rearrangements; however, two patients fulfilled phenotypic criteria for double-hit lymphoma, expressing bcl2 or c-myc above the respective cutoff scores. Two patients were male; two suffered from nodal lymphomas; two were Ann Arbor stage II, while the other two were stage I and III, respectively; and two patients had an IPI of 1 and two an IPI of 2. The mean age of the CD5-positive patients was 64 ± 13 years, while that of the CD5-negative was 58 ± 13 (difference not of statistical significance). Two of the four patients failed to achieve remission (one of these two patients died of/with lymphoma), and in the other two, DLBCL relapsed after 8 and 38 months, respectively. Finally, DNA of the four CD5-positive cases was extracted and subjected to array comparative genomic hybridization (aCGH) analysis (Fig. 3) exactly as described elsewhere [35]. The analysis was successful in two cases and showed recurrent gains of 19q and losses of 1q43 [36], thus further corroborating the diagnosis of DLBCL. One of the cases showed specific loss of 9p21 (INK4A locus, also known as p16), known to be associated with DLBCL resistance to R-CHOP [37].
Discussion
Within this prospective study, we identified potential biomarkers (expression of CD5 for EFS and expression of FOXP1 for OS) that were able to predict the course of DLBCL at diagnosis, independent of stage and IPI. As expected ( [38] and literature therein), dynamic parameters, such as response to therapy and especially failure to achieve complete remission, which are not obtainable at diagnosis, seem to be the most reliable outcome indicators in DLBCL, yet expression of CD5 and FOXP1 added information independent of these disease dynamic parameters.
Concerning the central aim of our study, i.e., to detect in situ biomarkers that reliably help predicting the outcome of DLBCL in a prospective, homogeneously treated collective of patients, our phenotypic and genotypic analyses show that carefully selected indicators such as CD5 might identify small yet prognostically relevant subgroups with adverse outcomes under R-CHOP. CD5 as a biomarker has a special sensitivity towards early adverse events, which might not be the case for some of the currently propagated biomarkers of prognostic relevance such as c-myc expression/C-MYC gene status. Furthermore, our data reappraise the prognostic role of FOXP1 with respect to OS. Several other previously studied biomarkers with suspected prognostic potential like COO, expression of bcl2, or phenotypic double-hit score appeared to be less potent in the studied collective. This might in part be due to the small size of our study, in part to genuine properties of these markers, and in part to the fact that some of these markers, while being applicable to CHOP-treated DLBCL patients, are not applicable to cases treated with R-CHOP [39]. Considering our study size, there are obvious and inevitable limitations. Yet, because of the other characteristics of our collective (123 uniformly treated patients with a median follow-up period of 53 months and altogether 51 adverse events), our data solidify understanding of the prognostic importance of in situ biomarkers in DLBCL, and the 2-year EFS analysis delivers important results. Respecting the genuine properties of some markers, especially those used as surrogates to determine COO, our results as well as observations of others [14] seriously challenge their reliability to identify prognostically and/or biologically meaningful groups among DLBCL.
Our observed prognostic role of CD5 and FOXP1 and possible prognostic role of bcl2 as well as structural genetic aberrations of (either) BCL2 or C-MYC are supported by other reports ( [31,[40][41][42][43][44][45][46] and literature therein). While a considerable number of recent papers focused on the role of bcl2 and c-myc in DLBCL [34,46,47], it seems that CD5 merits special attention for several reasons: (a) it can be very easily detected in DLBCL by standard application of CD5 (instead of CD3) immunohistochemistry in the primary diagnostic panel with subsequent application of CD3 in CD5-positive cases (to subtract the "true" T cells), as well as CD23, cyclin D1, and SOX11 (to exclude transformed small lymphocytic B cell lymphomas and blastoid mantle cell lymphomas); (b) the respective cases express CD5 in a high proportion of tumor cells (>50-100 %) with a moderate to strong staining intensity, and thus, its evaluation is unequivocal without the need for subjective and error-prone cutoff scores; and (c) because there is an increasing body of literature suggesting that CD5-positive DLBCL might represent a distinct biologic entity, being more prone to intravascular spread and extranodal location (particularly CNS), affecting individuals from the Far East and displaying a more aggressive behavior probably requiring alternative treatment approaches [40]. CD5-positive DLBCL are typically ABC [42,48], show recurrent gains of 16p and losses of 1p and of 9q21 [36,49], the latter being involved in chemoresistance [37], and display downregulation of extracellular matrix-related genes and upregulation of neurological function-related genes [48]. Addition of rituximab to CHOP improved the survival of CD5-positive DLBCL patients [50]; however, similarly to our results, the outcome of these patients is still significantly poorer compared to CD5-negative DLBCL patients [51], and the rate of CNS involvement seems not to be lowered by rituximab [52]. 
A recent very large retrospective report on 879 R-CHOP-treated DLBCL cases convincingly showed CD5 to be an IPI (and bcl2 and pSTAT3)-independent prognosticator in DLBCL as well [53] and pointed out distinct clinico-pathological peculiarities of such patients, such as increased age, bone marrow spread, poor performance status, and B symptoms. Considering the possible direct biological effect of CD5 on B cells, namely its role as a negative regulator of B cell signaling, its influence on the ERK, PI3K, and calcineurin pathways as well as survival stimulation through autocrine IL10-related loops and the predominant expression of integrin beta-1 on the tumor cells, CD5 seems to be of probable functional and therapeutic importance for targeted approaches [40,[54][55][56]. In addition, CD5-positive cases seem to overexpress bcl2, CARD11, CCND2, and FOXP1 at the protein and mRNA level and to be richer in c-Rel, p65, and pSTAT3 [53], all known to identify DLBCL patients at risk; this study [53] also confirmed [48] downregulation of cellular adhesion genes in such instances. Taken together, previous data and our observations might justify a separation of CD5-positive DLBCL out of the group of DLBCL, NOS, as a distinct clinicopathological entity in need of R-CHOP treatment alternatives and, probably, CNS prophylaxis.
The prognostic role of FOXP1 in DLBCL was well established in the "pre-rituximab" era ( [45] and references therein), while less attention has been paid to it in R-CHOP-treated cases. Importantly, prognostically relevant COO algorithms pay special attention towards expression of FOXP1 to classify non-GCB-like DLBCL and >90 % concordance with GEP was only achievable by consideration of FOXP1 in these algorithms (e.g., [29,44]). In line with these results, the recent report on the very poor prognosis of DLBCL reciprocally expressing the endocytic protein Huntingtin-interacting protein 1-related (HIP1R) and FOXP1 (the latter being a direct repressor of the HIP1R gene), i.e., FOXP1(hi)/HIP1R(lo) patients [57], and our prospective study findings suggest a more substantial relevance of FOXP1 in DLBCL. Importantly, FOXP1 belongs to the most reproducibly assessable markers in DLBCL as shown in an international inter-and intrainstitutional and inter-and intra-observer study [58], further calling for its regular evaluation.
Unexpectedly, a significant (33 % for FISH and 50 % for aCGH) dropout of cases for genotypic studies was noted. Detailed analysis of these cases revealed that pre-analytic conditions like inappropriate application of un-buffered formalin, fixation duration, surrounding temperature, and exact dehydration procedures were probably more relevant to the lack of analytic success than the exact amount of examined tissue. Indeed, these failures were evenly distributed between core needle biopsies and lymphadenectomy specimens but were more commonly observed among tissues from a few centers. As expected, diagnostic tissue obtained by core needle biopsy procedures (usually 14-18G needles) was not arrayable and was rapidly exhausted for purposes of the study, precluding further analyses. Since cohorts of prospective clinical trials are characterized by meticulous documentation and uniform treatment of patients (the latter, if not uniform, can more substantially affect disease prognosis than many biomarkers), biomarker analyses should desirably be performed on cases collected within such studies. Therefore, the amount and the pre-analytical handling of tissue required for study inclusion must be considered also under the aspect of biomarker analyses. This particularly implies that physicians obtaining and handling the respective biopsies as well as the pathology laboratories must take responsibility for error-free and safe pre-analytic conduct, guaranteeing optimal tissue fixation and dehydration, which are indispensable for an accurate morphologic, phenotypic, and genetic analysis. For practical purposes, the protocol for probe handling from the laboratory that provided probes with the least dropout on molecular testing is given in Additional file 1: Table S1.
Conclusions
In summary, distinct biomarkers like CD5 and FOXP1 are able to prognosticate DLBCL course at diagnosis, independent of stage and IPI and independent of initial therapy response. For the design of prospective DLBCL studies, issues like review of the slides by a central pathology, pre-analytic factors such as time to and time of fixation, choice of fixative, and dehydration as well as handling of biological entities and sub-entities in the spectrum of aggressive large B cell lymphomas should be properly discussed and promptly addressed.
Additional file
Additional file 1: Table S1. Summary of pre-analytics in the lab, submitting probes with least number of molecular testing dropouts.
Bullous pemphigoid as an injection site reaction of glatiramer acetate.
Glatiramer acetate (GA) is one of the well-tolerated disease-modifying therapeutic options, commonly administered subcutaneously in patients with multiple sclerosis (MS). The current study aimed to describe a bullous pemphigoid (BP) skin reaction in a patient with MS receiving treatment with GA.
A 29-year-old woman with MS, who had been receiving GA treatment for the past nine months, was admitted to our MS clinic on November 11, 2018, due to itching skin eruptions at the site of injection. Her disease had started on February 12, 2017, with left optic neuritis; because of six periventricular and eleven juxtacortical brain magnetic resonance imaging (MRI) lesions without any enhancement, a lumbar puncture was performed. Due to positive cerebrospinal fluid (CSF) oligoclonal bands, MS was diagnosed. At that time, she refused to start disease-modifying treatment. GA had been started nine months before admission.
After dermatologic consultation, the dermatologist defined the lesions as fluid-filled and blistering at the site of injection without mucosal involvement (Figure 1).
Figure 1. Large, fluid-filled blisters
Timeline: Skin lesions appeared exactly at the GA injection site.
The edematous papillary dermis showed congested blood vessels with mixed perivascular inflammatory cell infiltration (Figure 2). Direct immunofluorescence showed continuous linear IgG and partial C3 deposition in the basement membrane zone; immunoreactivity with anti-IgA and anti-IgM was negative (Figure 3), and BP was diagnosed in the patient. Therapeutic intervention: One month after GA discontinuation, the skin lesions resolved completely.
Follow-up and Outcomes:
No scarring appeared at the lesion site after recovery.
BP is generally described as an immune-mediated skin disorder. Autoimmunity against the BP antigens BPAg1 and BPAg2, in the lower layer of epidermal keratinocytes, characterizes the pathogenesis of BP. 1 The incidence of BP is reported to be 14 to 43 cases per million population in Europe. 2 BP is associated with neurological disorders such as MS, 1 Parkinson's disease, and dementia, as well as with cardiovascular disease. 1 A cross-reaction between autoimmunity against BPAg1 and neurological disorders has been hypothesized. The frequency of MS increases in patients with BP both during and after diagnosis. 1 On the other hand, the risk of MS in patients with BP is reported to be six times higher than that of the matched general population. 1 BP mainly affects elderly patients, and in the first reports of BP comorbid with MS, the mean age at skin reaction onset was reported to be 49 and 62 years; 3,4 however, in the current study, skin reactions were detected only nine months after MS diagnosis, at the age of 29 years.
In previous reports, 3,4 no association was identified between skin eruptions and the site of injection. Another case report of MS and BP described skin eruptions at the site of an indwelling catheter in a bedridden patient. 5 It seems that BP is more often comorbid with MS than with other autoimmune disorders. Changes in the earlier stages of the disease and the relationship between its pathogenesis and clinical course remain unknown. Nonetheless, since BP mortality increases over time, 1 the prognosis of MS may be affected. To the best of our knowledge, this is the first report of BP as an injection site reaction of GA. Therefore, patients with MS should be asked about any injection site reactions.
Conflict of Interests
The authors declare no conflict of interest in this study.
Depression and Anxiety Change from Adolescence to Adulthood in Individuals with and without Language Impairment
This prospective longitudinal study aims to determine patterns and predictors of change in depression and anxiety from adolescence to adulthood in individuals with language impairment (LI). Individuals with LI originally recruited at age 7 years and a comparison group of age-matched peers (AMPs) were followed from adolescence (16 years) to adulthood (24 years). We determine patterns of change in depression and anxiety using the Child Manifest Anxiety Scale-Revised (CMAS-R) and Short Moods and Feelings Questionnaire (SMFQ). In addition to examining associations with gender, verbal and nonverbal skills, we use a time-varying variable to investigate relationships between depression and anxiety symptoms and transitions in educational/employment circumstances. The results show that anxiety was higher in participants with LI than age matched peers and remained so from adolescence to adulthood. Individuals with LI had higher levels of depression symptoms than did AMPs at 16 years. Levels in those with LI decreased post-compulsory schooling but rose again by 24 years of age. Those who left compulsory school provision (regardless of school type) for more choice-driven college but who were not in full-time employment or study by 24 years of age were more likely to show this depression pathway. Verbal and nonverbal skills were not predictive of this pattern of depression over time. The typical female vulnerability for depression and anxiety was observed for AMPs but not for individuals with LI. These findings have implications for service provision, career/employment advice and support for individuals with a history of LI during different transitions from adolescence to adulthood.
Introduction
Language impairment (LI) is a neurodevelopmental disorder that affects around 7% of the population and can take different forms, affecting either expressive language alone or both expressive and receptive language. Here, we examine vulnerabilities to depression and anxiety symptoms from adolescence to adulthood in a prospective longitudinal investigation of children with LI who were attending language units when they were 7 years of age. In the light of previous evidence that individuals with this disorder experience higher levels of anxiety than their typically developing peers, we expected to find high levels of anxiety symptoms continuing into early adulthood. Given previous evidence of a reduction in hitherto high symptoms of depression in those with LI at around 16-17 years of age, we sought to determine whether or not this improvement was enduring; a strong possibility was that environmental adversity, such as poor employment circumstances, could impact negatively. In addition to examining verbal and nonverbal skills, we developed a time-varying variable to investigate how transitions from school to employment between adolescence and early adulthood relate to patterns of depression and anxiety symptoms during the same developmental period. Given the widely reported finding that females are significantly more likely to develop depression and anxiety than males [29,30], we also investigate whether the pathways observed differ by gender.
Participants
Participants were recruited as part of a large-scale longitudinal research programme which began when the children with LI were 7 years of age [31,32]. At 16 years of age, a typically developing group of young people was recruited as a comparison sample.
Young people with LI. The initial cohort of 242 children with LI originally consisted of 186 boys (77%) and 56 girls (23%), and was recruited from 118 language units across England. They represented a random sample of 50% of all 7-year-olds attending language units. Language units are specialist resource classes, attached to regular schools, for children who have been identified with primary language difficulties. The language profiles of the children at recruitment indicated mostly mixed Expressive-Receptive difficulties (53%) and Expressive difficulties only (38%). The remaining children had poor receptive language scores and social communication difficulties. During adolescence, individuals in this group took part in follow-up stages at age 16 (N = 139), age 17 (N = 90) and age 24 (N = 84). Although some attrition occurred over this time, this was partly due to funding constraints/sub-sampling at follow-up stages of the study at 17 years of age. In addition, some participants who had taken part at age 17 were not traced at age 24 (N = 27; these individuals had data available from 16 and 17 years), and not all of those taking part at age 24 took part at age 17 (N = 21 came back into the study and thus have data at 16 and 24 years). There were no significant differences in receptive or expressive language or nonverbal IQ (NVIQ) at age 7 between those who participated at age 24 and those who did not (all p values >0.2). Attrition was higher for males (60%) compared to females (41%) (χ²(1) = 7.5, p = .006), but the proportion of males (67%) was not significantly different from that in the age-matched peer group (56%; Fisher's exact p = 0.16). Participants were included in the study if data were available for at least two of the three time points (16, 17, 24 years), resulting in 107 participants with LI (74 males, 33 females) for the growth curve analysis.
In total 59 (55%) of these had data at all three time points, whilst the remainder had 2 data points available (see breakdown above).
Age-matched peers (AMP). The comparison group comprised 99 age-matched peers (AMP; 58 males, 41 females) with data for at least two of the three time points for use in the growth curve analysis. This group was recruited to the study aged 16, wherever possible from the same schools as the young people with LI. Thus, no early developmental information about language ability at 7 years of age is available. As with the LI group, participation varied at age 16 (N = 121), age 17 (N = 90) and age 24 (N = 66). Some participants had data from age 16 and age 17 (N = 33), some had data from age 16 and age 24 (N = 10), and others had all three data points available (N = 56; 56%). These participants had no history of special educational needs or speech and language therapy provision. Groups did not differ on age, gender, household income at age 16 (p = .80) or personal income at age 24 (p = .40). As expected, language and NVIQ profiles were different for the groups at each time point (Table 1).
Measures
For anxiety symptoms, the self-report version of the Child Manifest Anxiety Scale-Revised (CMAS-R [33]) was completed at each time point. This is a 28-item questionnaire designed to measure anxiety symptoms in young people aged 6-19 years. Respondents are required to say whether statements are 'true' or 'not true' for the previous 3 months. The threshold for clinical-level difficulties on this measure is a score above 18.
Depression symptoms were assessed using a self-report version of the Short Form Moods and Feelings Questionnaire (SMFQ [34]), a 13 item questionnaire designed to measure depressed mood in young people aged 8-17. Respondents are required to say whether statements about their feelings were 'definitely true' 'somewhat true' or 'not true' over the previous three months. Both these scales have been used in studies involving young adults and were deemed to remain appropriate for our participants at age 24 years (e.g. [35]). The threshold for clinical-level difficulties on this measure is a score above 7.
For language abilities, the Clinical Evaluation of Language Fundamentals (CELF-R [36] at age 16, CELF-4 UK [37] at ages 17 and 24) was used. To afford measurement continuity the CELF-4 UK was deemed the best fit assessment for our cohort at 24 years of age (neither group reached ceiling levels on this assessment, which is normed up to age 21 years 11 months). The Word Classes subscale for receptive language and Recalling Sentences subscale for expressive language were used at all three time points.
For nonverbal skills, the Wechsler Intelligence Scale for Children (WISC-III [38]) was used at 16 years and the Wechsler Abbreviated Scale of Intelligence (WASI [39]) was used at 17 and 24 years.
Ethics and procedure
Ethical approval was obtained from The University of Manchester Research Ethics Committee, UK. Written informed consent was obtained from parents or guardians on behalf of the participants enrolled in the study under the age of 18 years. Written informed consent was obtained from the participants themselves at or over the age of 18 years. The participants were interviewed face-to-face at their school or home on the measures described above as part of a wider battery. Interviews took place in a quiet room, wherever possible with only the participant and a trained researcher present. During the interview, the items were read aloud to the participants. The items and response options were also presented visually to ensure comprehension.
Statistical analysis
A 3-way ANOVA approach was used in the first instance for ease of understanding and interpretation. We report Wilks' Lambda statistics because Mauchly's test for sphericity was significant in all cases [40]. However, we are aware that the lack of sphericity, combined with incomplete data in places, means that these ANOVAs are likely to underestimate longitudinal effects in this dataset [41]. Thus, to confirm these findings, targeted linear mixed (growth curve) modelling (LMM) was used, as this approach affords modelling accounting for attrition across time. A mixed effects model with a maximum likelihood (ML) estimator was used. This allowed the intercept (depression or anxiety symptoms at baseline) and slope (the rate of change) to vary across individuals. That is, we allowed starting values of depression or anxiety to vary between individuals and also allowed individuals to change at different rates over time. Models were run using the "xtmixed" command. The random part of all models included participant ID and a first-order polynomial (time). Figures reported are unstandardized beta values with 95% confidence intervals. We acknowledge that the LMM analysis makes different assumptions about the correlation and homoscedasticity of the data and also different assumptions about missing data. However, we have included both the ANOVA and the LMM analyses to demonstrate the robustness of the findings. We are also aware that these two methods may be familiar to different audiences, and we thought that providing both ANOVA and LMM results would make the findings as accessible as possible.
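As a minimal sketch of the growth-curve specification described above, the fixed part (intercept plus linear and quadratic time) can be fit by least squares on long-format data; the scores below are made up, and the published models additionally allow the intercept and slope to vary across participants (the random part, fitted in Stata's xtmixed).

```python
import numpy as np

# Hypothetical long-format data: two participants, each assessed at 16, 17, and 24 years
age = np.array([16, 17, 24, 16, 17, 24], dtype=float)
dep = np.array([9.0, 6.0, 8.0, 4.0, 3.0, 3.5])    # made-up SMFQ depression scores

t = age - 16                                       # time since the age-16 baseline
X = np.column_stack([np.ones_like(t), t, t * t])   # intercept, linear, quadratic time
beta, *_ = np.linalg.lstsq(X, dep, rcond=None)     # fixed-effects estimates only
fitted = X @ beta                                  # model-implied average trajectory
```

A negative linear together with a positive quadratic coefficient reproduces the dip-then-rise pattern of depression reported for the LI group.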
In addition to the key outcome measures, for the LI group only, concurrent language and IQ scores taken at each time point were regressed onto depression at each age. We also developed a time-varying variable to capture educational and employment transitions of young people with LI at ages 16, 17 and 24 years (referred to as the 'Transition' variable for ease). The items included in this variable are shown in Table 2; they were binary coded (0 for more mainstream, 1 for less mainstream situations) and then used as a within-subject profile of transition for each participant, allowing us to look at variation in circumstance across time. Each participant was therefore given a grouping classification for transition (000, 001, 010, 100, 011, 110 or 111). This factor was then used as an independent variable in the modelling. Note that the term 'Time-varying' does not suggest time-point as a variable. The variable is included in the model as a single variable, but its value is not constant over time and may change from one assessment to the next for any particular individual. It is therefore used as a categorical predictor and is not modelled. Since it is included as a single predictor, without any interaction with time, a single time-constant coefficient is estimated, which is what we report in the results. All statistical analyses were conducted in SPSS v 22.0 [42] or Stata/SE 13.1 [43]. A two-tailed significance level of p = .05 was used unless otherwise specified. Different statistical analyses involve different numbers of participants depending on whether data at all 3 time points (3-way ANOVA) or 2 out of 3 time points (growth curves) were required.
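The binary transition coding described above can be sketched in a few lines; the status values are illustrative, with each time point (16, 17, 24 years) coded 0 for more mainstream and 1 for less mainstream circumstances.

```python
def transition_profile(less_mainstream_flags):
    """Concatenate per-time-point binary codes into one of the profiles 000..111."""
    return "".join("1" if flag else "0" for flag in less_mainstream_flags)

# Example: mainstream situations at 16 and 17, but not in full-time
# employment or study at 24 -> profile "001"
profile = transition_profile([False, False, True])
```

Each participant's profile string then serves as a categorical predictor, with a single time-constant coefficient estimated, as described in the text.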
Group and gender differences
Descriptive statistics, including the percentage of individuals above the clinical threshold for anxiety and depression symptoms, are shown in Table 3. Anxiety and depression scores correlated highly with each other at each time point (Spearman's r = .7 to .8), while within-anxiety and within-depression correlations across time points were moderate (r = .3 to .6). A similar pattern was observed between the different language and nonverbal measures. In contrast, associations between language/nonverbal measures and depression/anxiety measures were weaker overall (r = .1 to .4).
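The correlations reported here are pairwise Spearman rank coefficients; as a minimal sketch of the computation (using simulated scale scores, not the study's data):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
anxiety_16 = rng.normal(size=100)  # simulated anxiety scale scores at 16
# depression built to correlate with anxiety, as observed in the paper
depression_16 = 0.8 * anxiety_16 + rng.normal(scale=0.6, size=100)

rho, p = spearmanr(anxiety_16, depression_16)  # rank-based correlation
print(round(rho, 2))
```

Spearman's method is appropriate for symptom-scale data because it depends only on ranks, not on the scores being normally distributed.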
Growth curve models for depression: Groups separate
Given the longitudinal nature of the data and subsequent attrition in our sample, we then sought to confirm the significant Group x Time interaction for depression using growth curve models. As mentioned before, growth curve models make different assumptions about the correlation and homoscedasticity of the data and also, importantly, different assumptions about missing data. Although there is debate about whether three time points are ideal for this analysis, in this instance it allowed us to examine whether the different patterns of change were robust when missing data were modelled. Since we used depression scores as the outcome, all coefficients reported below are group differences or regression coefficients (estimated in a mixed effects context).
Growth curve models were run separately for the LI and AMP groups for depression, using participants who had data from at least 2 of the 3 time points available. Predictors were linear time, quadratic time, and gender. For LI participants, linear (β-1.
Lack of associations between depression and verbal and nonverbal skills
Next, because depression in the LI group was of most interest, we tested whether the language and IQ variables predicted this outcome for the LI group only at each of the three time-points separately. None of the predictors were significant: receptive language (16 years:
Transition variable predicts changes in depression in the LI group
The time-varying Transition variable was a significant predictor of change in depression (β = 1.6 [0.3, 2.8], p = .013). Young people with LI who moved out of school (regardless of
Discussion
This study revealed differences in the depression and anxiety pathways of young people with LI from adolescence to adulthood. On the one hand, anxiety symptoms stay stable across time for both groups, with the LI group experiencing higher levels of anxiety than their peers from adolescence to adulthood. On the other hand, depression shows a more complex picture, with LI participants experiencing a lessening of symptoms at 17 which is not maintained in adulthood. This picture is mirrored in the number of individuals scoring above clinical thresholds on the measures: more than a third of young people with LI fall into this higher-risk group at 16 and 24 years of age, compared to 15-18% of typically developing individuals. Furthermore, the change in depression over time is associated with a particular pattern of changing experience: one in which the pressures of compulsory education are alleviated by more choice-driven college attendance or work experience, only to rise again as employment difficulties become more apparent in adulthood. These findings support the smaller-scale research carried out by Rutter and colleagues (e.g. [5]) and that of Beitchman's team (e.g. [8,44]), who report higher mental health risk, particularly anxiety, in teenagers and young adults with LI. This longitudinal investigation further specifies that, for anxiety, symptoms remain stable from adolescence to adulthood. In contrast, for depression, the change is complex and non-linear for those with LI. This is not the case for age-matched peers, where both depression and anxiety symptoms are stable. Neither language nor nonverbal abilities were significant associates of the depression observed for LI, nor of levels of anxiety at any time point. This is in keeping with previous research on this cohort which showed that, while some weak relationships existed between early language and mental health, this is not an important predictor of depression and anxiety outcomes per se [13,18].
Whilst NVIQ differed between the groups and had lowered over time [45], it was not a predictor of outcome. On the other hand, our analysis does indicate that environmental factors interact with mood vulnerability in this group. Our time-varying variable suggests that different patterns of school and employment transition relate more closely to depression symptoms than language or IQ. In particular, young people moving from school provision into college, who later find themselves without full-time employment, show a pattern of fluctuating symptoms. Whilst our data do not conclusively speak to why this may be the case, it has been suggested that young people with LI may be more satisfied with lower formal educational outcomes [46] and that a more central construct is peer relationships and friendships [26], which college attendance may have afforded. Because the AMP group showed a stable pattern of depression, we did not further investigate the effect of employment on emotional health more generally. A link between depression and employment status has been reported in the general population; however, it should be noted that the direction of association is not clear, with suggestions that emotional health influences employment status rather than vice versa [47]. The overall unemployment rate for our AMP group was 7%, with a further 19% in part-time work, which is lower than the rates for young adults with LI (48%; see Table 2). Future research is needed to examine whether differences in employment across groups could be a cause or an outcome of different patterns of emotional health over time. Those with LI also appear to carry a larger burden of anxiety regardless of these transitions or the severity of their LI (recall that language performance per se did not predict anxiety). These results are consistent with other analyses of mental health and language in individuals with LI at different ages [9,20].
They suggest that differences in depression and anxiety symptoms may be part of an inherent co-morbidity that, at least in the case of depression, interacts with environmental factors. It is of course extremely difficult to disentangle the issue of aetiology and phenotype, but it is worth noting that there are alternative explanations to a psychosocial model in which the experience of living with a language difficulty leads directly to emotional health symptoms. This is the first study to examine mental health changes from adolescence to adulthood in LI in relation to changes in environmental contexts, and it reveals important information for policy and practice. First, the results suggest that individuals with LI are at increased risk of mental health issues over a prolonged period from adolescence to adulthood and that males and females are equally vulnerable in this population. Awareness of these links is needed in public health services such as community-based doctors/GP practices, mental health teams and social service providers. To our knowledge, in the UK, mental health service providers do not routinely inquire about individuals' history of language difficulties. Second, it raises the possibility that young people and adults with LI could benefit from improved mental health, in particular reduced depression symptoms, if education/employment transitions are managed more effectively, and might avoid the need for later service input. Attention to the educational experiences and school transition plans of individuals with LI could aid prevention. For example, school transition plans may need to consider mental health support more explicitly, and support workers need to be aware that talking therapies may need to be adjusted to take into consideration language abilities, particularly comprehension. Career/employment advice and support in early adulthood could also help mitigate the development and severity of depression symptoms.
Such services are available in adulthood for individuals with intellectual and other disabilities such as autism [48]; however, individuals with LI do not usually meet the criteria for such services. LI is a hidden disability [49]: individuals with LI have no outward sign of their difficulties and can fall between two stools in terms of access to support.
Participants with LI in this study were recruited when they were attending language units and therefore represent a group of children with severe and persistent LI. Longitudinal examination of the mental health pathways of individuals with less severe childhood LI would be useful. There are indications, for example, that mental health outcomes in adulthood of individuals with mild to moderate childhood LI may be comparable to those expected in the general population [17], and a wider-ranging sample may reveal stronger associations between emotional health and language. Aspects of the measurements we used could also be built upon. It is worth acknowledging that our 'transition' variable was relatively crude, and further studies could include more detailed elements of the transition process. As discussed earlier, it would be interesting to research the reasons for the relationship between life transitions and emotional health. Finally, growth curve modelling enables examination of longitudinal patterns where participant attrition has occurred. However, these models were used in a confirmatory manner, and we acknowledge that increased assessment points at more evenly spaced intervals should be the aim of future studies. In addition, while we are confident that individuals in our original sample who did not continue to participate into adulthood were no different in the early years from those included in this investigation (at least in terms of language and NVIQ; see Method), it is not possible to tell whether the pathways of these individuals would have been qualitatively different. In short, data may not be missing at random, but may represent individuals showing important developmental trends not captured here. Further research is needed to replicate our results, which introduce the notion of time-varying factors affecting mental health when vulnerabilities are already present.
Nonetheless, this study into the longitudinal pathways of individuals with LI has helped highlight the long-term risk of anxiety and the complex nature of depressive symptoms in this group.
Endophytes: As Potential Biocontrol Agent — Review and Future Prospects
Endophytes are microbes residing within host tissues without causing visible disease symptoms. They are involved in a balanced interaction with plants, providing benefits such as growth enhancement and disease resistance. In this review we hypothesize that endophytes can be employed as potential biocontrol agents, as biocontrol is becoming the most suitable disease management strategy owing to its health and environmental benefits. This aspect of endophytes should be considered; several investigations have revealed and supported the role of endophytes as effective biocontrol agents. The mutualistic interaction of endophytes involves different mechanisms: it may trigger certain genes involved in induced systemic resistance (ISR), initiating a defense mechanism against the attack of pathogens, or it may produce secondary metabolites and other chemical compounds that are directly toxic to pathogens. There is a need to explore the endophytic interaction and its mechanism of conferring disease resistance more precisely.
Endophytes
The term endophyte was defined by Petrini (1991) to mean organisms that colonize the internal tissues of a host without causing symptoms, with the possibility that an endophyte may cause disease after completing a latency period. The word endophyte literally means "within plants" (in Greek, endon = within and phyton = plant). Endophyte is a broad term with respect to its literal meaning, host plants and inhabitants, covering fungi (Stone, Bacon and White, 2000), bacteria (Kobayashi & Palumbo, 2000), insects (Feller, 1995) and algae (Peters, 1991). Endophytes colonize plant tissues internally (Carroll, 1986) without causing visible disease symptoms. They live in symbiotic interaction with plants, and these interactions vary, ranging from facultative saprobe to parasite to mutualist. Nearly all endophytic interactions, however, provide nutritional benefits and protection against environmental and microbial stresses (Schulz & Boyle, 2005). Endophytes are found in different regions, such as temperate and tropical regions and boreal forests (Zhang et al., 2006). Arbuscular mycorrhizal fungi are present extensively throughout the terrestrial ecosystem, and fossil records and molecular analyses show their association with plants from their origin millions of years ago (Redecker, Kodner, & Graham, 2000). Mutualistic bacteria have been identified in both monocots and dicots, ranging from higher plants such as oak and pear to herbaceous crops like sugar beet and maize (Ryan et al., 2007).
Endophytic bacteria found within the plant system are dynamic, varied, and diverse (Sturz et al., 1997). For such a plant-endophyte relationship to be stable and successful, some form of synchronization must be present. Bacterial endophytes live, adapt and survive within the suitable environment provided by the host plants, and the host plants also benefit from this partnership through, for example, growth promotion and protection (Shishido et al., 1995).
Endophytic Diversity
Endophytes show more diversity and abundance than plant pathogens within plant systems (Ganley et al., 2004). These symbionts are very diverse, and only a small number of them have been characterized (Rodriguez et al., 2009). Endophytes mostly belong to the phyla Basidiomycota and Ascomycota, and they may be from the orders Hypocreales and Xylariales of the classes Sordariomycetes or Loculoascomycetes (Unterseher et al., 2011).
Grasses mostly host endophytic fungi belonging to the family Clavicipitaceae, tribe Balansiae. There are five genera and about 30 species in the tribe (Luttrell & Bacon, 1977). The genera Atkinsonella and Myriogenospora contain only one species each, while the genera Balansia, Balansiopsis and Epichloe' contain more than one species; Balansia is the most diverse of all, having 15 species (Diehl, 1950). These genera are classified on the basis of conidia formation (Clay, 1986). These fungi are termed endophytes and are found in host meristems, young leaves and inflorescences (Leuchtmann & Clay, 1988). However, most species invade vegetatively, running parallel to the long axis of host leaf and stem tissue cells (Clay, 1989).
Arbuscular mycorrhizal fungi (AMF) are part of the mutualistic rhizosphere; these micro-symbionts are involved in improving plant nutrient uptake and provide protection against different stresses (Smith & Read, 1997). AMF comprise biotrophic Glomeromycota associated with different species of plants (Van der Heijden et al., 2015).
Reviews of previous studies on bacterial endophytes have characterized some of the bacterial types isolated from within plant tissues after surface cleaning with a disinfectant such as sodium hypochlorite (Miche & Balandreau, 2001). The diversity of endophytes extracted from poplar trees has been described in one study (Porteous-Moore et al., 2006). Five taxa of endophytic bacteria were identified as Microbacterium, Pseudomonas, Clavibacter, Curtobacterium and Cellulomonas by molecular techniques such as gene sequencing and by fatty acid analyses (Zinniel et al., 2002). A number of bacterial endophytes have been extracted from the vascular tissues of citrus varieties, such as E. aerogenes, Acinetobacter baumanii, Bacillus spp., Burkholderia cepacia, Citrobacter freundii, Corynebacterium spp., Arthrobacter spp., Enterobacter cloacae, Pseudomonas aeruginosa, Acromobacter spp., Acinetobacter iwoffii and Alcaligenes-Moraxella. Some studies have concluded that bacterial endophytes are polyphyletic, belonging to a vast range of taxa such as Actinobacteria, α-Proteobacteria, β-Proteobacteria, γ-Proteobacteria, and Firmicutes (Miliute et al., 2015).
Rhizobacteria are also included among bacterial endophytes, playing a vital role in host plant survival (Dobereiner, 1993).
Mode of Action
A number of studies have been done, but how endophytes affect plant disease severity is still unknown (Busby, Ridout, & Newcombe, 2016). Induction of host defense mechanisms is considered to happen first, as bacteria (Sequeira et al., 1977), nematodes (Kosaka et al., 2001), viruses (Ross, 1961) and fungi (Pozo et al., 2002) induce plant defense mechanisms such as systemic acquired resistance (SAR) and induced systemic resistance (ISR) (Van Wees et al., 2000). For example, the fungus Colletotrichum tropicale has stimulated hundreds of genes whose expression conferred greater plant immunity in Theobroma cacao (Mejia et al., 2014).
Endophytes can also reduce the defense mechanisms of the plant, allowing other pathogens to cause disease (Houterman et al., 2008). Many studies have shown suppressive effects of endophytes due to competition or endophytic metabolites (Martin et al., 2015). For example, Ampelomyces spp. suppress powdery mildew sporulation (Kiss, 2003).
Induced systemic resistance (ISR) is a unique way by which endophytes enhance plant defenses against a number of pathogens. Various root-inhabiting mutualists, such as Trichoderma, Bacillus, mycorrhizal species and Pseudomonas, trigger the immune system of the plant for enhanced defenses against pathogens (Pieterse et al., 2014).
"Induced resistance" is a term used for the resistance stimulated by chemical or biological agents, which helps the plants to fight against the pathogen attacks in the future (Kuc, 1982).ISR is only initiated when endophytes colonizes the root system of host plants (Lugtenberg & Kamilova, 2009).Biofilm formation is important for the root establishment of B. subtilis, polysaccharides of host cell wall stimulates the matrix production by triggering the bacterial genes (Beauregard et al., 2013).
The endophyte adopts a new lifestyle for the sake of survival in the dynamic medium of the host cells, guided by host-specific metabolic cues (Lahrmann et al., 2013). Trichoderma spp. establish around plant roots, where they form an appressorium-like structure, an important characteristic of pathogenic fungi (Mukherjee et al., 2013). Pseudomonas, Bacillus, and Trichoderma strains, in establishing themselves around plant roots, use auxin as a triggering agent for the formation of large numbers of lateral roots, which helps in better nutrient uptake and defense against pathogens (Contreras-Cornejo et al., 2009).
Endophytes are also responsible for producing bioactive compounds that contribute to their biocontrol activity (Akinsanya et al., 2015). The endophytic fungus Phomopsis spp. produces a number of secondary metabolites, including antimicrobial and antifungal compounds (Erbert et al., 2012). Biologically active xanthones were found in the fermentation products of Phomopsis spp. (Yang et al., 2013).
Dependency
A review of the literature shows the dependency of endophytes on biotic and abiotic factors, on the host and on the pathogen (Busby et al., 2016). Environmental factors such as humidity, pH and temperature affect the endophytic interactions of fungi (Cook & Baker, 1983). For example, Trichoderma activity is influenced by soil moisture (Jones & Bienkowski, 2015), and Candida activity is affected by atmospheric conditions when deployed against an apple pathogen (Usall et al., 2000). In one trial, variations were observed in endophytic activity against Dutch elm disease, indicating that there may be abiotic factors involved that influence endophytic activity (Martin et al., 2015).
Nonconducive or poor soil conditions are thought to affect the biological control activity of endophytes against plant pathogens (Handelsman & Stabb, 1996).
The disease triangle consists of three components: host, pathogen and environment. Each component must be present for disease to occur, and besides the environment, the other two components also influence the activity of endophytes. In the case of rust disease, the influence of these components has been observed (Nischwitz et al., 2005; Kiss, 2003).
In one experiment, however, it was shown that the pathogen matters most. Colletotrichum gloeosporioides and Pestalotia psidii were tested against fifteen endophytic species: fourteen of them showed antagonism against C. gloeosporioides, while nine were antagonistic against P. psidii (Pandey et al., 1993). In another study, nine endophytic species were tested against the following wheat pathogens: Drechslera tritici-repentis, Alternaria triticimaculans, Zymoseptoria tritici, and Bipolaris sorokiniana. All nine showed full antagonism against Zymoseptoria tritici and Drechslera tritici-repentis, eight against Bipolaris sorokiniana, and four against Alternaria triticimaculans (Perello et al., 2002).
Claims
In this review we claim that endophytes are potential biocontrol agents, and our claim is based on the research literature reviewed here.
Endophytes are believed to have biocontrol potential against plant pathogens (Sapak et al., 2008). The presence of endophytes in plant systems provides beneficial effects (Ting et al., 2010). Many studies have concluded that endophytes act as potential biocontrol agents, mainly against pathogens of vegetable and fruit crops, as in the case of Chinese cabbage (Narisawa et al., 1998). Endophytes show biocontrol properties against pathogens of tomato (Hallman & Sikora, 1995), banana (Ting et al., 2008) and barley (Boyle et al., 2001). Biocontrol properties are also shown by endophytes controlling Ganoderma boninense in oil palm (Sapak et al., 2008).
A number of investigations have revealed that endophytic fungi can be used as a biocontrol tool (Sikora et al., 2008). Endophytic fungi play an antagonistic role and minimize the threat of nematode attack (Sikora, 1992); for example, endophytic Fusarium oxysporum decreased the number of nematodes on banana (Sikora et al., 2008).
Many studies have also concluded that some bacteria, including endophytic bacteria (EB), enhance the symbiotic activity of AMF in the host and can be used as biocontrol agents against plant pathogens (Azcón-Aguilar et al., 1998). It has therefore been recommended, and demonstrated, that AMF and EB be used together as biocontrol partners (Gianinazzi et al., 2010).
Endophytes include mutualistic symbionts that can be used as potential biocontrol agents of plant pathogens. The potential of endophytes during symbiont/host interaction has been revealed in a number of studies (Sturz et al., 2000).
Endophytic bacteria live in the same environmental conditions as many plant pathogens, such as vascular wilt pathogens, which is a positive aspect for endophytes serving as biocontrol agents. Extensive research on the biocontrol properties of microbes has revealed that endophytic symbionts extracted from plant tissues show potential as biocontrol agents against pathogens (Duijff et al., 1997), nematodes (Hallmann et al., 1998) and insects (Azevedo et al., 2000).
Biological control of plant pathogens has been observed and demonstrated on grasses having symbiotic associations with endophytes. In vitro and field demonstrations have been performed, and suppression of diseases has been noted in grasses associated with endophytes (Siegel & Latch, 1991). The potential use of endophytes as biocontrol agents has previously been highlighted by many investigators (Schardl, 2001; Sturz et al., 2000).
Why Biocontrol?
Because there are growing concerns about the detrimental environmental effects of the chemicals used to control plant diseases. They cause soil, water and air pollution, and are often made from expensive, non-renewable petrochemicals that have many adverse effects on the natural environment. Moreover, repeated chemical treatments are required for efficient control, which increases the economic cost (Clay, 1989). The use of chemicals to control diseases also favors the emergence of resistant mutants in pest populations. As a result, biocontrol has become an important integrated management strategy (Waage & Greathead, 1988).
The chemical compounds used for the management of plant diseases are not safe for a healthy environment. Consequently, we need to devise integrated management strategies, and biocontrol is employed as a reasonable strategy for disease management (Mejia et al., 2008).
Just as human pathogens are becoming resistant to antibiotics, plant pathogens have become resistant to many of the chemicals used for their control. With the excessive use of chemicals, pathogens have developed resistant strains. For example, Ustilago, Pythium, Phytophthora, Penicillium, Mycosphaerella, Sphaerotheca, Verticillium, Botrytis, Cercospora, Colletotrichum, Fusarium, Aspergillus and Alternaria are fungal pathogens that have developed strains resistant to the fungicides used against them (Agrios, 2005). Strains of Erwinia amylovora, the causal agent of fire blight, resistant to the antibiotic streptomycin have been known since the late 1950s (McManus & Jones, 1994).
Evidences
The claim that endophytes are potential biocontrol agents is supported by a number of lines of evidence from the literature reviewed. Some members of Acremonium spp. can colonize roots or shoots and decrease nematode populations, including Acremonium coenophialum (Pedersen et al., 1988), Acremonium lolii (Stewart, 1993) and Acremonium strictum (Goswami et al., 2008). Acremonium implicatum is a fungus that negatively affects Meloidogyne incognita, which causes root galls; this fungus was isolated from galls caused by Meloidogyne incognita (Lin et al., 2013) and also from eggs of Meloidogyne hapla (Figure 1).
Greenhouse demonstrations have revealed the effective role of F. oxysporum isolates in controlling R. similis in Uganda (Niere et al., 1998). Studies have proved the effectiveness of biological control of R. similis using Fusarium oxysporum in banana cultivars (Figure 2) (Pocasangre et al., 2000). Despite all of the progress briefly described here, endophytes still need the attention of researchers, as they could become an ultimate tool to handle plant diseases more effectively, especially the complex decline diseases that are a major threat to perennial plants and difficult to manage, e.g., the destructive mango wilt disease.
Finally, by considering the great potential of endophytes, a road map for future research can be designed.
Understanding of the plant-microbe interaction should be given primary importance, because knowing this interaction better could one day lead to the development of crop plants that interact with endophytes and other beneficial microbes more efficiently. Eventually, we can move towards our goal of sustainable agriculture.
Overview: Microbial amendment of remediated soils for effective recycling
In recent years, various methods with appropriate amendments, rather than conventional reclamation, are being considered to recycle deteriorated soils after remediation, for example as agricultural additions, backfill and construction materials. Among these amendments, microbial amendments with effective microorganisms (EMs) are known to improve soil qualities such as fertility, strength and toxicity so that the soils can be recycled for possible uses. This study indicates the possibility of recycling remediated soils by using these EMs most efficiently. Soil samples will be collected from sites contaminated with either heavy metals or petroleum and will be remediated by bench-scale soil washing and thermal desorption. The remediated soils will then be treated with easily obtainable inocula and substrates (culture media) from everyday sources, and these will be compared with commercial EM products in terms of cost and efficiency. Also, after treatment at a number of mixing ratios, the soil properties of (1) fresh, (2) contaminated, (3) remediated and (4) amended soils will be evaluated based on soil quality indicators depending on demands, and the optimal mixing ratios that are more effective than commercial EM products will be determined. The ratios derived from pre-tests could be applied to the remediated soils at pilot scale in order to assess suitability for recycling and to characterize the correlation between soil properties and microbial amendments with respect to contaminants and remediation, and furthermore for modelling. In conclusion, application of the established models to the recycling of remediated soils may help in disposing of remediated soils in the future, delivering environmental and ecological as well as economic value. © The Authors, published by EDP Sciences. This is an open access article distributed under the terms of the Creative Commons Attribution License 4.0 (http://creativecommons.org/licenses/by/4.0/). MATEC Web of Conferences 138, 04001 (2017) DOI: 10.1051/matecconf/201713804001
Introduction
Soils have been significantly contaminated with contaminants such as heavy metals or petroleum in many times and places since urbanization and industrialization. Various methods, such as soil washing and thermal desorption, are used to remediate contaminated soils, and aggressive and subsequent remediation methods are also being used to increase remediation efficiency. They remove contaminants from the soils by utilizing physicochemical, biological, and thermal processes and can satisfy soil remediation standards within a limited time. However, such processes could affect soil properties and degrade the quality of the soils. The effects of soil washing and thermal desorption, the most frequently used remediation methods, on the physicochemical and biological properties of remediated soil have been investigated in many previous studies. After soil washing, particle size, water holding capacity, electrical conductivity (EC), exchangeable cations (potassium, calcium, magnesium), organic matter, cation exchange capacity (CEC), total nitrogen, total microbial number and nutrient (manganese, zinc) bioavailability were decreased; after thermal desorption, acidity, EC, exchangeable cations, organic matter, CEC, total nitrogen, available phosphate and total microbial number were decreased (Yi et al., 2012, 2013). With an increase in EDTA concentration, the maximum void ratio, soil cohesion, arrangement directionality and the content of illite, albite and montmorillonite decreased, while the consolidation coefficient, compression modulus, internal friction angle, plastic limit, elastic limit and content of quartz increased (Wang et al., 2013). Aggressive remediation with high temperature affects the particle size distribution, mass loss, mineralogy and permeability of the soil, and it is very likely to affect dynamic behavior such as infiltration, permeability and shear behavior depending on the sample composition, sand only or a sand-clay mixture (Zihms et al., 2013). The range of maximum dry densities under compaction remained reasonably consistent; however, a clear reduction in the optimum moisture content was observed in the thermally treated soils (Khan et al., 2014). Micro-CT images have shown that high-temperature smouldering remediation makes the grain surfaces significantly smoother, and this change may explain changes that have been observed in dynamic soil grain-grain and grain-water interactions such as the permeability, cohesiveness, and shear strength of soils (Switzer et al., 2013, 2015). These results show that the remediation processes used to clean contaminated soils affect soil qualities. Commonly, most remediated soils have been used for backfilling or have been landfilled. Owing to the large amount of remediated soil and the associated treatment costs, it would be desirable to utilize remediated soils in, for example, agricultural soil addition, backfilling and construction materials. However, there are some functional problems in recycling remediated soils for such diverse usages. The remediated soil may not satisfy the soil criteria of the sources of demand and may be managed under insufficient regulation. Remediated soils have several problems, namely infertility, low strength and residual toxicity, depending on the demand: for agricultural addition and landscaping, fertility and toxicity may be inadequate; for construction materials, strength and toxicity; for replantation on roadside cut slopes, strength and fertility.
In response to changes in soil properties caused by remediation, soil materials have been amended both technologically and ecologically. To utilize cleaned soil for healthier and more value-added purposes, soil improvement and amendment are needed. Among the available amendments, microbiological techniques have the potential not only to amend the remediated soils but also to exert multiple additional functions. The uniqueness of microorganisms, their often unpredictable nature, and their biosynthetic capabilities under a specific set of environmental and cultural conditions have allowed them to solve particularly difficult problems in ecosystems. For many years, soil microbiologists and microbial ecologists have tended to separate soil microorganisms into beneficial and harmful groups, depending on their functions and how they affect soil quality. Microorganisms beneficial to soils were proposed by Teruo Higa of the University of the Ryukyus, Japan, who referred to them as 'Effective Microorganisms (EM)'. The EM concept denotes mixed cultures of beneficial, naturally occurring microorganisms that can be applied as inoculants to increase the microbial diversity of soils. A series of inoculations is made to ensure that the introduced microorganisms maintain their dominance over the indigenous populations. Originally, EM was used for agricultural compost and soil conditioning, with applications including the suppression of plant pathogens and diseases, conservation of energy in plants, solubilization of soil minerals, maintenance of the soil microbial-ecological balance, and biological nitrogen fixation (Higa et al. 1991). EM has since been developed for alternative uses in more sustainable soil management.
EM could also be used for bioremediation to remove residual trace contaminants in remediated soils. In one study, the TPH concentration before EM treatment was 323.8 mg/kg, whereas it was 102.1 mg/kg two days after EM treatment and 91.3 mg/kg six days after treatment (Lee et al. 2008). However, for inorganic toxic compounds such as heavy metals, microbes are unable to break them down into harmless compounds, and they should instead be selected according to their specialization for the type of contaminant. Some specific microorganisms are known to reduce toxicity through mechanisms such as biosorption, metal-microbe interactions, bioaccumulation, biomineralisation, biotransformation, and bioleaching (Dixit et al. 2015).
According to recent geo-biological research on Microbially Induced Calcium Carbonate Precipitation (MICCP), an alkaliphilic soil bacterium with a highly active urease consumes urea within the cell, decomposing it into ammonia (NH3) and carbon dioxide (CO2). These chemicals diffuse through the cell wall of the gram-positive microbe into the surrounding solution. The reactions occur spontaneously in the presence of water: ammonia is converted to ammonium (NH4+), and carbon dioxide equilibrates in a pH-dependent manner with carbonic acid, carbonate, and bicarbonate ions. The net increase in pH is due to hydroxyl ions (OH-) generated along with NH4+ production, which provides the alkaline environment and the carbonate required for the precipitation of calcite (CaCO3) in the presence of available Ca2+. The negatively charged bacterial cell is attracted to the soil particle surface by the higher concentration of nutrients adjacent to surfaces, in addition to the physicochemical properties of both the bacterial cell and the soil particle (DeJong et al. 2010). When diluted commercial EM and EM ceramics were mixed with cement, the compressive strength at 3 and 7 days was found to be 30 to 50% larger than that of the control (Sato et al. 2000).
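The ureolysis pathway described above can be summarized by the overall reactions (standard MICCP stoichiometry, stated here for clarity rather than transcribed from the cited papers):

```
CO(NH2)2 + 2 H2O  ->  2 NH4+ + CO3^2-     (urease-catalyzed urea hydrolysis)
Ca^2+ + CO3^2-    ->  CaCO3 (s)           (calcite precipitation)
```

The first reaction raises the local pH and supplies carbonate, which is why calcite forms preferentially at bacterial cell surfaces adsorbed to soil particles.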
Culture-dependent microbes already contribute much to human life, yet the latent potential of the vast numbers of uncultured, and thus untouched, microbes is enormous (Patil et al. 2014). Indigenous microorganisms are an innate microbial consortium inhabiting the soil and the surfaces of all living things. They possess abilities of biodegradation, bioleaching, biocomposting, nitrogen fixation, and improvement of soil fertility, and they are used effectively as microbial inoculants, either occurring naturally in soil or added to it, where they can improve soil quality (Kumar et al. 2015).
Soil
Soil samples will be collected from three different points of the soil surface (0-20 cm). Samples will be collected from sites contaminated with heavy metals and petroleum, respectively. Non-contaminated soil will also be collected from arable land. The sampling site is located close to an industrial area in Korea and was previously used for cultivation.
Microorganism
The commercial EM and EM ceramic will be obtained from the Korean EM Research Organization (Busan, Republic of Korea).
Other beneficial microorganisms will be isolated from indigenous sources such as cultivated soils, contaminated soils, and concrete structures.
Culture media
The commercial molasses will also be obtained from the Korean EM Research Organization.
Chemical agar media containing nutrient broth and inorganic compounds will be prepared following recipes from previous studies, depending on the target microorganism.
Microorganisms are effective only when they are provided with a suitable, optimal culture medium. However, chemical agar media are too expensive to be feasible in real-world applications. Organic wastes such as food waste, agricultural by-products, and rice-washing water, processed by grinding or autoclaving, could be used as economical substrates to cultivate microorganisms. Addition of an organic compost or liquid swine manure for the removal of soil TPH showed higher efficiencies (84.4% and 92.2%, respectively) than inorganic nutrients (80.2%) (Kim et al. 2008).
In addition, soil solution extracted from indigenous soil by mixing and centrifugation may be an effective medium for adapting microorganisms to the remediated soil.
Soil washing
The soil sample contaminated with heavy metals will be remediated in a batch test with EDTA solution as reported by Lei et al. The air-dried soil samples will be passed through a 2 mm sieve to remove large particles, then thoroughly mixed to ensure uniformity. Tubes containing 2 g of soil and 20 ml of EDTA (0.075 mM) will be shaken at 280 rpm at room temperature for 3 h per cycle, for 3 cycles. The suspension will be centrifuged at 5000 rpm for 15 min, and the supernatants will be filtered through a 0.45 μm membrane for heavy metal analysis. Chen et al. compared the extraction efficiencies of several chelating agents for heavy metal contaminated soil; the results showed that the extraction efficiency of Cd increased with increasing eluent concentration, and at the same eluent concentration the extraction efficiency followed the order EDTA > DTPA > NTA > PA > CA > CD > HCl, but only EDTA achieved the desired leaching effect for Pb and Cd.
Thermal desorption
The soil sample contaminated with petroleum will be remediated by a remedy screening test as reported by the U.S. EPA. Among these tests, the static tray test simulates full-scale thermal treatment systems. Briefly, an aliquot of 100-500 g of the fine earth fraction (<2 mm) of contaminated soil is heated at 760 ℃ in a muffle furnace equipped with an electronic temperature controller. The depth of the soil should be kept at a minimum to eliminate temperature and concentration gradients within the soil bed. The time to reach the target temperature should be minimized to a practical laboratory timeframe, such as 5 to 10 minutes. Longer times may be required depending on the specific contaminants present in the soil.
Microorganism Isolation and Cultivation
Depending on the collection site, the collection and isolation methods differ, as conditions vary from place to place.
For isolation of bacterial strains, 1 g of indigenous soil will be homogenized with 9 ml of sterile deionized water. Serial dilutions are prepared in 9 ml of sterile deionized water and plated on Nutrient Agar (NA). From the NA plates, representative colonies of all the different morphologies are chosen at random, purified by sub-culturing, and maintained on NA slants. All culture work will be conducted aseptically.
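The tenfold dilution series above implies a simple back-calculation of viable counts from plate counts. As a minimal sketch (the function name and the 0.1 ml plating volume are illustrative assumptions, not taken from the protocol):

```python
def cfu_per_gram(colonies, dilution_exponent, plated_ml=0.1):
    """Estimate CFU per gram of soil from a plate count.

    colonies: number of colonies counted on the plate
    dilution_exponent: n for a 10^-n dilution (1 g soil + 9 ml water is 10^-1)
    plated_ml: volume spread on the plate (assumed 0.1 ml here)
    """
    return colonies * (10 ** dilution_exponent) / plated_ml

# e.g. 150 colonies on a 10^-5 plate with 0.1 ml plated:
print(cfu_per_gram(150, 5))  # 150000000.0 CFU/g
```

Counts from plates in the 30-300 colony range are usually preferred for this estimate, which is why several dilutions are plated in parallel.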
Cultivation of the microorganisms will be conducted under aerobic conditions in a medium containing Nutrient Broth. The inoculated culture will be incubated at 30 ℃ in an incubator for 120 h.
Soil amendment
The bacterial isolate (5 × 10^7 cells/ml) will be inoculated into 250 ml Erlenmeyer flasks containing the various culture media described above and cultivated aerobically at 30 ℃ with agitation at 150 rpm. The cells are harvested from the growth medium at early stationary phase by centrifugation at 5000 rpm for 10 min or by filtering through Whatman No. 1 filter paper. After rinsing in deionized water, the cells are again centrifuged or filtered. The cells can be diluted with sterile deionized water prior to use, and the resulting suspension is used as the microbial reagent.
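Adjusting a harvested cell suspension to the target inoculum density is a C1V1 = C2V2 dilution. A minimal sketch, with hypothetical stock density and volumes (none of these numbers come from the protocol):

```python
def stock_volume_ml(stock_density, target_density, final_volume_ml):
    """Volume of stock suspension needed so that, after topping up to
    final_volume_ml with sterile water, the target cell density is reached
    (C1 * V1 = C2 * V2)."""
    return target_density * final_volume_ml / stock_density

# e.g. dilute a 2e9 cells/ml harvest to 5e7 cells/ml in 100 ml final volume:
print(stock_volume_ml(2e9, 5e7, 100))  # 2.5 (ml of stock, topped up to 100 ml)
```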
Batch incubation will be carried out with 10g of 2-mm sieved remediated soil (oven dried) and different microbial reagents with a number of mixing ratios in 50 ml sterile glass bottle.
Pot incubation will be conducted by spraying the microbial reagents onto the remediated soils weekly for 8 weeks.
Soil and Microbes analysis
Soil quality indicators selected according to KS ISO 15176, depending on the intended end use, will be classified into fertility, strength, and bioremediation categories according to physical, chemical, biological, ecological, and engineering soil properties.
Physical & Engineering properties
Bulk density and water content will be evaluated by USDA method.
Compressive strength will be evaluated with the unconfined compression test, and shear strength by the direct shear test. Plasticity index and hydraulic conductivity will be estimated according to ASTM D4318-10 and D2434-68, respectively.
Chemical properties
Soil pH, EC, texture, exchangeable cations (Ca, K, Mg, Na), bulk density, and water content will be estimated by USDA methods. CEC will be evaluated by U.S. EPA Method 9081. Total nitrogen and available phosphate will be evaluated by UV/visible spectroscopy. Organic matter will be estimated by the Walkley-Black method.
Heavy metals in the soils will be extracted using a sequential extraction procedure and determined by atomic absorption spectrophotometry. Hydrocarbons will be extracted using a modified standard protocol for determining soil hydrocarbon content according to the ISO/DIS GC method.
Biological & Ecological properties
The microbial populations in the microcosm soils will be determined by a most probable number (MPN) procedure.
Five mustard (B. alva) and three pea (P. sativum) seeds will be added separately, in triplicate, to glass jars containing 20 g of soil re-wetted to 80% WHC. Lids are loosely screwed on to reduce evaporation while allowing aeration, and the seeds are left to germinate at 22 ℃ and 80% humidity under 16 h of full illumination and 8 h of darkness. When >70% of the seeds in the non-contaminated soil have germinated, the number of germinated seeds in all soil samples is recorded; this is after 4, 6, or 7 days of exposure for mustard and pea, respectively. Seedlings are dried at 105 ℃ for 24 h and their mean dry weight calculated to assess effects (Dawson et al. 2007). Aboveground length of the mustard and pea will be measured on the 8th day after seeding.
Statistical analysis
A t-test will be conducted on all soils in the process to evaluate treatment effects. The experimental results will be statistically analyzed using the SAS program at the 0.05 significance level.
Conclusions
This study aims to establish a system for enhancing the effective utilization of remediated soils by considering their place of origin, soil properties, and the soil criteria of prospective end uses. In doing so, it is intended to develop a soil quality improvement device applied to remediated soil from an actual disposal plant, to design a scaled-up version for field application, and to derive the optimal operating factors.
Ultimately, the correlation between soil properties and microbial amendment will be revealed at bench and pilot scale, and it could be used to model soil properties as a function of microbial amendments and soil demands for effective recycling.
Figure 1 Amounts of excavated soils and remediated soils in Korea ('06-'11)
The use of indigenous inoculum isolated from local soils resulted in a higher arbuscular mycorrhizal fungal diversity in the roots of plants growing in soil remediated by EDTA chelating-agent extraction, compared with soils amended with the commercial inoculum (Maček et al. 2016).
Figure 3 Expected demands of the microbially amended soils
Figure 4 Schematic process of remediation and amendment
Figure 5 Schematic process of microbial amendment
Figure 6 Schematic flow for modelling of microbial amendment
"year": 2017,
"sha1": "5f1a3e71714737f7badd74b55bf5b469fe17408b",
"oa_license": "CCBY",
"oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2017/52/matecconf_eacef2017_04001.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "5f1a3e71714737f7badd74b55bf5b469fe17408b",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
} |
Contributions of suboolemmal acidic vesicles and microvilli to the intracellular Ca2+ increase in the sea urchin eggs at fertilization
The onset of fertilization in echinoderms is characterized by instantaneous increase of Ca2+ in the egg cortex, which is called 'cortical flash', and the subsequent Ca2+ wave. While the cortical flash is due to the ion influx through L-type Ca2+ channels in starfish eggs, its amplitude was shown to be affected by the integrity of the egg cortex. Here, we investigated the contribution of cortical granules (CG) and yolk granules (YG) to the sperm-induced Ca2+ signals in sea urchin eggs. To this end, prior to fertilization, Paracentrotus lividus eggs were treated with agents that disrupt or relocate CG beneath the plasma membrane: namely, glycyl-L-phenylalanine 2-naphthylamide (GPN), procaine, urethane, and NH4Cl. All these pretreatments consistently suppressed the cortical flash in the fertilized eggs, and accelerated the decay kinetics of the subsiding Ca2+ wave in most cases. By contrast, centrifugation of the eggs, which stratifies organelles but not the CG, did not exhibit such changes except that the CF was much enhanced in the centrifugal pole where YG are localized. Surprisingly, we noted that pretreatment of the eggs with these CG-disrupting agents or with the inhibitors of L-type Ca2+ channels all drastically reduced the density of the microvilli and their individual shapes on the egg surface. Taken together, our results suggest that the integrity of the egg cortex ensures successful generation of the Ca2+ responses at fertilization, and that modulation of microvilli shape and density may serve as a mechanism of controlling ion flux across the plasma membrane.
Introduction
At fertilization, eggs of all animal species display an increase of intracellular Ca2+, which is used as a biochemical signal to activate the egg. Historically, starfish and sea urchin eggs have been used as suitable experimental models for studying the universal phenomenon of Ca2+ signaling in fertilized eggs, in which the sperm-triggered Ca2+ increase resumes the meiotic cycle, induces protein and DNA synthesis, and initiates embryonic development.
The sperm-induced Ca2+ increase in the fertilized egg is mainly due to ion influx and to release from internal Ca2+ stores, e.g. the endoplasmic reticulum (ER) and acidic vesicles. In sea urchin eggs, the intracellular Ca2+ increase takes place in two sequential events. First, upon sperm-egg fusion, the egg membrane potential depolarizes and the L-type Ca2+ channels open (influx). This leads to a synchronized Ca2+ increase in the entire subplasmalemmal region, which is known as the 'cortical flash' (CF). The electrical change of the plasma membrane induced by the fertilizing sperm is thought to render the fertilized egg refractory to the entry of additional spermatozoa: the fast electrical block to polyspermy [1]. The CF, which lasts about 13.5 sec in sea urchin eggs, precedes the onset of the Ca2+ wave [2]. Thus, the rise of the Ca2+ wave appears to require a latent period, which can be defined here as the time lag from the beginning of the CF to the onset of the Ca2+ wave that starts from the sperm-egg interaction site. The Ca2+ wave then propagates to the antipode, triggering CG exocytosis. As a result, the vitelline layer elevates to form the fertilization envelope (FE), which has been suggested as a mechanism of mechanical block to polyspermy [3,4].
A few second messengers can mobilize intracellular Ca2+ in sea urchin eggs, mimicking the Ca2+ wave in fertilized eggs. Inositol 1,4,5-trisphosphate (InsP3) and cyclic ADP-ribose (cADPr) open the cognate InsP3 and ryanodine receptors on the ER, respectively. On the other hand, nicotinic acid adenine dinucleotide phosphate (NAADP) may liberate Ca2+ from distinct internal stores such as reserve (yolk) granules, the lysosome-related vesicles of sea urchin eggs [5,6].
The cortex of echinoderm eggs, through which the sperm-induced Ca2+ signals originate and propagate, consists of the ER and CG, which are interconnected and attached to the plasma membrane by a meshwork of F-actin [7]. The egg surface, where Ca2+ influx takes place, is covered with microvilli filled with actin filaments that undergo constant treadmilling [8,9]. The tips of the microvilli may assist in recognizing the sperm by means of cognate receptors [10,11]. Hence, the actin filaments in the microvilli and in the subplasmalemmal region may have profound impacts on sperm-egg recognition and binding, as well as on the subsequent signal transduction. In sea urchin eggs, the sperm-fusion site forms a fertilization cone made of F-actin, and this structure is thought to be instrumental in engulfing the sperm [12]. In echinoderms, in parallel with the Ca2+ increases and CG exocytosis, fertilized eggs exhibit microvilli elongation into the perivitelline space. Emanating from the egg surface to connect to the FE, the extending microvilli and F-actin-containing spikes may provide equidistant separation of the vitelline layer or accommodate the excess plasma membrane produced as a result of CG exocytosis and membrane fusion [13,9]. The structural modification in maturing starfish oocytes might also underlie the changes in the electrical properties of the plasma membrane [14,15]. In line with that, deprivation of external Ca2+ during meiotic maturation, which caused significant changes in the cortical region of the oocytes in terms of vesicle positioning, microvillar morphology, and the thickness of the F-actin layer, also resulted in much reduced Ca2+ influx when the eggs were fertilized in artificial seawater containing Ca2+ [16]. This finding may be ascribed to the fact that ion channels and pumps are often associated with neighboring actin filaments that may modulate their activity [17-19].
Sea urchin eggs contain various types of acidic vesicles with different functions and capacities to store Ca2+. CG in a typical sea urchin egg number over 15,000 [20]. They are normally docked to the plasma membrane and contain enzymes that contribute to the formation of the FE matrix. Besides serving as a calcium store, CG may synthesize Ca2+-linked second messengers, as two isoforms of ADP-ribosyl cyclase (ARC) are located in their lumen and generate cADPr in a pH-dependent manner [21]. Another class of vesicles are the so-called 'white' or 'clear' vesicles (acidocalcisomes), representing acidic calcium-storage compartments with a high content of polyphosphates (poly P). These vesicles, which translocate towards the centripetal part after egg centrifugation [22], are not sensitive to the lysosome-disrupting GPN, and their Ca2+ is released by poly P hydrolysis, but not by NAADP [23]. Perhaps the largest and most abundant acidic vesicles are the YG, which comprise one third of the total cell volume [24] and provide nutrients for the developing embryo [25]. The presence of lysosome-related proteins in the lumen of the YG makes them a part of the endo-lysosomal system, as they are probably derived from endosomes acquired during oogenesis [26,27]. Despite their mildly acidic pH (6.8), YG are easily stained with Acridine orange and LysoTracker-RED DND-99 [22,28] and tend to shift to the centrifugal pole when the egg is stratified by centrifugation [29]. Consistent with the high Ca2+ content of their lumen, these vesicles have been implicated in NAADP-sensitive Ca2+ release through two-pore channels (TPCs) [5]. Finally, located near the CG, there are pigment granules with an unknown concentration of Ca2+. These granules are 1 µm in diameter and low in number. As with the clear vesicles, the pigment granules tend to shift to the centripetal pole in the stratified egg [20,30].
Acidic vesicles represent a non-ER Ca2+ store that contributes to the Ca2+ increase in the sea urchin egg in response to NAADP photoactivation [29,6]. In starfish eggs, photoactivation of caged NAADP induces depolarization of the membrane potential and a Ca2+ influx that is completely inhibited when NAADP uncaging is performed in CaFSW [31-34]. Both the membrane current and the CF are severely reduced when the subplasmalemmal actin cytoskeleton is altered prior to fertilization or NAADP uncaging [35]. One intriguing unsolved question regarding NAADP is that it appears to stimulate Ca2+ transport across the plasma membrane in starfish eggs [35,36], whereas the same second messenger can induce a CF in sea urchin eggs mainly from internal stores such as acidic vesicles [37]. However, during sea urchin fertilization in artificial seawater containing a low Ca2+ concentration (1 mM), the sperm-initiated CF is substantially reduced [2], as opposed to when caged NAADP is photoliberated [38]. This observation casts doubt on the role of NAADP in generating the Ca2+ influx in fertilized sea urchin eggs [39]. Nonetheless, it is the NAADP-triggered Ca2+ increase that most resembles the CF, and the InsP3- and cADPr-sensitive Ca2+ stores in sea urchin eggs may be interconnected with the NAADP-sensitive Ca2+ stores [6,37]. Thus, in view of the contradictory data on how acidic vesicles may contribute to the CF at fertilization of sea urchin eggs [32,39], we examined the effect of their depletion on Ca2+ signaling at fertilization. We previously reported that a brief preincubation of starfish eggs with the Ca2+ ionophore ionomycin, which caused interfusion of CG and white vesicles, led to significant suppression of the CF at fertilization [40]. These results suggested that these vesicles may be somehow linked to the CF [41,21]. One conceivable model in this regard is a morpho-functional interplay between the Ca2+ channels in the plasma membrane and the acidic vesicles nearby, which may serve as an internal Ca2+ store. In the present study, we tested this idea by inducing structural changes in the CG and vesicles in the tight subplasmalemmal zone of sea urchin eggs prior to fertilization. Firstly, we used GPN to induce rupture or interfusion of CG and vesicles. Secondly, we tested the effect of the weak base NH4Cl and of anesthetics such as urethane and procaine on the CF and Ca2+ waves in fertilized eggs. Pretreatment with these agents is known to block CG exocytosis in fertilized sea urchin eggs by dislocating the granules and vesicles towards the inner cytoplasm [42-45]. Finally, we stratified the eggs by centrifugation so as to dislodge only the vesicles and organelles from their original subcellular locations, and not the CG. Being tightly attached to the plasma membrane, most CG remain near it, whereas the YG migrate towards the centrifugal egg pole, and the ER and the nucleus to the opposite, centripetal side [22,29,30]. By examining the sea urchin egg surface topography and the ultrastructural modifications of the subplasmalemmal regions in relation to their effects on the CF and Ca2+ waves at fertilization, we drew the conclusion that the natural distribution of microvilli and the correct positioning of CG in the vicinity of the plasma membrane are required for the proper occurrence of sperm-induced Ca2+ signals.
Preparation of gametes.
Paracentrotus lividus were collected from the Gulf of Naples during the breeding season from November to April and maintained in seawater at 16 °C. Eggs and sperm were collected by the intracoelomic injection of 0.
Confocal microscopy and Ca2+ measurement.
To stain lysosome-like acidic vesicles, P. lividus eggs were incubated in FSW containing 10 nM LysoTracker-RED DND-99 (Molecular Probes) for 1 h. To assess the effect of GPN, these eggs were subsequently exposed to either 200 µM GPN (Santa Cruz Biotechnology) or control 0.05% dimethyl sulfoxide (DMSO) in seawater. The fluorescence changes in the stained vesicles were examined in the same eggs before and after 40 minutes of incubation with GPN (or DMSO) using a Zeiss LSM 510 META laser scanning confocal microscope (Jena, Germany). Some eggs were then fertilized, and fluorescent images were captured 25 minutes later. To monitor the intracellular Ca2+ changes, a mix of the calcium dye Calcium Green conjugated to 10 kDa dextran (Molecular Probes) and Rhodamine-Red (35 µM) was prepared in the injection buffer [9] and microinjected into P. lividus eggs using an air-pressured microinjector (Eppendorf FemtoJet). After each microinjection, eggs were left for 20 minutes in FSW before every treatment. Immobilized in a chamber, intact eggs were mounted on a Zeiss Axiovert 200 microscope with a Plan-Neofluar 20x/0.50 objective to monitor the cytosolic Ca2+ changes during fertilization with a cooled CCD camera (MicroMax, Princeton Instruments, Inc., Trenton, NJ). Quantification of the Ca2+ data was presented as relative fluorescence units (RFU) calculated at a given time point and normalized to the baseline fluorescence (F0) following the formula F_rel = (F − F0)/F0, where F represents the average fluorescence level of the entire oocyte. All the Ca2+ data were analyzed with the MetaMorph Imaging System software (Universal Imaging Corporation, West Chester, PA).
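The baseline normalization described above, F_rel = (F − F0)/F0, can be sketched as follows; the trace values and the number of baseline frames are illustrative, not taken from the paper's data:

```python
def relative_fluorescence(trace, baseline_frames=10):
    """Normalize a fluorescence time series to relative fluorescence units.

    trace: list of mean whole-egg fluorescence values, one per frame
    baseline_frames: number of initial frames averaged to estimate F0
    """
    f0 = sum(trace[:baseline_frames]) / baseline_frames
    return [(f - f0) / f0 for f in trace]  # F_rel = (F - F0) / F0

# a flat baseline of 100 followed by a rise to 150 gives a 0.5 RFU peak:
rfu = relative_fluorescence([100.0] * 10 + [150.0])
print(rfu[-1])  # 0.5
```

Normalizing by a pre-stimulus baseline makes traces comparable across eggs despite differences in dye loading.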
Chemicals and pharmacological inhibitors.
Diltiazem and verapamil were purchased from Sigma-Aldrich and dissolved in distilled water. Both inhibitors were used to incubate the eggs at final concentrations of 10, 50, and 100 µM for 40 minutes before spermatozoa were added. Urethane and procaine (Sigma-Aldrich) were dissolved in FSW. Eggs exposed to 400 mM urethane for 5 minutes were transferred to FSW and incubated for 5 minutes before insemination. In the case of procaine (10 mM final concentration), eggs were treated for 20 minutes, transferred to FSW, and immediately inseminated. The treatment with NH4Cl was for 30 minutes at a final concentration of 40 mM (pH 9.0), after which the eggs were transferred to FSW and then fertilized.
Egg stratification.
The procedure was adapted from Lee et al. 2000 [29]. P. lividus eggs were microinjected with the calcium dye mix, incubated in FSW for 10 minutes, and then centrifuged in FSW containing 1% sucrose (dissolved in distilled water) for 30 minutes (12,000 g). After centrifugation, eggs were washed for 5-10 minutes in FSW to remove the sucrose and then inseminated.
Transmission and Scanning Electron Microscopy (TEM and SEM).
Sea urchin eggs were fixed in FSW containing 0.5% glutaraldehyde for 1 hour. Samples were then post-fixed with 1% osmium tetroxide and dehydrated in a series of ethanol solutions of increasing concentration. Samples embedded in EPON were polymerized for 2 days at 60 ˚C in a dry oven. Ultrathin sections for TEM observation (ZEISS LEO 912 AB) were stained with UAR-EMS uranyl acetate replacement stain and lead citrate. For SEM observation, ethanol-treated samples were further dehydrated with a critical point dryer (CPD300, Leica), sputter-coated with gold, and observed with a JEOL 6700F scanning electron microscope. For quantification, microvilli visualized by SEM were counted in three separate regions (26.1 μm2) randomly selected from 4 different eggs for each experiment, and the results were analyzed by t-test and ANOVA using Prism 3.0.
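The per-region quantification above amounts to converting counts in a fixed 26.1 μm² field to densities and comparing groups with a t-test. A minimal sketch with made-up counts (the numbers and the Welch t statistic stand in for the Prism 3.0 analysis):

```python
import statistics as st

AREA_UM2 = 26.1  # area of each counted SEM region, from the Methods

# hypothetical microvilli counts per region (control vs. treated eggs)
control = [210, 198, 225, 205, 217, 201]
treated = [122, 131, 118, 140, 127, 125]

def density(counts):
    """Convert per-region counts to microvilli per square micrometre."""
    return [c / AREA_UM2 for c in counts]

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances)."""
    va, vb = st.variance(a) / len(a), st.variance(b) / len(b)
    return (st.mean(a) - st.mean(b)) / (va + vb) ** 0.5

t = welch_t(density(control), density(treated))
print(round(t, 2))  # a large positive t indicates fewer microvilli after treatment
```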
The cortical flash in the fertilized eggs of sea urchin is mainly derived from the external source.
To test whether the CF depends on external Ca2+, P. lividus eggs were fertilized in ASW containing 10 or 1 mM Ca2+ following a 10-minute preincubation. As sperm motility requires Ca2+ in the medium, fertilization does not take place in Ca2+-free seawater. In 1 mM ASW, however, the time needed for the sperm to induce intracellular Ca2+ increases in the eggs was more than 5-fold longer (490.7 ± 385.5 sec, n=19) in comparison with control eggs in ASW containing 10 mM Ca2+ (82.9 ± 129.9 sec, n=23; P<0.0001). As shown in Fig. 1A, lowering the Ca2+ concentration in the medium produced a significant decrease in the amplitude of the CF (0.02 ± 0.006 RFU, n=5) compared to the control eggs (0.09 ± 0.008 RFU, n=9; P<0.001), whereas the peak amplitude of the Ca2+ wave was not significantly changed (1 mM ASW, 0.66 ± 0.16 RFU; 10 mM ASW, 0.67 ± 0.14 RFU; P=0.9423). These data confirmed the reports from other species of sea urchin [2]. Thus, unlike the Ca2+ wave, the synchronized Ca2+ increase taking place over the entire egg surface during the initial 10 seconds of fertilization was heavily dependent upon external Ca2+. To further examine whether voltage-gated calcium channels are involved in the generation of the CF, we fertilized sea urchin eggs pretreated with 10, 50, and 100 µM of the L-type Ca2+ channel inhibitors diltiazem and verapamil. Both inhibitors caused significant suppression of the CF in a dose-dependent manner (Fig. 1D, red lines), and the higher doses of the drugs also modestly diminished the amplitude of the Ca2+ wave (blue lines, Fig. 1D). These data confirmed that the sperm-induced CF in P. lividus eggs involves Ca2+ influx through voltage-gated L-type Ca2+ channels, as has been suggested for other species of sea urchin and starfish.
Depleting the GPN-sensitive Ca2+ store in sea urchin eggs leads to substantial inhibition of the cortical flash, abnormal FE elevation, and polyspermy.
When sea urchin eggs microinjected with caged NAADP are exposed to UV, an abrupt Ca2+ increase confined to the cortex propagates as a Ca2+ wave towards the inner cytoplasm [6]. This cortical Ca2+ increase, which closely resembles the CF during fertilization, was later suggested to originate from acidic vesicles (i.e., YG) [5]. To test whether this intracellular Ca2+ store contributes to the CF, we applied GPN to specifically deplete it before fertilization. Pretreatment of sea urchin (P. lividus) and starfish (Astropecten aranciacus) eggs to osmotically rupture the membranes of acidic vesicles produced random but localized Ca2+ bursts in the egg periphery, suggesting that the GPN-susceptible vesicles contain Ca2+ (Supplemental data 1), as was shown in other species of sea urchin [5]. When fertilized, sea urchin eggs preincubated with GPN (200 µM, 40 min) displayed changes in the peak amplitude of the Ca2+ wave (0.31 ± 0.07 RFU, n=14) with reference to control eggs from the same batch pretreated with 0.05% DMSO (0.38 ± 0.07 RFU, n=12; P<0.05) (Fig. 2B). In addition, GPN pretreatment significantly diminished the planar velocity of the Ca2+ wave (3.84 ± 0.38 µm/sec, n=14) as compared to control eggs (4.76 ± 0.65 µm/sec, n=12; P<0.001), and accelerated the decline of the Ca2+ levels to baseline values (0.072 ± 0.034 RFU), as opposed to the average values in the control eggs (0.254 ± 0.051 RFU; P<0.001), measured 3 minutes after the initial Ca2+ signal was detected (Fig. 2B). These observations might suggest that the Ca2+ ions stored in the GPN-susceptible vesicles somehow contribute to the Ca2+ wave at fertilization. However, the strongest effect of GPN was observed on the amplitude of the CF. In the same GPN-pretreated eggs, the CF was barely detectable at fertilization, as its average amplitude was reduced to merely 0.004 ± 0.005 RFU, whereas the average CF detected in the batch-matched control eggs was 0.064 ± 0.024 RFU (P<0.001).
Furthermore, with the loss of GPN-susceptible cortical vesicles that would otherwise participate in exocytosis, these eggs often failed to fully elevate the FE and incorporated supernumerary spermatozoa in a dose-dependent manner (Fig. 2C and D). Whereas the frequency of eggs displaying a fully elevated FE was 100 % in the control eggs pretreated with 0.05 % DMSO (n=17), the rates in the eggs pretreated with 100 and 200 µM GPN were reduced to 36.8 % (n=19) and 0 % (n=7), respectively (Fig. 2C). In the same experiment, the control eggs were mostly monospermic, as the average number of egg-incorporated sperm at fertilization was 1.06 ± 0.24 (n=17). By contrast, in the eggs pretreated with 100 and 200 µM GPN, the internalized sperm counts were greatly increased to 4.21 ± 3.20 (n=19; P<0.001 compared with the control) and 37.0 ± 14.9 (n=7; P<0.01 compared with the eggs pretreated with 100 µM GPN), respectively (Fig. 2C and D). Thus, depletion of the GPN-susceptible Ca 2+ store caused inhibition of the CF, a reduction of the peak amplitude of the Ca 2+ wave, an increased rate of polyspermy, and failure of FE elevation.
GPN disrupts the acidic vesicles stained with LysoTracker-RED and causes fusion of the CG with the adjacent vesicles.
In Lytechinus pictus eggs, it was reported that GPN eliminates the acidic vesicles stained with LysoTracker-RED [5]. To test whether GPN disrupts a population of acidic vesicles in P. lividus eggs as well, sea urchin eggs exposed to 10 nM LysoTracker-RED for 1 h were incubated in the presence of 200 µM GPN. After 40 minutes, some of these eggs were fertilized to follow the fate of the LysoTracker-RED-positive vesicles. As shown in Fig. 3A, GPN eliminated the fluorescent signals from the LysoTracker-RED-stained vesicles (see the red dots), in comparison with the control eggs exposed to DMSO, which retained red dots throughout the cytoplasm. Curiously, the number of acidified vesicles visualized by LysoTracker-RED increased throughout the cytoplasm following fertilization. This tendency was evident in both control and GPN-treated eggs by 25 minutes after fertilization, and might reflect the uptake of H + into the vesicles during the course of alkalinization of the cytosol after fertilization [22]. We then examined the effect of GPN on the ultrastructure of the egg cortex before and after fertilization. In line with the loss of a population of acidic vesicles visualized by LysoTracker-RED, the egg cortex viewed by transmission electron microscopy (TEM) displayed corresponding changes. First, GPN dislodged CG from the vicinity of the plasma membrane in unfertilized eggs. The membrane-bound CG in the control eggs (red arrow) apparently underwent exocytosis and were no longer visible after fertilization (Fig. 3B). In the eggs pretreated with GPN, some CG were either positioned deeper in the cytoplasm or displayed signs of fusion among themselves (white arrows) or with other vesicles (black arrow) (Fig. 3B). The EM image of the fertilized eggs after GPN pretreatment showed that the FE was elevated only partially, and that the microvilli extension in the perivitelline space was barely detectable (Fig. 3B). Hence, these data suggest that the GPN-induced changes in the CF and the Ca 2+ wave in fertilized sea urchin eggs (Fig. 2) might be linked to the alteration of the CG and microvilli structures in the cortex.
GPN, verapamil and diltiazem cause drastic reorganization of microvilli on the egg's surface.
As the partial FE elevation in the GPN-pretreated eggs (Fig. 3B) could be due to incomplete CG exocytosis or to failed extension of microvilli [13], we examined microvillar morphology on the egg surface by scanning electron microscopy (SEM). We found that the microvilli covering the surface of P. lividus eggs pretreated with 200 µM GPN were nearly twofold lower in number and bore irregular shapes suggestive of bending, overextension, or fusion. In contrast, the DMSO-treated control eggs manifested microvilli of rather uniform length and distribution (Fig. 4A, Table 1). After fertilization, the control eggs pretreated with DMSO showed elevation of an intact FE on the surface, to which a spermatozoon is seen attached (Supplementary data S2). On the other hand, the eggs fertilized after GPN pretreatment manifested multiple cracks on the surface of the FE (Supplementary data S2, arrow). Hence, GPN not only changes the vesicles and granules in the egg cortex, but also alters the egg surface and the functionality of the microvilli. As the treatment with GPN caused significant changes in the shape and number of microvilli together with specific inhibition of the CF, we tested whether the conventional inhibitors of L-type voltage-gated Ca 2+ channels would have the same effect on microvilli. Interestingly, we found that diltiazem and verapamil both increased the percentage of microvilli with irregular shapes, as in the GPN-pretreated eggs, and that the numbers of microvilli per unit area were significantly reduced in comparison with control eggs from the same batch (Fig. 4B, Table 1). Hence, the structural changes in microvilli may add to the pharmacological effects of these two inhibitors of L-type Ca 2+ channels, at least in sea urchin eggs. As verapamil and diltiazem did not affect the cortical vesicles, eggs pretreated with these drugs underwent normal CG exocytosis at fertilization and exhibited full elevation of the FE (data not shown).
SEM and TEM observations of P. lividus eggs treated with agents dislocating CG.
As an experimental paradigm slightly different from GPN, which ruptures CG, we dislodged CG and clear vesicles from the plasma membrane to see whether such changes would have the same effects on Ca 2+ signaling. To this end, P. lividus eggs were exposed to procaine, urethane and ammonia, and the morphological modifications of the egg surface were examined by electron microscopy. The TEM image of the untreated eggs normally displayed an orderly monolayer of CG underneath the plasma membrane (Fig. 5A, red arrow). By contrast, as with Arbacia punctulata eggs, P. lividus eggs treated with urethane (400 mM) or procaine (10 mM) showed numerous CG that were somewhat smaller and displaced away from the plasma membrane, often forming multiple layers (e.g., compare the size and arrangement of the CG marked with red arrows in Fig. 5). In addition, unlike the control eggs, the surface of the eggs pretreated with urethane or procaine was highly corrugated or undulated, with numerous invaginations and protrusions (Fig. 5B and C, black arrow). Similarly, dislocation of the CG from the outer region of the egg cortex (red arrows) was also observed in the eggs pretreated with NH4Cl (pH 9.0), but this treatment did not much change the egg's contour or the size of the CG. Nonetheless, it has been demonstrated that urethane, procaine, and NH4Cl induce structural changes in the microvilli and subplasmalemmal layer of sea urchin eggs (Strongylocentrotus purpuratus and Arbacia punctulata), as judged by fluorescent probes for F-actin [45]. In line with that, our SEM data revealed significant changes in the structure of microvilli in the eggs treated with the same agents (Fig. 5, SEM). After being exposed to 400 mM urethane for 5 minutes, the microvilli on the corrugated surface of P. lividus eggs took irregular shapes (Fig. 5B, SEM). On the other hand, eggs exposed to 10 mM procaine for 30 minutes showed a considerable reduction in microvilli number with an increased frequency of irregularly shaped microvilli (Fig. 5C, Table 1). Finally, the microvilli of the eggs treated with 40 mM NH4Cl (pH 9.0) for 30 minutes displayed morphological alteration, with slightly elongated or twisted shapes, but again with reduced density per unit area (Fig. 5D, Table 1). Thus, besides CG dislocation, which was the common feature of the three pretreatments, microvilli were reduced in number while the frequency of irregularly shaped microvilli increased (Table 1). Hence, in these conditions, the fertilizing sperm is met by a strikingly different egg with a completely reorganized cell surface.
Effects of urethane, procaine, and NH 4 Cl on the sperm-induced Ca 2+ signals.
Once dislodged and relocated, the CG lose their contact with the plasma membrane in an altered cytoskeletal environment, as was seen for the microvilli. Thus, we examined the physiological consequences of such structural changes. When fertilized, the eggs pretreated with urethane, procaine and NH4Cl exhibited consistent changes in certain aspects of intracellular Ca 2+ signaling (Table 2 and Fig. 6). In short, none of the treatments affected the amplitude of the global Ca 2+ increase, indicating that CG dislocation and the alteration of microvilli do not substantially perturb the generation of the Ca 2+ wave, whereas rupturing CG with GPN did so. Nonetheless, the eggs pretreated with procaine or NH4Cl exhibited much faster declining kinetics of the intracellular Ca 2+ level after fertilization (Fig. 6). In addition, eggs pretreated with urethane and procaine displayed slightly faster and slower planar velocity of the Ca 2+ wave, respectively (Table 2). These observations may have several implications regarding the possible rearrangement of the components constituting the 'excitable cytoplasmic media' that either transmit or buffer the Ca 2+ wave after CG dislocation (see Discussion). More importantly, in all these eggs with CG relocation and microvilli changes, the amplitude of the sperm-induced CF was significantly suppressed. Extending the data obtained with GPN, these results suggest that the close association of CG with the plasma membrane and the normal shape and distribution of microvilli may be important for the generation of the CF and the Ca 2+ wave. Given that CG are important for the generation of the CF at fertilization, we asked whether other Ca 2+ -storing acidic vesicles (e.g., YG) could contribute to the sperm-initiated Ca 2+ increase in sea urchin eggs. To that end, we stratified P. lividus eggs by centrifugation under a physical condition that dislodges and redistributes the egg organelles to certain subcellular regions [30]. While CG still remain bound to the plasma membrane in this condition, the clear vesicles, the nucleus and the ER translocate towards the centripetal pole, whereas the YG and the mitochondria move to the centrifugal egg pole [22]. Firstly, we noted that the surface of the elongated eggs after centrifugation was corrugated (Fig. 7B) and the density of microvilli was significantly decreased (Table 1).
Furthermore, the microvilli in the stratified eggs appeared thicker and longer, with irregular shapes (Fig. 7B, Table 1), whereas the CG retained their apparent attachment to the plasma membrane, as observed with TEM (Fig. 7C). When the stratified eggs were fertilized, the Ca 2+ increase at the time of the CF was most prominent in the narrow centrifugal pole where YG are localized and ER is scarce (Fig. 7C and 8A). As a whole, however, the average amplitude of the CF manifested by the stratified eggs was only marginally reduced, with high variability (0.09 ± 0.04 RFU, n=11), in comparison with the control (0.14 ± 0.01 RFU, n=13; P=0.0598) (Fig. 8B). This modest decrease in the amplitude of the CF, despite the considerably large Ca 2+ increase localized in the centrifugal pole (Fig. 8A), suggests that there might be a general tendency towards suppression of the CF over the rest of the cell surface, as implied by the significantly reduced microvilli density (Table 1). On the other hand, the peak amplitude of the Ca 2+ wave in the stratified eggs (0.43 ± 0.08 RFU, n=11) was significantly lower than the values in intact eggs from the same batch (0.67 ± 0.16 RFU, n=13; P<0.001), but the accelerated decline of the intracellular Ca 2+ level was not observed, unlike in the eggs pretreated with agents dislocating or disrupting CG (Fig. 8B and C). Interestingly, the polarized redistribution of Ca 2+ stores such as the ER in the stratified eggs did not lead to a clear preference with respect to the origin of the sperm-induced Ca 2+ wave.
When fertilized, 53.9 % of the stratified eggs (n=21) triggered the sperm-induced Ca 2+ wave from the centripetal pole where the nucleus and ER are located (Fig. 9A). Surprisingly, nearly half (46.1 %) of the stratified eggs (n=18) initiated the Ca 2+ wave from the centrifugal side where ER is relatively scarce (Fig. 9A and B). As the ER is assumed to be the primary internal store affording the Ca 2+ wave in fertilized eggs, this result suggests that at least the initial Ca 2+ liberated during wave generation might not necessarily derive from the ER. Nonetheless, the stratified eggs with polarized distribution of the internal Ca 2+ stores exhibited a differential Ca 2+ rise at fertilization. When the CF was further assessed in the two parts of the stratified eggs regardless of the site of sperm fusion, the amplitude of the CF was always significantly higher in the centrifugal half where YG are preferentially located (Fig. 9B). Importantly, we noted that the way the CF is generated in the two halves was apparently distinct. While the CF in the centripetal half appears to proceed in one phase, the initial burst of Ca 2+ beneath the plasma membrane in the centrifugal half is immediately followed by a secondary Ca 2+ increase in the subjacent cortex (Fig. 8A; Fig. 9A and B). This in part explains why the amplitude of the CF in the centrifugal half (0.16 ± 0.06 RFU, n=27), where YG are located, was significantly higher than that in the centripetal half (0.11 ± 0.05 RFU, n=27; P<0.001) (Fig. 9B). This observation raises the possibility that, in theory, YG can enhance the Ca 2+ increase during the later stage of the CF at fertilization.
Discussion
The cortex of the sea urchin egg is crowded with various granules and vesicles that are known to contain Ca 2+ in their lumen. In this manuscript, we have examined their potential contribution to the intracellular Ca 2+ signaling during the initial phase of fertilization. While the CF mainly represents Ca 2+ influx from outside through L-type Ca 2+ channels (Fig. 1), the data presented in this communication suggest that the sperm-induced Ca 2+ signals are affected by cortical structural modifications such as ablation or dislocation of CG and other vesicles in the subplasmalemmal region, as well as by changes in the number and morphology of the microvilli. Firstly, the fusion and disruption of vesicles in the egg cortex by GPN led to a severe reduction in the amplitude of the CF and a faster decline of the Ca 2+ wave (Fig. 2B). This treatment led to no or only partial elevation of the FE, indicating that GPN may rupture not only YG, as previously reported [5,29], but also the CG. Unlike GPN, treatment of P. lividus eggs with urethane, procaine, and NH4Cl simply dislodged CG away from the plasma membrane (Fig. 5). Also in these eggs, the amplitude of the CF was severely reduced, while the peak of the Ca 2+ wave was not much affected (Fig. 6). Nonetheless, as with the GPN-pretreated eggs, the decay phase of the Ca 2+ wave in these eggs (especially the ones pretreated with procaine and NH4Cl) dropped notably faster to the basal level (see the brown curves in Fig. 6 after the Ca 2+ peak, Table 2). This consistent observation might be attributed to spatial and structural modifications that rendered these vesicles and granules more efficient as Ca 2+ -re-uptaking stores. Alternatively, the same result might reflect failure of a secondary Ca 2+ -releasing mechanism whose potential action during the late phase of the Ca 2+ increase is masked by the massive Ca 2+ increases caused by InsP 3 or cADPr. A potential candidate player in the latter scenario is the set of ion channels involved in store-operated Ca 2+ entry (SOCE) [46]. Although such a mechanism has been controversial in sea urchin eggs [47,48], it is conceivable that subtle changes introduced in the subplasmalemmal region by these agents might have decoupled the Ca 2+ -induced Ca 2+ entry system. In view of the fact that starfish oocytes and eggs are endowed with a fair amount of mRNA encoding molecules homologous to orai-2 and STIM1 [49], whether SOCE is at work during the late phase of the Ca 2+ increase in fertilized sea urchin eggs would be an interesting topic for future study.
Previous work in starfish eggs showed that photoliberation of NAADP induced a Ca 2+ influx accompanying depolarization of the plasma membrane, which was negated in CaFSW [31,32]. By contrast, in sea urchin eggs, this effect was strongly inhibited by GPN [5], underscoring the role of GPN-sensitive vesicles in shaping the CF [50,51]. In starfish eggs, however, 100 µM GPN did not much affect the NAADP-induced membrane depolarization [36]. Thus, species-specific differences may exist on this matter. Furthermore, in view of the finding that opening of the TPCs in sea urchin egg homogenates may be inhibited by verapamil and diltiazem [52], the suppression of the CF by the same drugs (Fig. 1) might be in part due to the inhibition of TPC. However, a study in starfish (Patiria miniata) indicated that knockdown of all three TPC isoforms in the eggs did not suppress the CF, but only altered the timing and pattern of the Ca 2+ responses at fertilization [53]. Hence, it is not likely that, at least in starfish eggs, the generation of the CF is contributed by TPC, which might as well represent a sodium channel [54].
Our observation that disruption or dislocation of the vesicles and granules in the egg cortex prior to fertilization consistently suppresses the CF raised the possibility that the integrity of these vesicles beneath the plasma membrane is essential for the production of the CF. Obviously, it is not conceivable that these vesicles and granules per se are exclusively accountable for the synchronized Ca 2+ increase over the entire surface of the egg during the CF. As a Ca 2+ store, however, the CG and the acidic vesicles may either release Ca 2+ that primes Ca 2+ influx [55], or augment the CF by providing the aforementioned secondary Ca 2+ increase after the Ca 2+ influx (Fig. 8 and 9). In either case, physical contact or vicinity between the CG and the plasma membrane would be important for the generation of a full-fledged CF.
During sea urchin fertilization, sperm fusion with the egg elicits the initial inward current and the membrane depolarization [56,57]. In P. lividus eggs, the two phases of the membrane depolarization (i.e., a small step potential at the beginning and the large, long-lasting fertilization potential at the later stage) precisely mirror the sperm-induced Ca 2+ signals, i.e., the CF and the Ca 2+ wave, respectively [58,59]. Given that the CF is largely dependent upon the membrane potential, our data do not rule out the possibility that pretreatment of the eggs with GPN and the other agents disrupting or dislocating CG and acidic vesicles might have changed the resting potential of the eggs and thereby compromised the CF at fertilization. The resolution of this issue awaits future electrophysiological studies.
The common denominator in the eggs pretreated with GPN, procaine, urethane and NH4Cl is that the microvilli were drastically changed in number and shape (Fig. 3, 4 and 5). These changes imply that the actin filaments within the microvilli are drastically rearranged, which may have considerable impact on the generation of the CF. Firstly, the reduction in microvilli number will lead to a reduced presentation of the Ca 2+ channels mediating Ca 2+ influx through the plasma membrane. Secondly, in view of the fact that actin serves as a Ca 2+ buffer and thereby as a diffusion barrier [60-64], irregularly elongated microvilli are expected to impede Ca 2+ influx (Table 1). In support of this idea, hyperpolymerization of the subplasmalemmal actin cytoskeleton with jasplakinolide significantly repressed the amplitude of the CF in A. aranciacus and P. lividus eggs at fertilization [65,66]. Furthermore, the amplitude of the Ca 2+ current through the plasma membrane after the uncaging of NAADP in starfish was also inhibited by actin drugs [35]. Likewise, treatment of A. aranciacus eggs with latrunculin-A, which shifts actin dynamics towards net depolymerization, induces fertilization-like Ca 2+ waves and CF as well as plasma membrane depolarization [8,64,67].
It is worth mentioning that stratification of the P. lividus eggs disclosed the potential existence of a secondary mechanism that augments the CF (Fig. 8 and 9). This delayed Ca 2+ increase was observable only when the eggs were fertilized after stratification. This short-lived Ca 2+ increase lagging behind the CF is discernible only in the cortex of the centrifugal half where YG are concentrated, raising the possibility that these organelles may also contribute to the enhancement of the CF in P. lividus eggs. Since this Ca 2+ puff takes place a few seconds after the CF, and not before, such a Ca 2+ increase temporally linked to the CF is considered distinct from the NAADP-induced Ca 2+ increase that triggers membrane depolarization and Ca 2+ increase. In this context, our experiments with stratified sea urchin eggs may provide new insights into the mechanism of the Ca 2+ increase in fertilized eggs. Contrary to the report that the sperm initiates the Ca 2+ increase only from the centripetal part of the stratified Lytechinus pictus egg, to which the ER and the nucleus migrate [29], we noted that the initiation of the sperm-induced Ca 2+ wave does not discriminate between the egg poles in the stratified eggs of P. lividus (Fig. 9). This discrepancy could be due to species differences or to variation in the experimental conditions, i.e., a longer centrifugation time (30 min as opposed to 8-15 min) to obtain elongated cells in our case. However, even in the study of Lee et al. (2000) [29], when multiple sperm fertilized the stratified eggs, the Hoechst-stained sperm nuclei were evident also in the centrifugal egg pole.
In summary, we have shown that the integrity of the vesicles and CG beneath the plasma membrane, and their physical contact with it, are prerequisites for a normal Ca 2+ response at fertilization of sea urchin eggs. Furthermore, the results obtained from the stratified eggs suggest that the reserve YG [5,29] may also serve as functional Ca 2+ stores substantiating the CF (Fig. 8 and 9). Clarifying the mechanism by which the YG-derived local Ca 2+ increase is functionally linked to the Ca 2+ influx, in either chronological order, would be important for understanding the physiological significance of these non-ER Ca 2+ stores in the egg cortex.
Figure 1.
Figure 1. The cortical flash depends on Ca 2+ influx via L-type Ca 2+ channels. (A) The eggs fertilized in ASW with 1 mM Ca 2+ (brown curves) displayed virtually the same Ca 2+ response as control eggs (ASW with 10 mM Ca 2+ , green curves), but the CFs were severely suppressed. (B-C) P. lividus eggs were pretreated with either diltiazem or verapamil for 40 minutes, and their Ca 2+ responses at fertilization were compared with those of control eggs (FSW) from the same batch. The trajectories of the Ca 2+ levels in the fertilized eggs that had been pretreated with 50 µM diltiazem or 10 µM verapamil are shown as brown curves, whereas the responses in the control eggs are presented as green curves. (D) Dose-dependent effects of diltiazem and verapamil on the Ca 2+ signalling in fertilized eggs. The peak amplitudes of the CF (red line) and the global Ca 2+ increase (blue line) in the eggs pretreated with varied amounts of diltiazem and verapamil were normalized to the corresponding average levels in control eggs of the same batch.
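The normalization described for panel (D) can be written out explicitly. The sketch below is a minimal illustration of that bookkeeping step; the numeric values are hypothetical placeholders, not measured data, and the function name is ours.

```python
# Normalize peak amplitudes of drug-pretreated eggs to the batch-matched
# control average, as described for the dose-response plot.

def normalize_to_control(treated_peaks, control_mean):
    """Return each treated-egg peak as a fraction of the control mean."""
    if control_mean <= 0:
        raise ValueError("control mean must be positive")
    return [peak / control_mean for peak in treated_peaks]

control_mean = 0.14            # hypothetical control CF amplitude (RFU)
treated = [0.12, 0.07, 0.02]   # hypothetical peaks at increasing drug doses
print(normalize_to_control(treated, control_mean))
```

Expressing each treated amplitude as a fraction of the batch control removes batch-to-batch variability before doses are compared.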
Figure 2.
Figure 2. Fertilization of sea urchin eggs pretreated with GPN. P. lividus eggs were fertilized following the pretreatments described in the Materials and Methods. (A) Pseudocolor images of the instantaneous changes of the Ca 2+ signals at several key moments of the propagating wave in representative eggs. The arrow indicates the intracellular Ca 2+ increase at the site of sperm-egg interaction. (B) The quantified Ca 2+ increases in control (green curves) and GPN-pretreated (200 µM, brown curves) eggs in one of four independent experiments. (C) Dose-dependent effects of GPN on the elevation of the FE and polyspermy. Sperm inside the fertilized eggs were visualized with Hoechst 33342 (arrows); scale bar 20 µm. (D) The average numbers of egg-incorporated spermatozoa are given as mean ± SD (bars) and the FE elevation as a frequency (line).
Figure 3.
Figure 3. Effect of GPN pretreatment on the CG and vesicles. (A) P. lividus eggs were stained with LysoTracker-RED before and after (40 min) the treatment with 200 µM GPN or DMSO (control), and viewed by confocal microscopy. The changes of the fluorescent signals in the LysoTracker-RED-stained vesicles were examined in the same eggs 25 minutes after fertilization. Control eggs displayed full elevation of the FE, but most eggs treated with GPN failed at the FE elevation despite the formation of multiple fertilization cones (blue arrowheads). (B) Electron micrographs of the cortex of the eggs treated with 200 µM GPN or DMSO. In control eggs, cortical granules (red arrow) positioned underneath the plasma membrane were all exocytosed into the perivitelline space 5 minutes after fertilization (right panel). In GPN-pretreated eggs, CG appeared to fuse with each other (white arrow) or with the adjacent vesicles (black arrow), and to be displaced from the plasma membrane. After insemination, the FE elevates only partially while some CG fused with other granules (black arrow) are still visible inside the eggs.
Figure 4.
Figure 4. Ultrastructural changes of the egg surface after treatment with GPN, diltiazem and verapamil. (A) SEM images showing the microvilli of unfertilized eggs treated with GPN (200 µM, 40 min) or DMSO. Note the reduced density of microvilli and their elongated shape in GPN-treated eggs in comparison with the control egg exhibiting regularly dispersed and shorter microvilli. (B) Effects of verapamil (50 µM, 40 min) and diltiazem (50 µM, 40 min) on microvilli structure and quantity. Whereas SEM images of the control egg (FSW) display microvilli of regular distribution and length, the microvilli of the eggs treated with the two L-type Ca 2+ channel inhibitors show irregular shapes and reduced quantity (also see Table 1). Scale bar 1 µm.
Figure 5.
Figure 5. Effects of urethane, procaine and NH4Cl on the ultrastructure of the egg surface. (A) TEM and SEM images of P. lividus control eggs show dense microvilli with nearly uniform shape and size. In the TEM image (left panel), CG are apparently attached to the plasma membrane. (B) After urethane treatment (400 mM, 5 min), the egg surface shows signs of corrugation and a significant loss of microvilli (SEM). The treatment induces invaginations on the egg surface and dislocation of some CG from the plasma membrane (TEM, black arrow). (C) Incubation with procaine (10 mM, 20 min) gives rise to microvilli elongation as well as a reduction in their number. Some parts of the cell surface are deprived of microvilli (SEM, right panel). Procaine-pretreated eggs show bulges on the surface (black arrow), and some CG lost their attachment to the plasma membrane (red arrow) to form a secondary layer. (D) NH4Cl treatment (40 mM, pH 9.0, 30 min) causes loss of microvilli and changes in their shape (SEM) as well as translocation of some CG (TEM, red arrows).
Figure 6.
Figure 6. Effects of urethane, procaine and NH4Cl pretreatment on the Ca 2+ signaling at fertilization. (A) Ca 2+ response in P. lividus eggs pretreated with 400 mM urethane for 5 minutes, (B) 10 mM procaine for 20 minutes, and (C) 40 mM NH4Cl (pH 9.0) for 30 minutes prior to fertilization. The Ca 2+ trajectories after fertilization of the same batch of control and treated eggs are presented, and various parameters of the intracellular Ca 2+ increase are shown as histograms of average values from three independent experiments. The experimental data are summarized in Table 2.
Figure 7.
Figure 7. Effects of sea urchin egg stratification on microvilli structure and the egg surface. (A) Intact sea urchin eggs viewed by SEM at low (left panel) and high magnification (right). Note the microvilli covering the surface. (B) Centrifugation stretches the eggs (left panel), corrugates the surface and diminishes the density of microvilli (right panel) (see Table 1). Some of the thickened and elongated microvilli are indicated with arrows. (C) TEM image of a centrifuged egg at the centripetal (left) and centrifugal side (right). While CG are still aligned underneath the plasma membrane on both sides, the cytoplasm at the centripetal and centrifugal sides is predominantly occupied by clear vesicles (CV) and yolk granules (YG), respectively. A minor class of less characterized vesicles enriched with highly electron-dense contents is marked with a red arrow. N, nucleus.
Figure 8.
Figure 8. The Ca 2+ response in the stratified eggs at fertilization. (A) Pseudocolored images showing the CF and the beginning of the sperm-initiated Ca 2+ wave in intact and centrifuged eggs of P. lividus. Note the secondary calcium increase during the CF (at 5.3 sec) in the centrifugal pole of the centrifuged egg (arrow). Scale bar 20 µm. (B) Comparison of various aspects of the Ca 2+ signaling in the intact and stratified eggs. See the Results text for details. (C) Calcium curves at fertilization in intact (green) and stratified eggs (brown) from one batch of experiments, demonstrating the suppression of the Ca 2+ wave in stratified eggs.
Figure 9.
Figure 9. Comparison of the Ca 2+ responses at fertilization in the centrifugal and centripetal halves of the stratified eggs. (A) Pseudocolor images of stratified eggs at fertilization showing the sperm's ability to induce a Ca 2+ increase from both the centripetal (CP) and centrifugal (CF) sides of the eggs. (B) Representative Ca 2+ curves of the sperm-initiated CFs in the centrifugal (blue line) and centripetal (red line) poles of stratified eggs. The histograms represent the average amplitude of the CF in stratified eggs, which was significantly higher in the centrifugal (0.16 ± 0.06 RFU, n=27) than in the centripetal pole (0.11 ± 0.05 RFU, n=27; P<0.001).
Table 1.
Microvilli count on the surface of the P. lividus eggs.
* Microvilli were counted in three separate regions (26.1 µm 2 each) randomly selected from 2 to 4 different eggs for each experiment. n/a, not applicable.
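Since the counts in Table 1 come from fixed-size sampling regions, converting them to a density per µm² is a one-line calculation. The sketch below assumes the 26.1 µm² region size from the footnote; the counts themselves are hypothetical, and the function name is ours.

```python
# Convert raw microvilli counts per sampled region into a mean density
# per square micrometer. Region area (26.1 um^2) is from the table footnote.

REGION_AREA_UM2 = 26.1

def microvilli_density(counts):
    """Mean density (microvilli per um^2) over replicate regions."""
    if not counts:
        raise ValueError("need at least one region count")
    return sum(counts) / (len(counts) * REGION_AREA_UM2)

print(round(microvilli_density([52, 48, 50]), 3))  # three replicate regions
```

Averaging over replicate regions before dividing by the region area gives the same result as averaging the per-region densities, because the regions share a fixed area.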
Table 2.
Intracellular Ca 2+ increase in fertilized sea urchin eggs pretreated with urethane, procaine and NH4Cl.
A Switchable Dual-Mode Actuator Enabled by Bistable Structure
Soft actuators are favored due to their flexibility and adaptability, but are limited to a single actuation mode. Herein, a novel soft actuator with a dual-kinestate performance is proposed. By subtly integrating a bistable compliant mechanism with artificial muscles, the soft actuator is capable of switching between binary motion and continuous motion with accurate output. Functions of overcoming external interference and regulating the snapping time are realized with programmed voltages. Two applications utilizing the switchable kinestate are illustrated, including a mechanical encryption display system with one actuator and an amplitude modulation system with two actuators in parallel. This novel soft actuator exhibits potential applications for the multi-mode actuation of soft robots.
Introduction
A soft bistable actuator utilizing electroactive polymer is a new type of mechatronic device that is self-actuated with two predefined positions. [1] Unlike the conventional soft actuator that is composed of intrinsic soft materials, this type of soft bistable actuator integrates a compliant structure for mechanical motion output and voltage-activated artificial muscle materials for fast actuation. Benefiting from both soft and compliant elements, the soft bistable actuator features large stroke, fast response, and high transient power density. These advantages are favored by new soft robots with unprecedented performances in grasping, [2] swimming, [3] and running. [4] To name a few, by connecting several bistable actuators, a digital-like discretization in a continuous working space is achieved, which, for the first time, links the mechanical system to binary robotics. [5] However, there are challenging issues in actuation considering the integration of both active materials and mechanical structures. For the dielectric elastomer (DE) artificial muscle-based bistable actuator, previous studies have well established a coupled field theory for DE, [6] but there is a lack of theoretical guidance linking the strain in the material to the motion in the mechanical structure. Consequently, to trigger the bistable motion, a high-level pulse voltage, at the risk of electrical breakdown, was applied to the DE to generate sufficient acceleration to induce a moment in a shuttle mass on the tip of the actuator structure. [5,7] This method was obtained after a sophisticated match between the flexible mechanical frames and the hyperelastic DE through extensive experimental combinations. In addition, positions other than the two terminal stable states are not maintainable, as they are unstable states during snapping, which means continuous motion is sacrificed.
To address this challenge, in this article, we present a new type of soft bistable actuator capable of two actuation modes: binary/continuous motion that is switchable on demand upon a voltage. We employ two types of artificial muscles, where the DE is responsible for continuous motion actuation covering the entire half-workspace (from a stable position to the flat state) and the twisted and coiled polymer fibers (TCPFs) are responsible for binary motion actuation that can overcome external interference. Through programmed voltage actuation, we achieve an accurate motion output with high repeated precision in the dual-mode actuation, without causing electrical breakdown. Utilizing this voltage-adjustable performance, we demonstrate two applications in 1) an electromechanically encrypted display with one actuator and 2) an amplitude modulation system with two actuators in parallel.

Design of the Soft Bistable Actuator

Figure 1 is the schematic illustration of the bistable actuator. In the principle design, Figure 1a, we use two kinds of electroactive artificial muscles: a piece of DE and two TCPFs, each of which is responsible for one type of actuation mode. TCPFs are used to control the binary motion due to their large force in one-degree-of-freedom displacement output. [8] DE has a fast response with large in-plane expansion, [9] so we can utilize its stable electromechanical response to control the continuous motion of the actuator. In Figure 1b, by prestretching the DE to a moderate level, we program the stored energy ΔE_total in the actuator, and the assembled actuator bends in its beams to reach a self-stabilized equilibrium state, a stable state. Due to the symmetric design of the structures, two stable states in symmetry are obtained. The beams are the main compliant components in the actuator, whose deployment is characterized by their bending angles. In Figure 1c, the total energy landscape has a typical bistable character with two local minimum valleys and one maximum peak. The valleys are the stable states ② and ③, denoted by the bending angles θ_eq and −θ_eq. The peak represents the energy barrier that the binary motion should overcome when snapping through the unstable state ①. Through applying different voltage and prestretch, we can manipulate the height of the energy barrier and the shape of the landscape (Figure 1c) so that a dual-mode motion in actuation is attained. For example, when powered by DE only, the actuator operates in a continuous motion, following the path of either ② → ① or ③ → ①, in terms of a bending deployment as the DE expands under a voltage. When powered alternatively by TCPFs with linear contraction, the landscape of the energy is altered to lower the energy barrier (ΔE_TCPF) to enable a binary motion following ② ↔ ③. The two motions are switchable when different kinds of artificial muscles are powered. Note that, unlike previous DE-only bistable actuators, [10] here we enable the binary motion with two TCPFs, whose voltage is below 20 V, a safe and available range in most mechatronics.

Figure 1. Illustration of the principle of bimodal switching in the soft bistable actuator. a) The actuator is composed of a piece of DE, two TCPFs, and a compliant structure. b) With a prestretch on the DE, the assembled actuator bends to reach an equilibrium state, a stable state. Due to the symmetry of the structures, two stable states are obtained. c) The total energy landscape has two local minima, corresponding to the stable states ② and ③ denoted by the bending angles θ_eq and −θ_eq. The local peak represents the energy barrier that the bistable motion should overcome when snapping through the unstable state ①. When powered by DE only, the actuator can be operated in a continuous motion, in either direction ② → ① or ③ → ①, by the deployment of the beams in the mechanical structure. When powered selectively by TCPF, the landscape of the energy is altered to lower the energy barrier to enable a reversible binary motion ② ↔ ③. The two motions are switchable on demand.
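The bistable energetics described above can be illustrated with a generic quartic double-well model. The coefficients a and b below are arbitrary and not fitted to the actuator; the sketch only reproduces the qualitative picture of two minima at ±θ_eq and a snapping barrier that an actuation input (here modeled as a reduction of b) can lower.

```python
import numpy as np

def energy(theta, a=1.0, b=2.0):
    """Quartic double-well: E(theta) = a*theta**4 - b*theta**2.
    Minima (the stable states) sit at theta = +/- sqrt(b / (2*a));
    the local maximum at theta = 0 is the unstable flat state."""
    return a * theta**4 - b * theta**2

def barrier(a=1.0, b=2.0):
    """Barrier height between a stable state and the flat state,
    i.e. E(0) - E(theta_eq) = b**2 / (4*a)."""
    theta_eq = np.sqrt(b / (2 * a))
    return energy(0.0, a, b) - energy(theta_eq, a, b)

# Modeling TCPF contraction as a reduction of b both shrinks the
# barrier and pulls the stable angles toward zero, easing the snap.
print(barrier(b=2.0))  # barrier with no actuation
print(barrier(b=1.0))  # lower barrier once a TCPF is powered
```

The same picture motivates the two modes: DE actuation reshapes the well the state currently sits in (continuous motion), while the TCPF lowers the central barrier enough for a snap-through (binary motion).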
Actuation Mode of Continuous Motion
We next verified the proposed scheme by experiments and theoretical study. The materials and fabrication of the actuator are described in the Experimental Section as well as in the Supporting Information. To obtain the relationship between the bending angle and the voltage, we applied a series of triangle waves of voltage (0-5500 V) on the DE, with a rise and decline rate of 366.6 V s−1, while the TCPFs were powered off. The actuator's bending angles in 15 cycles were recorded (Figure 2a). It is well acknowledged that some DE-based actuators using VHB films have a strong time-dependent performance resulting from the viscoelastic deformation in DE. [11] To improve the precision of such a soft actuator, either feed-forward programming or hysteresis compensation has been proposed. [12] Here, we present another, control-free methodology through the mechanical design that attains high precision in the same material. Figure 2b illustrates the repeatability of the actuator in 15 loading-unloading cycles, and the curves are nearly identical. The inset of Figure 2b shows the bending angles at the maximum voltage level, which overlap, indicating good repeatability. The detailed data are collected in Figure 2c at the listed voltage levels, and their standard deviations are analyzed. The maximum standard deviation is 0.34°, which suggests a high precision in repeated positioning (accuracy within 97.95-99.78%). We attribute these advances to the design of the actuator using compliant beams that restrict the deformation of the DE within its stable electromechanical coupling range, which exhibits a quasi-linear actuation strain. The experimental results at 0, 500, 1500, 2500, 3500, 4500, and 5500 V are compared (Figure 2d) with finite element method (FEM) simulation, displaying an ideal coincidence (Movies S1 and S2, Supporting Information). In the FEM study, ABAQUS (version 2017) was used to simulate the stable state and the continuous motion process of the actuator under high voltage. [13] We used the subroutine UHYPER to describe the electromechanical model of the DE. Voltage was programmed in the simulation using the subroutine UDFLD, which is time-dependent. (The details of the FEM and experimental setup are described in the Supporting Information.)
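The repeatability figures quoted above amount to a cycle-to-cycle standard deviation of the bending angle. A minimal sketch follows; the angle samples are made up (the real traces are in Figure 2), and the repeatability percentage uses one plausible definition, 100·(1 − σ/mean), since the paper does not spell out how the accuracy percentage was computed.

```python
import numpy as np

# Hypothetical bending angles (degrees) logged at one voltage level
# over 15 loading-unloading cycles.
angles = np.array([16.2, 16.3, 16.1, 16.4, 16.2, 16.3, 16.2, 16.1,
                   16.3, 16.2, 16.4, 16.2, 16.3, 16.2, 16.1])

std = angles.std(ddof=1)  # cycle-to-cycle standard deviation
repeatability = 100 * (1 - std / angles.mean())

print(f"std = {std:.3f} deg, repeatability = {repeatability:.2f} %")
```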
Actuation Mode of Binary Motion
To characterize the bistable behavior, a binary motion between the two stable states was measured when the actuator was powered by the two TCPFs alternately. A voltage of 17 V (current of 0.105 A) was applied to one TCPF for quick actuation and powered off when the actuator was fully flattened to induce a consequent snap toward the other stable state. With the same actuation and control strategy on the TCPF of the other side, the actuator snapped back, completing one actuation period (Movie S3, Supporting Information). The actuator snapped reversibly in 15 cycles between the two stable states. Because the TCPF has a training cycle, we started the recording from the second cycle. The cycles of the actuation are plotted in Figure 3a.
During the snapping, the actuator self-stabilized after a short damping oscillation because of structural compliance. The peak and stable positions (+ and − for each side) are marked to characterize the damping time in Figure 3b. A mechanical bistable mechanism favors a constant displacement defined by its stable states, but when integrated with DE for actuation, its accuracy has seldom been reported. In Figure 3c, even after 15 cycles, the actuator is able to maintain its accuracy in deflection displacement with a standard deviation of less than 1.06° (i.e., accuracy over 97.3%). In an actuation period, five boundary positions are highlighted: the stable positions at the end points of the binary actuation mode and the unstable equilibrium positions during the snapping (Figure 3d), which validate the proposed scheme.
During these experiments, we found that by regulating the voltage form in one type of active material, the TCPF, we could realize dual-mode motion as well. However, in this case, we failed to stabilize the actuator when it was close to its energy maximum point (the fully flattened state), as a sudden snapping would occur, which means the continuous deformation was unable to cover the whole workspace. Using DE to generate continuous motion is a different scheme: the DE gradually expands under the action of voltage to counteract the force generated by its prestretching, and the internal stress of the actuator gradually disappears until the actuator is fully flattened. Under this scheme, all the angles from a stable position to the flat state are attainable (Figure 2), covering the entire workspace (between one energy-minimum state and the other). Therefore, we use two types of actuation materials for switchable motion performance.
Moreover, when the actuator is in the bistable actuation mode, the TCPF can generate a greater output force under an increased voltage to bear an external load, offering an improved block force at a stable state. This feature enables the actuator to resist external interference, which is hardly achievable in DE-only bistable actuators. To illustrate this ability, we placed a block as a barrier in the snapping path of the actuator, and the actuator could not snap under the same voltage as in the previous bistable actuation. However, when the voltage on the TCPF was increased, the block force at the end of the actuator was improved. Consequently, the barrier was swept away and the actuator was able to snap along the cleared path. In addition, this increased voltage level does not affect the stable-state position of the actuator.
The snapping time of the actuator changes with the voltage applied to the TCPF. A higher voltage on the TCPF promotes speed but can cause thermal failure. When the DE is actuated, it expands and amplifies the bending angle in the structure. Meanwhile, the required deformation from the TCPF is reduced, which accelerates the snapping between the two stable states. Figure 4a illustrates the process of the binary actuation mode with a fixed voltage U2 on the TCPF and an increasing voltage U1 on the DE. Two snapping times are defined in Figure 4b, which are asymmetric due to the intended mismatching of the TCPFs on the two sides of the actuator, so that direction-identified bistable actuation performance is attainable. With the increase of U1, the snapping time ΔT1 declines and then rises, while ΔT2 decreases monotonically (Figure 4c). This result offers an extended application of the bistable actuator, whose snapping performance can be electrically adjusted.
An Electromechanical Encryption Display System
The bending angle of the actuator is only related to the level of voltage, either positive or negative, and the two stable states determine the bending direction of the actuator. Taking advantage of these features, we illustrate an electromechanical encryption display system with two sets of encryption rules (Figure 5a,b) to demonstrate the function of the switchable actuation mode in the actuator. In the electromechanical encryption display system, the actuator is activated by an input voltage and then bends to reflect a laser light to a different display area. We first defined the encryption algorithm (Figure 5c), that is, the relationship between the voltage and the displayed letter. In the encryption algorithm, the positive voltage is programmed by one set of user-defined rules, the negative voltage by another set, and the sign of the voltage determines which set of rules shall be used. When the electromechanical encryption display system receives the voltage signal, an SCM (Arduino MEGA2560) identifies whether the voltage signal is positive or negative. If the voltage is positive, the SCM powers on the left TCPF, and the laser light is reflected to the half-space with platform 1 through the bending of the actuator. If the voltage is negative, the actuator is switched to platform 2. The value of the input voltage then determines the bending angle of the actuator, which can be interpreted by each set of rules. The bending angle and the bending direction of the actuator together decode the incoming information. We name such a display platform an electromechanical encryption display system, and its performance is demonstrated in Figure 5d and Movie S4, Supporting Information.

Figure 5. Demonstration of the switching capability of the soft actuator as an electromechanical encryption display system. a) The experimental setup when the reflected laser light is regulated by the actuator's motion. b) The platform with the actuator. c) The encryption algorithm. In step 1, the sign of the input voltage determines the direction of binary motion; in step 2, the actuator is switched to the continuous motion, and the value of the input voltage determines the bending angle of the actuator, which is listed in the encryption algorithm table. d) The results of the coding and display when the actuator is switched.

Amplitude Modulation Utilizing Two Actuators

Binary actuation mode is first selected for composing the initial configuration of the platform, and the actuators then work in continuous actuation mode for amplitude output. We fixed two actuators in parallel and used two connecting rods to link the ends of the two actuators. The displacement output in the X direction consists of two factors: 1) the bending angle of the actuator and 2) the angle between the connecting rods. Each actuator has two stable states, denoted as Left and Right, so the amplitude modulation mechanism (AMM) has four configurations: Left-Left, Right-Right, Left-Right, and Right-Left (Figure 6). As the Left-Left and Right-Right configurations are symmetrical, their displacement in the X direction is the same, so only three configurations are illustrated. By selecting different configurations, we modulate the output displacement of the hinge under the same input voltage signal form. When the AMM is in the Left-Right configuration, the angle between the connecting rods decreases as the voltage increases and the displacement of the hinge increases, resulting in the largest amplitude (Movie S5, Supporting Information). In the Left-Left configuration, the angle between the connecting rods does not change with voltage, so its amplitude of motion is the medium one (Movie S6, Supporting Information). In the Right-Left configuration, the angle increases as the voltage increases, so its amplitude of motion is the smallest (Movie S7, Supporting Information). As the range of amplitude is related to the distance between the two actuators and the link length, different modulation performances are programmable and deliverable by adjusting the parallel connection parameters D and L in Figure 6.
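Returning to the encryption display described earlier, its two-step decoding logic (the sign of the voltage selects the rule set, the magnitude selects the letter) can be sketched as follows. The rule tables here are invented placeholders, not the rules used in the paper's demonstration.

```python
# Hypothetical rule tables: positive voltages use RULES_POS, negative
# voltages use RULES_NEG; each maps a voltage band (kV) to a letter.
RULES_POS = {(0, 2): "A", (2, 4): "B", (4, 6): "C"}
RULES_NEG = {(0, 2): "X", (2, 4): "Y", (4, 6): "Z"}

def decode(voltage_kv: float) -> str:
    """Step 1: the sign picks the stable state and hence the rule set.
    Step 2: the magnitude sets the bending angle, read off as a letter."""
    rules = RULES_POS if voltage_kv >= 0 else RULES_NEG
    magnitude = abs(voltage_kv)
    for (lo, hi), letter in rules.items():
        if lo <= magnitude < hi:
            return letter
    raise ValueError("voltage outside the encoded range")

print(decode(3.1))   # positive branch, band (2, 4)
print(decode(-1.5))  # negative branch, band (0, 2)
```

In the physical system, the role of `decode` is played by the actuator itself: the SCM routes the sign to one TCPF, and the DE voltage sets the reflection angle of the laser.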
Conclusions
A new soft bistable actuator is designed by coupling two artificial muscles (TCPF and DE) with a flexural structure for switchable dual-mode actuation: binary motion and continuous motion. Through the applied voltage, we manipulate the energy curve by tuning the energy barrier for the actuation strategy. The actuator design is studied by FEM, which guides the fabrication. Experimental results verify the design purpose and a high repeated-position accuracy (>97%) in either motion mode powered by a specific artificial muscle. Applications are illustrated that utilize the switchable actuation mode in a single actuator for an electromechanical encryption display system, as well as an amplitude modulator with two soft actuators in parallel. The study reveals that, through the mechanical design of a compliant structure, a new type of soft actuator with dual kinestates and high-precision motion shall offer new insight into developing high-performance soft robots.

Figure 6. An amplitude modulation platform utilizing two actuators in different configurations. Two actuators are connected in parallel for three configurations, and each configuration has an output amplitude. The binary actuation mode of the actuator is used to determine the configuration, and the continuous motion of the actuator is used to modulate displacement.
Experimental Section
Materials: The materials for fabricating the bistable soft actuator are as follows: DEs (VHB 4910, 3M Company), carbon electrodes (ELASTOSIL 3162/AB, Wacker), nylon fishing line (#6 transparent strand, Φ0.38 mm, NORTH VIKINGS), silver-plated line (140D, SANMAU), polyethylene terephthalate (PET) (0.3 mm), and polymethyl methacrylate (PMMA) (2 and 5 mm). A laser cutting machine (CMA-6040, GD HAN'S YUEMING LASER GROUP) was used to cut the PET and PMMA into the desired shapes. Elements of the actuator before assembly are shown in Figure S7, Supporting Information. The fabrication processes of the TCPF and the DE are described in the Supporting Information.
Methods for Bending Motion Characterization: To control the bistable soft actuator, we used the high-voltage amplifier (AMP-20B20, Matsusada) and DC voltage source (HLR-3660D, Henghui) to realize the continuous motion and the binary motion, respectively. We used two laser displacement sensors (LK-G80, KEYENCE) to record the deformation of the actuator. All other control and recording were done through a DAQ card (USB-6363, NI) on a PC by MATLAB. The Supporting Information contains detailed methods.
Supporting Information
Supporting Information is available from the Wiley Online Library or from the author.
Comparison of Clinical Effects between Percutaneous Transluminal Septal Myocardial Ablation and Modified Morrow Septal Myectomy on Patients with Hypertrophic Cardiomyopathy
Background: Percutaneous transluminal septal myocardial ablation (PTSMA) and modified Morrow septal myectomy (MMSM) are two invasive strategies used to relieve obstruction in patients with hypertrophic cardiomyopathy (HCM). This study aimed to determine the clinical outcome of these two strategies. Methods: From January 2011 to January 2015, 226 patients with HCM were treated, 68 by PTSMA and 158 by MMSM. Both ultrasonic cardiograms and heart functional class were recorded before, after operations and in the follow-up. Categorical variables were compared using Chi-square or Fisher's exact tests. Quantitative variables were compared using the paired samples t-test. Results: Interventricular septal thickness was significantly reduced in both groups (21.27 ± 4.43 mm vs. 18.72 ± 4.13 mm for PTSMA, t = 3.469, P < 0.001, and 21.83 ± 5.03 mm vs. 16.57 ± 3.95 mm for MMSM, t = 10.349, P < 0.001, respectively). The left ventricular outflow tract (LVOT) pressure gradient (PG) significantly decreased after the operations in two groups (70.30 ± 44.79 mmHg vs. 39.78 ± 22.07 mmHg for PTSMA, t = 5.041, P < 0.001, and 74.58 ± 45.52 mmHg vs. 13.95 ± 9.94 mmHg for MMSM, t = 16.357, P < 0.001, respectively). Seven patients (10.29%) in the PTSMA group required a repeat operation in the follow-up. Eight (11.76%) patients were evaluated for New York Heart Association (NYHA) III/IV in the PTSMA group, which was significantly more than the five (3.16%) in the same NYHA classes for the MMSM group at follow-up. Less than 15% of patients in the PTSMA group and none of the patients in the MMSM group complained of chest pain during follow-up. Conclusions: Both strategies can not only relieve LVOT PG but also improve heart function in patients with HCM. However, MMSM might provide a more reliable reduction in gradients compared to PTSMA.
However, experience with myectomy is limited at many centers.
PTSMA, regarded as less invasive than the surgical procedure, is considered an alternative to myectomy and is usually performed on patients who are not optimal surgical candidates or have a strong desire to avoid surgery. [9,10] However, previous studies have also reported a greater need for pacemaker implantation and a higher rate of re-intervention after PTSMA compared to myectomy. [11] Guidelines released by the American College of Cardiology/American Heart Association give a Class I recommendation to myectomy for patients with severe drug-refractory symptoms and LVOT obstruction, in experienced centers with a comprehensive hypertrophic cardiomyopathy (HCM) clinical program (Level of Evidence: C). PTSMA has been given a Class IIa recommendation for adult patients with an unacceptable surgical risk (Level of Evidence: B) in an experienced center. [9] However, data on the effectiveness of these two septal reduction therapies in China are lacking. We report our experience in a comprehensive study of both procedures including periprocedural complications, re-interventions, long-term symptomatic status, and clinical outcome.
Ethical approval
The study was conducted in accordance with the Declaration of Helsinki and was approved by the Ethics Committee of Anzhen Hospital (No. AZ00305). Informed written consent was obtained from all patients or their guardians (in the case of children) before their enrollment in this study.
Patients
Two-hundred and twenty-six patients with HCM (aged >18 years) with significant LVOT obstruction who first visited the Department of Cardiac Surgery of Beijing Anzhen Hospital, Beijing, China, between January 2011 and January 2015 were selected. The exclusion criteria were (1) concomitant moderate or greater aortic/mitral stenosis; (2) maximal (including provokable) LVOT pressure gradient (PG) <50 mmHg (1 mmHg = 0.133 kPa); (3) apical HCM variant; and (4) hypertensive heart disease in elderly patients. HCM was diagnosed by experienced cardiologists based on typical features, with ventricular myocardial hypertrophy (LV wall thickness >15 mm) in the absence of any other disease responsible for the hypertrophy. [9,10] Resting/provokable LVOT obstruction (LVOT gradient >50 mmHg) was also required for inclusion in the study.
Clinical data collection
The clinical data and demographic information were taken from the medical records of each patient including demographics, clinical outcomes, and echocardiographic parameters. Complications including the need for permanent pacing between two groups were recorded. The echocardiographic parameters were analyzed before and after the operation.
Percutaneous transluminal septal myocardial ablation
PTSMA was performed using previously described techniques. [12] Different catheters were inserted into the LV and the aorta to measure pressures and the LVOT PG. Then, another catheter was placed in the selected branch of the left anterior descending artery. After balloon inflation, angiographic contrast was injected through the balloon catheter together with simultaneous transthoracic two-dimensional myocardial contrast echocardiography to determine the extent of the myocardium supplied by the selected septal artery. After delineation of the size of the targeted myocardium, 1-4 ml of alcohol was slowly (1 ml/min) injected. The balloon was left inflated for 10 min after the alcohol injection to prevent a retrograde spill of alcohol. During the procedure, patients without permanent pacemakers received a temporary pacemaker.
Modified Morrow myectomy
Standard cardiopulmonary bypass and myocardial preservation techniques were used. After aortotomy, the resection was started by making two parallel longitudinal incisions in the septum, the first beneath the nadir of the right coronary cusp and the second beneath the commissure between the right and left coronary cusps. The classic incision was extended with a midventricular resection, beginning with continued resection leftward toward the mitral valve annulus and apically to the bases of the papillary muscles. All areas of papillary muscle fusion to the septum or ventricular free wall were divided, and anomalous chordal structures, muscle bundles, and fibrous attachments of the mitral leaflets to the ventricular septum or free wall were divided or excised. [13] PTSMA was usually selected for elderly patients with a high risk from surgical therapy. Patients who needed to receive valvular surgery or a coronary artery bypass operation were excluded from the study. All patients gave informed consent before the respective procedures.
Follow-up study
Physical examination including assessment of New York Heart Association (NYHA) functional class and echocardiography was recommended during follow-up. The follow-up was carried out by subsequent clinic visits to the outpatient departments and telephone interviews with the patients and their relatives.
Statistical analysis
Continuous variables were expressed as mean ± standard deviation (SD), and categorical variables were expressed as frequencies or percentages. SPSS V.22 (SPSS, Inc., IBM, Chicago, IL, USA) was used for the statistical analysis. Categorical variables were compared using Chi-square or Fisher's exact tests. Quantitative variables were compared using the paired samples t-test. A value of P < 0.05 was considered statistically significant for comparison of clinical outcomes.
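The two tests named above can be sketched in a few lines. The per-patient values below are synthetic stand-ins (the paper reports only summary statistics), and the 2x2 counts are illustrative, so the numbers demonstrate the procedure rather than reproduce the study's results.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic paired septal-thickness data (mm): pre-op vs post-op for
# the same patients; the real per-patient values are not published.
pre = rng.normal(21.8, 5.0, size=158)
post = pre - rng.normal(5.3, 2.0, size=158)

# Paired samples t-test on the pre/post measurements.
t_stat, p_paired = stats.ttest_rel(pre, post)

# Chi-square test on a 2x2 table of a categorical outcome
# (e.g., SAM present/absent before vs after); counts are illustrative.
table = np.array([[62, 6],
                  [31, 37]])
chi2, p_chi2, dof, _ = stats.chi2_contingency(table)

print(p_paired, p_chi2)
```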
Baseline clinical profiles
Of the 226 HCM patients, 68 (30.1%) were treated with PTSMA and 158 (69.9%) with MMSM. The patients in the PTSMA group were older than those in the MMSM group. A significantly higher proportion of patients in the MMSM group had hypertension as a presenting feature. No significant differences in other baseline clinical or echocardiographic profiles were observed between the two groups [Table 1].
Echocardiographic outcomes
The ejection fraction pre- and postprocedure was within normal ranges for both groups [Table 2]. The mean LV outflow gradient decreased from 70.30 ± 44.79 mmHg to 39.78 ± 22.07 mmHg in the PTSMA group (t = 5.041, P < 0.001) and from 74.58 ± 45.52 mmHg to 13.95 ± 9.94 mmHg in the MMSM group (t = 16.357, P < 0.001), indicating a significant hemodynamic improvement with both procedures. The residual PGs at rest were significantly lower in the MMSM group than in the PTSMA group after the operation. The septal thickness was reduced from 21.27 ± 4.43 mm to 18.72 ± 4.13 mm in the PTSMA group (t = 3.469, P < 0.001) and from 21.83 ± 5.03 mm to 16.57 ± 3.95 mm (t = 10.349, P < 0.001) in the MMSM group. There was a significant difference in the elimination of systolic anterior motion (SAM) of the mitral valve between the two groups: SAM decreased from 91.18% to 45.59% in the PTSMA group (χ2 = 32.682, P < 0.001) and from 93.67% to 5.06% in the MMSM group (χ2 = 248.141, P < 0.001) [Table 2].
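For context, the relative gradient reductions implied by these group means can be computed directly. This is a back-of-the-envelope calculation from the reported means only, not a per-patient analysis.

```python
def percent_reduction(pre: float, post: float) -> float:
    """Relative drop in the mean LVOT gradient, in percent."""
    return 100 * (pre - post) / pre

# Mean LVOT gradients (mmHg) reported in the text.
ptsma = percent_reduction(70.30, 39.78)
mmsm = percent_reduction(74.58, 13.95)
print(f"PTSMA: {ptsma:.1f} %, MMSM: {mmsm:.1f} %")
```

The roughly 43% versus 81% drop in the mean gradients is one way to read the statement that MMSM provides a more reliable gradient reduction.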
Procedural, clinical outcomes, and follow-up
Four patients in the PTSMA group failed to complete the procedure: two because of multiple side branches, one because of a severe blood pressure drop after balloon inflation, and one because of acute left heart failure during the operation. The mean duration of hospitalization was shorter in the PTSMA group (t = −4.462, P < 0.001). Two patients in the MMSM group needed extracorporeal membrane oxygenation (ECMO) for postoperative low cardiac output. Forty (58.82%) patients in the PTSMA group had a right bundle branch block, compared with 22 (13.92%) patients in the MMSM group (χ2 = 48.142, P < 0.001). Interestingly, left bundle branch blocks were more common in the MMSM group (χ2 = 13.182, P < 0.001). Permanent pacemakers because of complete heart block were required in five patients (7.35%) of the PTSMA group and in four patients (2.53%) of the MMSM group (χ2 = 2.809, P = 0.089) [Table 3].
The mean follow-up was 44.19 ± 15.19 months for the PTSMA group and 38.48 ± 15.93 months for the MMSM group. Six (8.8%) patients in the PTSMA group and 12 (7.6%) patients in the MMSM group died before the long-term follow-up. Two patients in the PTSMA group died of acute heart failure in the early postoperative period (<1 month). Two patients died of low cardiac output in the MMSM group. There was one late death (>12 months) in the PTSMA group, with the patient dying of chronic heart failure. Two patients died 1 year after the procedure in the MMSM group: one of a noncardiac cause and one of stroke. There was no significant difference between the groups in survival. Kaplan-Meier curves demonstrated no difference in long-term survival between the PTSMA and MMSM groups (χ2 = 0.190, P = 0.663 by log-rank test). Seven patients who received PTSMA required a repeat operation after the procedure: two received a second septal myocardial ablation and five received MMSM. Echocardiography demonstrated that the MMSM group had a lower LVOT PG 1 year after the operation [Table 3]. Eight (11.76%) patients were evaluated as NYHA III/IV in the PTSMA group, which was significantly more than the five (3.16%) in the MMSM group at the latest follow-up. Less than 15% of patients in the PTSMA group and none of the patients in the MMSM group complained of any chest pain during follow-up.
Discussion
Hypertrophic cardiomyopathy (HCM) is characterized by hypertrophy of the myocardium and is associated with various clinical presentations ranging from complete absence of symptoms to sudden, unexpected death. LVOT obstruction is present in 20-30% of HCM patients; a number of patients remain symptomatic despite optimal medical therapy, and surgical myectomy is usually recommended in such cases. Myectomy procedures were first reported by Morrow in 1975, and many variations of this procedure have been reported with varied efficacies since. [14,15] Myectomy reduces or eliminates LVOT obstruction in most individuals, and its effects are usually sustained. [15,16] Sigwart reported that inflating an angioplasty balloon catheter in the septal perforator resulted in a significant decrease in LVOT obstruction. [17] Subsequently, intracoronary alcohol injection gained popularity in treating patients with HCM who are refractory to medical therapy. In this study, we report our experience of two strategies for treating refractory HCM.
The two groups of patients involved in this investigation had similar baseline gradients and achieved a similar hemodynamic improvement after both procedures. This might be related to the similar mechanisms for eliminating obstruction that are common to both procedures. As noted previously, a similar reduction in basal septal thickness was observed after both MMSM and PTSMA. These observations are consistent with the mechanisms by which successful septal myectomy relieves LVOT obstruction.
The two patient groups had similar baseline NYHA and Canadian Cardiovascular Society classes, and a similar number in each group suffered from syncope at baseline examination. After surgery, nearly 97% of the patients were in NYHA Class I or II, and after PTSMA, 88% of the patients were in these two classes. Angina disappeared in the MMSM group after surgery; however, 10 (14.71%) patients in the PTSMA group still suffered relevant symptoms. Complete heart block necessitating permanent pacing occurred in 7.35% of patients in the PTSMA group, which was higher than in the MMSM group (2.53%), but the difference was not statistically significant.
Two deaths occurred in the PTSMA group, both from acute left heart failure after balloon inflation, possibly reflecting heart failure after extensive myocardial infarction. As expected, percutaneous interventions in older patients carry more risk because of the presence of atherosclerotic lesions. Two deaths were observed in the MMSM group because of postoperative low cardiac output syndrome.
Seven patients in the PTSMA group required second interventions because of recurrent symptoms after the procedure, whereas none in the MMSM group did. Two patients chose repeated PTSMA because they refused a thoracotomy, and five patients underwent the MMSM procedure. These results indicate that MMSM is a more reliable strategy for treating LVOT obstruction.
Like most studies on HCM, this was a retrospective, nonrandomized study, with the limitations inherent in this type of study. Although the PTSMA and MMSM groups were well matched in terms of baseline clinical profiles, there may have been bias in how patients came to select MMSM instead of PTSMA. In addition, only NYHA class and angina status were recorded at follow-up.
In conclusion, both the PTSMA procedure and the MMSM procedure can reduce LVOT obstruction and alleviate symptoms in patients with HCM. However, the MMSM is superior to the PTSMA in reducing the LVOT gradient and alleviating the symptoms associated with HCM.
Financial support and sponsorship
This work was supported by grants from the National Natural Science Foundation of China (No. 81370328, and No. 81770371).
Conflicts of interest
There are no conflicts of interest.
Association of oestrogen receptor beta 2 (ERβ2/ERβcx) with outcome of adjuvant endocrine treatment for primary breast cancer – a retrospective study
Background Oestrogen receptor beta (ERβ) modulates ERα activity; wild-type ERβ (ERβ1) and its splice variants may therefore impact on hormone responsiveness of breast cancer. ERβ2/ERβcx acts as a dominant negative inhibitor of ERα and expression of ERβ2 mRNA has been proposed as a candidate marker for outcome in primary breast cancer following adjuvant endocrine therapy. We therefore now assess ERβ2 protein by immunostaining and mRNA by quantitative RT-PCR in relation to treatment outcome. Methods ERβ2-specific immunostaining was quantified in 141 primary breast cancer cases receiving adjuvant endocrine therapy, but no neoadjuvant therapy or adjuvant chemotherapy. The expression of mRNA for ERβ2/ERβcx was measured in 100 cases by quantitative RT-PCR. Statistical analysis of breast cancer relapse and breast cancer survival was performed using Kaplan-Meier log-rank tests and Cox's univariate and multivariate survival analysis. Results High ERβ2 immunostaining (Allred score >5) and high ERβ2 mRNA levels were independently associated with significantly better outcome across the whole cohort, including both ERα-positive and ERα-negative cases (Log-Rank P < 0.05). However, only ERβ2 mRNA levels were significantly associated with better outcome in the ERα+ subgroup (Log-Rank P = 0.01) and this was independent of grade, size, nodal status and progesterone receptor status (Cox hazard ratio 0.31, P = 0.02 for relapse; 0.17, P = 0.01 for survival). High ERβ2 mRNA was also associated with better outcome in node-negative cases (Log Rank P < 0.001). ERβ2 protein levels were greater in ERα-positive cases (T-test P = 0.00001), possibly explaining the association with better outcome. Levels of ERβ2 protein did not correlate with ERβ2 mRNA levels, but 34% of cases had both high mRNA and protein and had a significantly better outcome (Log-Rank relapse P < 0.005). Conclusion High ERβ2 protein levels were associated with ERα expression.
Although most cases with high ERβ2 mRNA had strong ERβ2 immunostaining, mRNA levels but not protein levels were independently predictive of outcome in tamoxifen-treated ERα + tumours. Post-transcriptional control needs to be considered when assessing the biological or clinical importance of ERβ proteins.
Background
Oestrogen Receptor alpha (ERα) is an accepted prognostic marker in breast cancer and is used to plan adjuvant endocrine treatment (e.g. use of the anti-oestrogen tamoxifen). The majority of the breast cancers are positive for ERα (ERα+), but not all patients with ERα+ cancer respond to endocrine therapy and many subsequently succumb to local relapse or metastasis. The failure of some breast cancers to respond to tamoxifen, currently the most common adjuvant endocrine treatment, is a major clinical problem and several resistance mechanisms have been elucidated [1,2].
ERβ and its splice variants are differentially expressed in a variety of normal tissues and cancers including breast [3,4], but not all published studies agree about the role of ERβ isoforms in breast cancer [4-13]. The ERβ2/ERβcx variant arises from alternative splicing of the last ERβ exon. This produces a truncated ERβ protein unable to bind oestradiol as a result of a disorientated helix 12 in the ligand binding domain [14,15]. ERβ2 acts as a dominant negative modulator of ERα [14,16] and therefore might be expected to have a protective effect in breast tumorigenesis or outcome, at least for ERα+ breast cancer.
Further verification of ERβ variants, including ERβ2, as potential clinical markers is still required. Many previous studies make use of mRNA levels as a surrogate marker for ERβ protein expression and few have attempted to relate mRNA to protein levels. Other studies that do assess the expression of ERβ protein use techniques that rely on detection of N-terminal epitopes that are shared by most variants. A good proportion of studies also fail to take into account menopausal status, stage of the disease or the treatment given.
We have previously identified ERβ2 mRNA levels as being more closely associated with treatment outcome than mRNA levels of ERβ1 or ERβ5 [17] in a treatment-specific cohort of postmenopausal women receiving adjuvant endocrine treatment but not chemotherapy. However in the same setting, mRNA levels for the wild-type ERβ1 isoform do not correlate well with protein levels [9]. With the aim of clarifying the significance of ERβ2 expression in tamoxifen response and investigating the relationship between ERβ2 mRNA and protein levels, we have therefore set out to evaluate both expression of ERβ2 protein by immunostaining and expression of ERβ2 mRNA by quantitative RTPCR (qRTPCR). Our hypothesis was that ERβ2 may be associated with outcome following adjuvant tamoxifen treatment of breast cancers and therefore be useful as a predictive marker or give some insight into mechanisms of resistance.
We were able to confirm a significant association of both high ERβ2 protein and high mRNA levels with good outcome, but ERβ2 protein levels were not a useful marker of outcome. Strong ERβ2 staining was associated with better outcome, but not independently of ERα. Although ERβ2 mRNA and protein levels did not correlate with each other, approximately one third of cases (34%) were seen to have both high mRNA and high protein levels; these had a significantly better outcome than the other cases, so ERβ2 protein may have a role in improved outcome for a subset of breast cancers.
Patients and specimens
Patients undergoing treatment for invasive breast cancer between 1993 and 1999 at the Royal Liverpool University Hospital were identified from a database at the Cancer Tissue Bank Research Centre (CTBRC), University of Liverpool [9]. A total of 141 postmenopausal patients (Table 1) with primary breast cancer were selected; the median age was 68 years (range 47-87). They had been treated by surgery (47 mastectomy, 94 wide local excision) and radiotherapy (70 cases), but had not received systemic chemotherapy or primary endocrine therapy. All patients received adjuvant endocrine therapy, either tamoxifen (n = 133) or as part of the ATAC trial (n = 8). Clinical and histological characteristics are summarized in Table 1. ERα and progesterone receptor (PgR) status was obtained from review of histopathology notes or determined immunohistochemically using a cut-off of 10% positive cells [9]. Ki67 immunostaining was reported previously as % positive tumour cells [9]. Clinical follow-up data were recorded by retrospective case-note review, with data from surviving patients censored at the date last seen. Outcome measures were breast cancer relapse (BCR) and breast cancer survival (BCS). Median follow-up was 71 months for BCR (range 9 to 113) and 79 months for BCS (range 11 to 113). Ethical approval for the study was obtained from The Liverpool Adult Research Ethics Committee (Reference 01/116), which also approved the collection of samples by the CTBRC with informed consent.
Based on estimates of proportions of ERβ2 positive cases from our previous study [17] and available outcome data, we determined that this study would have 80% power with an α value of 0.05 to detect a hazard ratio below 0.73 or above 1.40 in the whole cohort (below 0.63 or above 1.74 in the ERα + cohort), which we considered appropriate to give an indication of clinical utility.
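Power calculations of this kind typically reduce to Schoenfeld's approximation for the number of *events* required to detect a given hazard ratio with a log-rank test. The sketch below is a hedged illustration under the assumption of equal allocation; the PS program cited above may use a more refined implementation, so the numbers are indicative only.

```python
from math import log
from statistics import NormalDist

def events_required(hazard_ratio, alpha=0.05, power=0.80, p=0.5):
    """Schoenfeld's approximation: number of events (not patients)
    needed for a two-sided log-rank test to detect `hazard_ratio`,
    with proportion `p` of subjects in one group."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return (z_alpha + z_beta) ** 2 / (p * (1 - p) * log(hazard_ratio) ** 2)

# Events needed at the paper's quoted boundary hazard ratios
print(round(events_required(0.73)))  # whole-cohort boundary
print(round(events_required(0.63)))  # ERα+ cohort boundary
```

A smaller detectable hazard ratio in the ERα+ subgroup (0.63 vs 0.73) requires fewer events per the formula, consistent with the smaller subgroup supporting only a wider detectable-effect range.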
Immunohistochemistry
Histological sections (4 µm) were cut from the formalin-fixed, paraffin-embedded specimens and placed onto 3-aminopropyltriethoxysilane-coated slides, and endogenous peroxidase activity was blocked using 3% (v/v) hydrogen peroxide. Tissues were subjected to antigen retrieval by microwaving for 10 minutes in antigen unmasking solution (H3300, Vector Laboratories Ltd., Peterborough, UK). Slides were pre-incubated in Protein Block Serum-Free (DakoCytomation, Ely, UK) for 10 minutes. Immunostaining for ERβ2 was performed overnight at 4°C with mouse anti-human ERβ2 monoclonal antibody MCA2279S (clone no 57/3, Serotec; raised to the unique C-terminal region [18] and previously used for breast tumour staining [4]), diluted 1 in 25 in 0.1% (w/v) BSA, 50 mM Tris, 15 mM NaCl, pH 7.6. The bound antibodies were detected using the DAKO LSAB2 system, according to the manufacturer's recommendations (DakoCytomation), and visualized as a brown stain by incubating with DAB chromogen (Sigma-Aldrich, Gillingham, UK). Sections were counterstained with Mayer's Haematoxylin (Sigma-Aldrich) and mounted in DPX (Merck, Dorset, UK). In controls, the ERβ2 antibody was preincubated with a molar excess of the immunising synthetic peptide (CMKMETLLPEATMEQ [18]) prior to application to sections from positively staining specimens. Nuclear staining was abolished in these blocked controls, but some cytoplasmic staining remained. Scoring of tumour sections was performed for nuclear staining only. Stained slides were analysed independently by two observers (RV and VA) using light microscopy; the percentage of positively stained malignant cells was estimated (%+), as was the staining intensity, and an immuno-score was calculated according to the Allred system [19].
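The Allred score referenced above combines a proportion score (0-5) with an intensity score (0-3), giving a total of 0-8. A minimal sketch follows; the bin boundaries are taken from the standard published Allred system and are an assumption here, since this excerpt does not restate them.

```python
def allred_score(percent_positive, intensity):
    """Allred immuno-score = proportion score (0-5) + intensity score (0-3).
    Proportion bins (assumed standard Allred convention):
    0 = none, 1 = <1%, 2 = 1-10%, 3 = 11-33%, 4 = 34-66%, 5 = >66%."""
    if not 0 <= percent_positive <= 100:
        raise ValueError("percent_positive must be 0-100")
    if intensity not in (0, 1, 2, 3):
        raise ValueError("intensity must be 0, 1, 2 or 3")
    bins = [(0, 0), (1, 1), (10, 2), (100 / 3, 3), (200 / 3, 4), (100, 5)]
    proportion = next(s for cutoff, s in bins if percent_positive <= cutoff)
    return proportion + (intensity if percent_positive > 0 else 0)

# The study's dichotomy: Allred score <= 5 is "ERβ2 low", >= 6 is "ERβ2 high"
print(allred_score(70, 2))  # 7 (proportion 5 + intensity 2): "high"
print(allred_score(5, 2))   # 4 (proportion 2 + intensity 2): "low"
```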
qRTPCR
RNA of suitable quality for 100 cases was obtained from the CTBRC; testis RNA (Promega, Southampton, UK) and MCF7 cell line RNA were used as positive controls. Cases were selected for RNA analysis following independent histological review of adjacent sections, so as to avoid high levels of tissue heterogeneity. Samples from all cases consisted of at least 75% tumour cells and 67% of cases had at least 90% tumour cells. Inflammatory infiltrates were present in a minority of cases (at 10% in 15 cases and at 25% in 4 cases).
Reverse transcription was performed in duplicate as described previously with oligo-dT primers [17], but using 1.5 µg total RNA and Superscript III reverse transcriptase (Invitrogen, Paisley, UK). Quantitative PCR for ERβ2 was performed on a Bio-Rad iCycler real-time PCR machine (Bio-Rad Laboratories Ltd., Hertfordshire, UK) using 4 µl of a 1/2 dilution of cDNA per reaction (equivalent to cDNA from approximately 150 ng of total RNA). ERβ2 PCR reactions included 1× IQ Supermix (Bio-Rad), PCR primers and a Taqman probe (as given in Table 2). For control gene PCR (HPRT, GAPDH) and ERα, 4 µl of a 1/50 dilution was used and the reaction contained IQ SYBR Green Supermix (Bio-Rad). The PCR reactions consisted of a hot-start Taq polymerase activation step of 95°C for 3 minutes, followed by conditions shown to produce unique, specific bands for each mRNA (Table 2). Expression levels of mRNA for each gene were calculated using standard curves produced with the relevant cloned cDNAs and correcting for the control genes (HPRT and GAPDH). All amplicons crossed introns to avoid amplification of genomic DNA and the identity of PCR products was confirmed by agarose gel electrophoresis and DNA sequence analysis, as described previously [17].
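The standard-curve quantification described above (absolute copy numbers from a dilution series of cloned cDNA, corrected against control genes) can be sketched as follows. The dilution series and Ct values here are hypothetical illustrations, not the study's data.

```python
from math import log10
from statistics import mean

def fit_standard_curve(copies, cts):
    """Least-squares fit of Ct = slope * log10(copies) + intercept
    from a serial-dilution standard series of cloned cDNA."""
    xs = [log10(c) for c in copies]
    x_bar, y_bar = mean(xs), mean(cts)
    slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, cts))
             / sum((x - x_bar) ** 2 for x in xs))
    return slope, y_bar - slope * x_bar

def quantify(ct, slope, intercept):
    """Invert the standard curve: starting copies for an observed Ct."""
    return 10 ** ((ct - intercept) / slope)

# Hypothetical 10-fold dilution series (a near-100%-efficiency assay has a
# slope close to -3.32; these exact numbers are illustrative only)
copies = [1e6, 1e5, 1e4, 1e3]
cts = [15.0, 18.3, 21.6, 24.9]
slope, intercept = fit_standard_curve(copies, cts)

target = quantify(20.0, slope, intercept)    # e.g. ERβ2 well
control = quantify(17.0, slope, intercept)   # e.g. HPRT well, same sample
print(target / control)                      # control-gene-corrected level
```

Dividing the target-gene estimate by the control-gene estimate is one simple form of the "correcting for the control genes" step; using two control genes, as in the study, would typically normalize against their mean.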
Statistical analysis
Power calculations were performed using the PS program [20] with the survival analysis implementation of Schoenfeld and Richter [21]. All other statistical analyses were performed using the SPSS package (Windows, v.11). The degree of agreement for immunostaining between observers was assessed using the Kappa statistic. Pearson correlation and Spearman's rank correlation were used as appropriate.
Results and Discussion
Previous assessments of the role of ERβ2 in breast cancer treatment outcome have been limited, with most clinical studies being performed in broader groups of patients and focussing on other associations, largely related to pathology. Our own previous data [17] was based on a semiquantitative RTPCR analysis using an assay in which ERβ5 is co-amplified with ERβ2 and distinguished based on size of the PCR product, similar to the triple-primer assay used elsewhere [22,23]. We found that, using the arbitrary cutoff imposed by detection sensitivity, ERβ2 mRNA expression was more closely associated with survival benefit than ERβ1 or ERβ5 mRNA expression. We therefore set out to establish whether ERβ2 protein levels similarly predicted patient outcome. We defined discriminatory cutpoints of ERβ2 levels in a non-arbitrary manner, using ROC analysis, and used these to assess the relationship between ERβ2 expression and outcome.
Immunohistochemical staining for ERβ2
A cohort of 141 cases was stained by immunohistochemistry for ERβ2 (Table 1). ERβ2 immunostaining significantly correlated with that for ERα (%+ Pearson 0.42, P = 7.8 × 10⁻⁷; Allred Spearman 0.40, P = 4.1 × 10⁻⁶) and, to a lesser extent, PgR (%+ Pearson 0.18, P = 0.035). ERβ2 immunostaining was greater in ERα+ cases (mean %+ = 69) than in ERα− cases (mean %+ = 52; P = 0.00001, T-test) and the ERβ2 Allred score was greater in PgR+ cases than PgR− cases (P = 0.033, MW). The percentage of ERβ2-positive cells was somewhat lower in grade 3 cases (P = 0.042, MW), in keeping with the association with ERα status. There was no association with Ki67 staining, vascular invasion, nodal status, age or size, or with ERβ1-specific immunostaining [9]; most previous studies have similarly failed to show clear links to many clinical and pathological parameters.
The association seen here between ERβ2 and ERα has not always been seen by others. Although case selection and clinical setting may have some bearing on this, it is also possible that such correlations are due to better tissue preservation of antigens in some blocks of tissue. We do not think that this is the case here, as in the same cohort ERα but not ERβ2 inversely correlated with p53 immunostaining (unpublished data) and ERα did not correlate with ERβ1 [9]. If antigen preservation was a major influence on immunostaining patterns it is unlikely that such complex inter-relationships would be evident.
Association of ERβ2 protein with patient survival
Using the Allred scoring system, tumours were designated as either ERβ2 low (score 5 or lower, n = 39) or ERβ2 high (score 6 or higher, n = 97, 71%). ERβ2 status significantly associated with ERα status (P = 0.001 Chi square) and within the subgroup of ERα positive women who received adjuvant tamoxifen there were 18 ERβ2 low cases and 67 ERβ2 high cases (79%).
Within the group as a whole (ERα+ and ERα− cases), high ERβ2 protein levels were significantly related to better relapse-free survival (BCR P = 0.049, Log Rank, Figure 2), but not breast cancer survival (BCS P = 0.16, Figure 3). However, in both cases the survival curves converge at later time-points; with shorter follow-up time a stronger relationship with outcome was seen. One previous study of only 50 ERα-positive cases using immunostaining with a different antibody raised to the same ERβ2-specific epitope [7] similarly failed to show any predictive association with adjuvant tamoxifen treatment. However, this analysis was based on detecting differences in staining between "sensitive" and "resistant" cases using the crude measure of relapse within 5 years of tamoxifen therapy. Unpublished observations [12] also failed to show any predictive value in an adjuvant setting, and a similar lack of association between ERβ2 immunostaining and outcome has recently been demonstrated in the neoadjuvant setting [4]. Hence the early outcome benefit seen with strong ERβ2 immunostaining was not identified previously. However, an association of ERβ2 protein with a favourable outcome has been seen in a metastatic and locally advanced setting [10]. In this case, not only was the clinical setting different, but ERβ2 was assessed by western blot. The present study is therefore the largest to date to assess immunostaining of ERβ2 as a predictive marker of outcome in the postmenopausal, adjuvant endocrine setting. The results indicate that ERβ2 protein levels did not relate closely to outcome for ERα+ cases; rather, there was some association of ERβ2 immunostaining with better outcome in broader cohorts of patients (including ERα− cases), due in part to a correlation between ERα and ERβ2 protein levels.
Association of ERβ2 mRNA with patient survival
The ERβ2 immunostaining results are at odds with our previous semi-quantitative RTPCR results. We therefore performed a repeat RTPCR analysis on a larger series of patients, but with fully quantitative RTPCR using independent cDNA synthesis reactions and different splice-variant-specific PCR conditions. A subgroup of 100 cases (Table 1) with suitable-quality mRNA available was used in qRTPCR for ERβ2, ERα and the control genes HPRT and GAPDH. Expression of ERβ2 mRNA (mean 0.006 attomoles per µg total RNA) was significantly lower (P < 10⁻⁶, paired T-test) than that of ERα (mean 25 attomoles per µg total RNA). These low levels of ERβ mRNA (also seen with ERβ1 and ERβ5, results not shown) may contribute to technical difficulties in reproducibly measuring ERβ variants and hence to the lack of consistency between different studies.
In the 100-case (ERα+ and ERα−) qRTPCR cohort, these two measures of ERβ2 expression seemed to behave differently in relation to ERα status and treatment outcome. Further outcome analysis was limited to ERα-positive women who received adjuvant tamoxifen and had a defined breast cancer-related outcome (n = 62 BCR, n = 58 BCS). High grade (BCR P = 0.006, BCS P = 0.0008) and positive nodal status (BCR P = 0.003, BCS P = 0.007) maintained their association with worse outcome (Log Rank). ROC plots for BCR and BCS at 5 years indicated a significant relationship between good outcome and high qRTPCR values for ERβ2 (BCR area under curve 0.68, CI 0.52-0.84, P = 0.036) and the optimal cut-point was 0.0039 attomoles per µg total RNA. There were significant associations between outcome and ERβ2 mRNA level using the ROC-derived cut-point (Figures 2 and 3). High ERβ2 mRNA was significantly associated with better outcome (BCR P = 0.0095 Log Rank, HR 0.32, CI 0.13-0.79; BCS P = 0.011 Log Rank, HR 0.25, CI 0.08-0.79). The 5-year cumulative relapse-free proportion was 81% in the ERβ2-high group (standard error 8%), compared with 55% in the ERβ2-low group (standard error 10%); the 5-year cumulative BCS was 89% in the ERβ2-high group (standard error 6%), compared with 62% in the ERβ2-low group (standard error 10%).
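Deriving an optimal cut-point from an ROC plot is commonly done by maximising Youden's J (sensitivity + specificity − 1). The sketch below uses hypothetical marker values; the paper does not state which optimality criterion it used, so Youden's index is an assumption of this illustration.

```python
def youden_cutpoint(values, good_outcome):
    """Choose the threshold maximising Youden's J = sensitivity +
    specificity - 1, treating high marker values as predicting a good
    outcome (1 = e.g. relapse-free at 5 years, 0 = relapse)."""
    pos = [v for v, o in zip(values, good_outcome) if o == 1]
    neg = [v for v, o in zip(values, good_outcome) if o == 0]
    best_j, best_t = -1.0, None
    for t in sorted(set(values)):
        sensitivity = sum(v >= t for v in pos) / len(pos)
        specificity = sum(v < t for v in neg) / len(neg)
        j = sensitivity + specificity - 1
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j

# Hypothetical ERβ2 mRNA levels (attomoles per µg total RNA) and outcomes
values = [0.001, 0.002, 0.003, 0.004, 0.005, 0.008, 0.010, 0.002]
good = [0, 0, 0, 1, 1, 1, 1, 1]
print(youden_cutpoint(values, good))  # best threshold and its J
```

As the text notes, such dichotomization is convenient for Kaplan-Meier analysis but discards information relative to treating the mRNA level as a continuous covariate.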
Our results indicate that ERβ2 isoform mRNA may be an independent marker for ERα+ cases that respond well to adjuvant tamoxifen treatment. In node-negative cases, where the need for additional markers of response is greatest, our study shows that low ERβ2 mRNA levels are significantly related to worse outcome; as the number of cases in this subgroup analysis was small, a larger study of node-negative patients is warranted. The fully quantitative nature of the qRTPCR results allows comparison of mRNA levels between different ER isoforms and of variant levels between tumours, but necessitated selection of optimal cut-points (in this case using ROC analysis) for the dichotomization required for standard outcome analysis. It should be noted that, whilst such dichotomization is useful in demonstrating associations with outcome, the true utility of ERβ variant mRNA measurement will only be demonstrated with larger patient cohorts and may be better achieved by treating mRNA quantitation as a continuous variable, as in other RTPCR-based outcome predictors [24].
Association of staining for ERβ2 protein with mRNA expression
Associations between high levels of ERβ2 protein (immunoscore) or mRNA (qRTPCR) and improved outcome have been seen, but only the qRTPCR results are significant in the clinically relevant ERα + cohort. It is therefore important to establish the relationship between mRNA and protein levels in clinical samples. Notably, most previous RTPCR-based analyses have failed to take into account the possible translational control when assigning biological or clinical relevance to ERβ isoform expression.
When assessing the relationship between immunostaining and qRTPCR for paired samples from each case, no correlation was seen between levels of protein and mRNA for ERβ2 [Pearson (%+) -0.12, P = 0.24; Spearman (Allred) -0.08, P = 0.40]. This is in contrast to ERα in the same cohort [Pearson (%+) 0.30, P = 0.003; Spearman (Allred) 0.50, P = 1.0 × 10⁻⁶], but a similar lack of correlation was seen previously for ERβ1 [9]. Owing to tissue heterogeneity, any mRNA analysis of tissue homogenates without selection can contribute to discordance with immunostaining results that are scored on specific cell types. In order to minimise the impact of such artefacts, we selected cases for mRNA analysis that had high proportions of tumour cells (see Methods). It is known that lymphocytes express ERβ2 mRNA, but when 14 cases with inflammatory infiltrates were excluded there was still no significant correlation between ERβ2 mRNA and protein expression. A major factor in the discordance is that many cases express high levels of protein but low mRNA levels, a situation that is not likely to arise from expression of mRNA in non-tumour cells. It is, however, possible that heterogeneity of expression in the different parts of the tumour specimen used for mRNA and protein analysis contributes to the lack of correlation, and in situ analysis of mRNA and protein in adjacent sections might address this. Although ERβ2 protein levels are apparently not directly related to mRNA levels, expression of ERβ2 protein may be important because good outcome was observed for those cases assessed as having both high mRNA and protein levels, and this was independent in multivariate analysis. It is possible therefore that the relatively poor utility of ERβ2 protein assessment by immunostaining as a measure of outcome prediction may be due to high levels of ERβ2 protein in some cases (with lower levels of ERβ2 mRNA) being related to some form of protein stabilization, or to detection of inactive ERβ2.
The disparity between protein and RNA expression for ERβ2 is even suggestive of an inverse relationship. Nevertheless, a significant proportion of cancers (34%) had both high protein and high mRNA levels, and these had a significantly better outcome than the remaining cases. This suggests that transcription of ERβ2 mRNA drives ERβ2 protein levels in some cases, and that these cases do particularly well on tamoxifen treatment. It is perhaps unsurprising that previous studies of ERβ2 protein expression did not find significant associations between ERβ2 and outcome in ERα+ tamoxifen-treated cases, as they did not include measurement of ERβ2 mRNA levels. They were thus unable to distinguish between ERβ2 protein associated with increased transcription and that possibly present due to some form of post-transcriptional control (or perhaps the breakdown of normal control).
Conclusion
Whilst our data would suggest that high ERβ2 levels could contribute to an improved outcome in a subgroup of patients, it provides further evidence that determination of ERβ2 protein by immunostaining is unlikely to provide the predictive test that is needed for better targeting of additional therapy in those women for whom adjuvant tamoxifen is not likely to be sufficient. The failure to link protein expression to outcome measures does not preclude the use of ERβ2 mRNA levels in a clinical setting. Low ERβ2 mRNA was significantly associated with worse outcomes in ERα + tamoxifen-treated patients independently of other factors such as grade and nodal status. Larger trials to validate ERβ2 mRNA as a biomarker are needed and should be extended to alternative adjuvant endocrine therapies such as aromatase inhibitors.
analysis. MD, DRS and CH participated in conception, design and coordination of the study. All authors helped to draft the manuscript and approved the final version.
Evaluation of a receptor gene responsible for maternal blood IgY transfer into egg yolks using bursectomized IgY-depleted chickens
In avian species, maternal immunoglobulin Y (IgY) is transferred from the blood to the yolks of maturing oocytes; however, the mechanism underlying this transfer is unknown. To gain insight into the mechanisms of maternal IgY transfer into egg yolks, IgY-depleted chickens were generated by removing the bursa of Fabricius (bursectomy) during egg incubation, and their egg production and IgY transport ability into egg yolks were determined. After hatching, blood IgY concentrations of the bursectomized chickens decreased gradually until sexual maturity, whereas those of IgA remained low from an early stage of growth (from at least 2 wk of age). Chickens identified as depleted in IgY through screening of blood IgY and IgA concentrations were raised to sexual maturity. At 20 wk of age, both blood and egg yolk IgY concentrations in the IgY-depleted group were 600-fold lower than those of the control group, whereas egg production did not differ between the groups. Intravenously injected, digoxigenin-labeled IgY uptake into the egg yolk was approximately 2-fold higher in the IgY-depleted chickens than in the controls, suggesting that IgY depletion may enhance IgY uptake in maturing oocytes. DNA microarray analysis of the germinal disc, including the oocyte nucleus, revealed that the expression levels of 73 genes were upregulated more than 1.5-fold in the IgY-depleted group, although we could not identify a convincing candidate gene for the IgY receptor. In conclusion, we successfully raised IgY-depleted chickens presenting a marked reduction in egg yolk IgY. The enhanced uptake of injected IgY into the egg yolks of the IgY-depleted chickens supports the existence of a selective IgY transport mechanism in maturing oocytes and ovarian follicles in avian species.
INTRODUCTION
Immunoglobulin Y (IgY), the functional equivalent of mammalian immunoglobulin G, is present in avian egg yolks and plays a crucial role in the protection of newly hatched chicks against infectious pathogens (Kowalczyk, et al., 1985). The process of avian maternal IgY transfer comprises 2 steps, the first being the transfer from circulating maternal blood to the yolks of maturing oocytes in ovarian follicles, whereas the second involves IgY transfer from the egg yolks to the embryonic circulation through the yolk sac membrane (Linden and Roth, 1978; Tressler and Roth, 1987). Whereas the second step relies on the IgY Fc receptor FcRY (West, et al., 2004), the receptor mediating the first step is unknown. In our previous study, we observed that the Fc domain of IgY was essential for effective IgY transport into egg yolks (Kitaguchi, et al., 2008). A study using quail IgY Fc and chicken IgY Fc mutants showed that a single substitution, namely that of the Tyr363 residue located on the Fc domain to Ala363, greatly impairs IgY transport into egg yolks. In addition, the absence of an N-glycosylated carbohydrate chain at Asn407 on the Fc domain also reduces IgY transport into egg yolks (Takimoto, et al., 2013). These results support the existence of a specific IgY receptor mediating blood IgY uptake into egg yolks.
The absence of endogenous IgY provides a suitable model to gain insight into how IgY is incorporated into egg yolks. Although in mammals the bone marrow is the source of B-dependent lymphocytes for immunoglobulin production, immunoglobulin gene rearrangement for B lymphocyte production in birds takes place in the bursa of Fabricius, located between the cloaca and sacrum (Glick, et al., 1956; Scott, 2004). Surgical removal of the bursa of Fabricius before or after hatching, called bursectomy, depletes B cells in peripheral lymphoid organs and practically abolishes IgY production (Cooper, et al., 1969). Yasuda et al. (1998) succeeded in obtaining sexually mature, IgY-depleted chickens after performing bursectomy on day 18 of incubation, and these IgY-depleted chickens produced eggs with very low IgY levels. The IgY concentration in the egg yolks of IgY-depleted chickens was 2 mg/mL of yolk, which is 1,000-fold lower than that in IgY-producing chickens. Because IgY-depleted chickens are viable, evaluation of blood IgY transport ability into egg yolks can provide a clue as to how the IgY receptor contributes to maternal IgY transfer. However, the characteristics of egg production and ovarian IgY transport ability in sexually mature, IgY-depleted chickens are unknown.
In the present study, we generated IgY-depleted chickens by surgical bursectomy at days 17 to 18 of incubation and characterized egg production performance and exogenously injected IgY uptake into egg yolks to gain insight into the mechanisms of maternal IgY transfer. We further assessed which genes were upregulated in oocyte nuclei of IgY-depleted chickens by DNA microarray to determine the candidate IgY receptor that contributes to maternal IgY transfer.
Chickens
Fertilized White Leghorn-type commercial chicken eggs were purchased from a local supplier (Julia Light; Japan Layer, Gifu, Japan). Animal care complied with the applicable guidelines of the Nagoya University Policy on Animal Care and Use (approval nos: 2015022605, 2016022605, and 2017030220).
Surgical Bursectomy
The fertilized eggs were incubated at 37 °C with a relative humidity of 60 to 70%, with turning once per h, until day 17 of incubation. On day 17 to 18 of incubation, surgical bursectomy was performed according to the method of Yasuda et al. (1998). Briefly, eggs were sterilized with tincture of iodine before drilling. A section of the eggshell (1 × 2 cm²) was cut out with a dental drill without damaging the shell membrane and underlying chorioallantoic membrane. The eggshell was removed, and 3 sides of the shell membrane square were cut so that the window could be closed after treatment. The chorioallantoic and amniotic membranes were cut to allow gripping the tail of the embryo with forceps. Bursectomy was carried out essentially according to the method described in Aitken and Penhale (1986). After treatment, the removed shell was placed in the same position and sealed with surgical tape, following which the eggs were returned to the incubator until hatching. The control eggs were incubated continuously without any operation.
Experimental Design
Experiment 1

After allowing 24 h for hatching, female chicks were moved to battery cages and housed there until 3 wk of age. The birds were provided with free access to water and a commercial starter diet, under continuous lighting. At 3 wk of age, the birds were housed in cages placed in a room under controlled temperature (25 ± 2 °C) and a 16 h:8 h light-dark photoperiod with lights on at 08:00. The diets were changed to grower pullet ration, finisher pullet ration, and layer ration at 3, 10, and 15 wk of age, respectively.
Body weight measurement and blood sample collection were performed periodically (2, 4, 6, 8, 12, 16, and 20 wk of age; n = 5). The blood samples were collected via the wing vein and centrifuged at 16,000 × g for 4 min at 4 °C. The supernatant was collected and stored as the serum sample for measurement of IgY and IgA concentrations.

Experiment 2

Seventeen bursectomized chickens were raised, and their blood IgY and IgA concentrations were measured at 8 wk of age for selection of IgY-depleted chickens (<10 µg/mL for IgY and <1 µg/mL for IgA). After 20 wk of age, hen-day egg production (%) of both the control and IgY-depleted groups (n = 5) was measured for 25 D. At 21 wk of age, blood and eggs were collected and their IgY concentrations were measured as described below. For the injection study, chicken IgY (Rockland, Limerick, PA) was labeled with digoxigenin (DIG) (Roche Diagnostics, Indianapolis, IN) according to the manufacturer's recommendations. At 25 wk of age and within several h of oviposition, each bird was injected intravenously with 100 mg of DIG-labeled IgY. Laid eggs were collected for 7 D after injection and stored at 4 °C until analysis. In general, the concentration of IgY in the egg yolk is essentially constant throughout the entire maturation of the oocyte, from the small (0.05 g) to the largest (20 g) oocyte (Kowalczyk et al., 1985). Importantly, IgY transfer into egg yolks is maximal at the F3 ovarian follicle (3-4 D before laying) because of its maximal growth at that hierarchical stage. Therefore, we decided to collect eggs for 2 to 7 D after the injection (equivalent to F1-F6 ovarian follicles at the time of injection). Yolk IgY extract was prepared as described in Takimoto et al. (2013), which is a modified version of the water dilution method of Akita and Nakai (1993).

Experiment 3

Control and IgY-depleted chickens (n = 3) were prepared as described in Experiment 2.
At 25 wk of age, the germinal disc region, including the oocyte nucleus, was collected from F5-F7 yellow ovarian follicles. The collected samples (9 germinal discs of 3 birds) were preserved in RNAlater (Thermo Fisher Scientific, Waltham, MA) for DNA microarray analysis.
Quantitation of Chicken IgY, IgA, and DIG-Labeled IgY by ELISA

Serum and yolk IgY and IgA concentrations were determined using chicken IgG and IgA ELISA Quantitation kits (Bethyl Laboratories, Montgomery, TX). The concentrations of DIG-labeled IgY in the egg yolk extracts were quantified using an original ELISA (Bae, et al., 2009).
DNA Microarray Analysis of Oocyte Germinal Discs (Experiment 3)
Total RNA was isolated from pooled F5-F7 germinal discs using an RNeasy Micro kit (Qiagen, Hilden, Germany). Total RNA from 3 birds per treatment was pooled for each chip. Whole transcripts from germinal discs were measured using a GeneChip Chicken Genome Array (Affymetrix). Raw data were normalized using the MAS5 algorithm in Affymetrix GeneChip Operating Software ver. 1.4. The microarray data have been deposited in the NCBI Gene Expression Omnibus (GSE134668).
Data Analysis
Mean values for body weight, egg production, and DIG-labeled chicken IgY uptake were compared by Student's t-test. All error bars represent the standard error of the mean, and differences between means were considered significant at P < 0.05. Statistical analyses were performed in the Excel Statistics package.
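The group comparison described above is a standard two-sample Student's t-test. A minimal pure-Python sketch of the pooled-variance statistic follows; the sample values are hypothetical, not measurements from this study:

```python
from statistics import mean, variance

def students_t(a, b):
    """Pooled-variance two-sample t statistic and its degrees of freedom.

    Assumes equal group variances, as in Student's (not Welch's) t-test;
    statistics.variance is the sample (n - 1) variance.
    """
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    t = (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5
    return t, na + nb - 2

# Hypothetical body-weight samples (g), n = 5 per group as in the study design.
control = [1510, 1480, 1530, 1495, 1520]
depleted = [1490, 1500, 1470, 1515, 1485]
t, df = students_t(control, depleted)
```

The resulting t would then be compared against the critical value of the t distribution with `df` degrees of freedom at the chosen significance level (P < 0.05 here).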
Experiment 1: Body Weight and Blood IgY and IgA Concentrations
Body weight did not differ significantly between the IgY-depleted and control groups during the 20 wk after hatching (Figure 1; P > 0.05). Blood IgY and IgA concentrations were both measured sequentially to evaluate the success of bursectomy. There was no difference in blood IgY concentrations between the control and the IgY-depleted group at 2 wk of age (Figure 2A), but differences were observed after 4 wk of age. The blood IgY concentration in the control group increased gradually until sexual maturation, peaking at 16 wk of age, followed by a decrease that was likely because of egg production. In contrast, the blood IgY concentration in the IgY-depleted group declined continuously until sexual maturation. A difference in the blood IgA concentration between the control and the IgY-depleted groups could already be observed at 2 wk of age, and the lowered blood IgA levels in the IgY-depleted group were maintained at all ages (Figure 2B). Newly hatched chicks normally retain maternally derived IgY until their acquired immunity is developed, whereas relatively little maternally derived IgA is transferred to the next generation. Measuring the blood IgA concentration soon after hatching therefore proved useful for evaluating the success of bursectomy.
Experiment 2: Egg Production and Blood and Egg Yolk IgY Concentrations
Egg production at 20 wk of age in both groups was high and did not differ between the 2 groups (Table 1). The blood IgY concentration in the IgY-depleted group (1.69 ± 1.7 µg/mL; undetectable in three birds) was 600-fold lower than that in the control group (959 ± 102 µg/mL) (Figure 3A). The egg yolk IgY concentration was likewise lower in the IgY-depleted group (3.78 ± 2.3 µg/mL; undetectable in one bird) than in the control group (712 ± 56 µg/mL), mirroring the blood IgY concentration (Figure 3B).
Uptake of Injected IgY Into the Egg Yolk
IgY uptake into egg yolks was undetectable 1 D after injection (data not shown) but peaked at 3 and 4 D after injection in both treatments (Figure 4). The uptake of injected IgY into egg yolks was significantly higher in the IgY-depleted group than in the control group at 2 to 7 D (2, 6, and 7 D at P < 0.01; 3, 4, and 5 D at P < 0.05). The total uptake of injected IgY into the egg yolk over 7 D was 1.8-fold higher in the IgY-depleted group than in the control (425 ng/g of yolk in the IgY-depleted group vs. 242 ng/g of yolk in the control; P < 0.05). These results raised the possibility that IgY-depleted chickens might enhance IgY uptake through upregulation of an as yet unidentified IgY receptor. Therefore, we performed microarray analysis to screen genes upregulated in maturing oocytes of IgY-depleted chickens to identify the putative IgY receptor responsible for maternal blood IgY transfer.
Experiment 3: Microarray Analysis for Screening of a Candidate IgY Receptor
The GeneChip Chicken Genome Array was used for high-throughput screening of genes upregulated in the oocytes of IgY-depleted chickens. The germinal discs of F5-F7 yellow follicles of 3 birds were pooled, and their total RNA was used for microarray analysis. The germinal disc samples used here included both germinal disc and adherent granulosa cells, because the two were not separated. In advance, we had confirmed by real-time PCR that our germinal disc samples expressed the oocyte nucleus marker genes DAZL and WEE2 (Elis et al., 2008) at levels 100-fold to 300-fold higher, and the granulosa cell marker gene ZPC at a level 6-fold lower, than granulosa cells alone (data not shown). These results suggest that the germinal disc samples analyzed here were rich in oocyte-derived mRNAs but also included mRNAs originating from granulosa cells.
The array comprises 37,703 probe sets representing 32,773 genes. Between the control and IgY-depleted groups, 278 genes from 331 probe sets were differentially expressed with a significant detection call at P < 0.002 (Table 2). A total of 73 genes from 92 probe sets were upregulated > 1.5-fold (data not shown), with 19 genes presenting > 2-fold upregulation in the IgY-depleted group (Table 3). Among the 73 genes, 3 coded for membrane receptors, namely, PROCR, ADRA2C, and IL1RL1. We also focused on the already known IgY receptors, PLA2R1 (FcRY) and IHSF1 (ggFcR), and on the very low-density lipoprotein receptor (VLDLR; LR8), a major endocytotic receptor expressed in the oocyte plasma membrane. No change was observed except for the PLA2R1 gene, which presented a 1.5-fold higher value in the IgY-depleted group compared with the control group.

Figure 4. Egg yolk uptake of intravenously injected IgY in control and IgY-depleted chickens at 22 wk of age. DIG-labeled IgY was injected at 100 mg/bird into the wing vein. Laid eggs were collected daily until day 7 after injection, and IgY uptake into the yolks was measured by ELISA. Bars indicate the mean ± SEM of 5 chickens. Significantly different from the control at *P < 0.05, **P < 0.01. IgY, immunoglobulin Y.
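The screening step described here reduces to simple ratio filtering of normalized expression values. A minimal sketch follows; the gene names match the text, but the expression numbers are invented purely for illustration and are not the array data:

```python
def fold_changes(depleted, control):
    """Fold change = expression in IgY-depleted birds relative to control."""
    return {g: depleted[g] / control[g] for g in depleted if g in control}

# Hypothetical normalized signal intensities (arbitrary units).
control = {"PROCR": 100.0, "ADRA2C": 80.0, "IL1RL1": 50.0,
           "PLA2R1": 200.0, "VLDLR": 5000.0}
depleted = {"PROCR": 250.0, "ADRA2C": 170.0, "IL1RL1": 110.0,
            "PLA2R1": 304.0, "VLDLR": 5100.0}

fc = fold_changes(depleted, control)
up_1_5 = {g for g, v in fc.items() if v > 1.5}  # > 1.5-fold upregulated
up_2_0 = {g for g, v in fc.items() if v > 2.0}  # > 2-fold upregulated
```

With these made-up numbers the three receptor genes pass the 2-fold cutoff, PLA2R1 only the 1.5-fold cutoff, and VLDLR neither, mirroring the pattern reported in the text.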
DISCUSSION
The current study reaffirmed that removal of the bursa of Fabricius depletes both IgY and IgA production (Kincade and Cooper, 1973). The changing patterns of blood IgY and IgA concentrations in the control and IgY-depleted groups are in good agreement with the characteristics of maternal antibody transfer in newly hatched chicks. Maternal IgA is not transferred into the blood circulation of the fetus or hatched chicks (Rose et al., 1974; Rose and Orlans, 1981). In addition, the plasma IgA concentration starts to increase within a week after hatching (Hamal et al., 2006). Thus, a difference in the blood IgA concentrations at 2 wk of age between the control and the IgY-depleted groups (Figure 2B) is quite convincing. Concerning blood IgY, there was no difference in blood IgY concentrations between the 2 groups at 2 wk of age, but the difference became wider after 4 wk of age (Figure 2A). These observations also agree with the report of Hamal et al. (2006): the blood IgY concentration of newly hatched chicks decreases up to 2 wk of age because of catabolism of maternal IgY in chicks, and then starts to increase by 3 wk of age because of the appearance of IgY synthesized by the chicks.
In this study, we successfully raised bursectomized IgY-depleted chickens producing markedly lowered IgY levels in egg yolks. We observed that blood and egg yolk IgY concentrations in the IgY-depleted chickens were 600-fold lower than those of the control chickens. Yasuda et al. (1998) successfully developed IgY-depleted hens producing eggs free of IgY, but the efficiency of egg production was not reported. The present study showed that under conventional conditions, IgY-depleted chickens grow normally and exhibit an egg-producing capacity equivalent to that of the control group, suggesting that IgY depletion does not directly impair the reproductive ability of laying birds.

(Table 3 note: genes upregulated > 2-fold between control and IgY-depleted chickens identified by DNA microarray analysis are listed, together with the already known oocyte endocytotic receptor gene and 2 IgY receptor genes. Fold change was calculated as the gene expression level in IgY-depleted chickens relative to that in control chickens. IgY, immunoglobulin Y.)
To characterize the IgY transport capacity of IgY-depleted chickens, we intravenously injected DIG-labeled IgY into the chickens and determined its uptake into the egg yolk. The quantity of DIG-labeled IgY taken up into egg yolks was approximately 2-fold higher in the IgY-depleted chickens than in the control. Although the precise mechanism of the enhanced uptake of injected IgY into the egg yolks of the IgY-depleted chickens is not known, 2 possibilities can be proposed: the reduced levels of endogenous blood IgY may increase the expression of the IgY receptor in oocytes, or the injected IgY may outcompete the reduced endogenous blood IgY during the transfer process into oocytes. To explore the former possibility, we performed a microarray analysis to determine which genes were upregulated in the oocytes of IgY-depleted chickens.
Among the 73 genes upregulated > 1.5-fold, 3 coded for membrane receptors. PROCR, a cellular receptor for protein C expressed mainly in endothelial cells, is a type 1 transmembrane protein that exhibits sequence and 3-dimensional structural homology with the major histocompatibility class 1/CD1 family (Oganesyan, et al., 2002). PROCR is primarily responsible for protein C activation and plays a crucial role in the protein C anticoagulant pathway (Fukudome and Esmon, 1994). Additionally, PROCR also binds other ligands such as factor X, factor VIIa, and the γδ T-cell antigen receptor (Mohan Rao, et al., 2014), but no information is available on PROCR binding to immunoglobulins in any animal species. The alpha-2C adrenergic receptor, ADRA2C, is predominantly expressed in the central nervous system and adrenals, and the ADRA2C protein binds small-molecule ligands such as epinephrine (MW 183) and norepinephrine (MW 169). Meanwhile, IL1RL1 (interleukin 1 receptor like 1) is an IL33 receptor belonging to the IL1 cytokine family and is expressed as both a membrane-anchored receptor (called ST2L) activated by IL33 and a soluble variant (called sST2) exhibiting antiinflammatory properties (De la Fuente, et al., 2015). ST2 was considered an orphan receptor for many years, until its association with IL33 was demonstrated (Schmitz, et al., 2005). Together, these findings indicate that PROCR, ADRA2C, and IL1RL1 are unlikely to be the receptor participating in maternal IgY transfer in maturing oocytes.
We also focused on the gene expression of a critical multiligand receptor for yolk precursors, the VLDLR, as well as that of the already known IgY receptors IHSF1 and PLA2R1. Chicken VLDLR (LR8) is a single 95 kDa protein expressed in the oocyte plasma membrane that mainly binds yolk lipoproteins, very low-density lipoproteins, and vitellogenin (Bujo, et al., 1994). The IHSF1 receptor, called ggFcR in the chicken, is an IgY receptor with 4 extracellular Ig domains and a transmembrane region, expressed predominantly in blood cells, that binds the Fc region of IgY (Schreiner, et al., 2012). However, the expression of these 2 receptors in germinal discs did not differ between the control and IgY-depleted groups (Table 3). Finally, PLA2R1, called FcRY in the chicken, is the chicken counterpart of the mammalian muscle-type phospholipase A2 receptor. FcRY is a 180 kDa protein with a single transmembrane region responsible for the transfer of yolk IgY to the embryonic circulation in developing eggs (West et al., 2004). In our previous study, FcRY gene expression was detected in the theca layer of the ovarian follicle, although only at low levels in the granulosa cell layer, including the germinal discs (Kitaguchi, et al., 2010). Indeed, the present microarray data showed that FcRY gene expression was relatively low compared with that of the major oocyte receptor, VLDLR, although FcRY expression was slightly induced (1.52-fold) by IgY depletion (Table 3). In the present study, therefore, we could not identify a convincing candidate receptor from the microarray analysis. FcRY gene expression levels in the theca layer of ovarian follicles in IgY-depleted chickens, as well as the physiological function of FcRY in the thecal layer of the chicken ovary, should be investigated in the future.
As mentioned above, the mechanism responsible for the enhanced uptake of injected IgY into the egg yolks of IgY-depleted chickens is unknown. The second possibility is that the reduced levels of endogenous blood IgY are outcompeted by the injected IgY during the transfer process, resulting in enhanced uptake of injected IgY into the egg yolk. In our previous study, we determined the blood IgY concentration and exogenously injected IgY Fc uptake into egg yolks in 6 quail strains. Interestingly, IgY Fc uptake was inversely correlated with endogenous blood IgY levels when all the strain data were pooled (Murai, et al., 2016). The most plausible explanation is that the injected IgY Fc competed with endogenous blood IgY during the transfer process, resulting in reduced IgY Fc uptake into eggs. The enhanced uptake of injected IgY into the egg yolks of IgY-depleted chickens provides indirect evidence of the existence of a specific, likely receptor-mediated, IgY transfer system in avian oocytes.
In conclusion, using IgY-depleted chickens with a normal egg production capacity, we found that IgY depletion increases intravenously injected IgY uptake into the yolks of maturing oocytes. Although we were unable to identify a putative candidate gene for the IgY receptor by microarray analysis, we found that the expression levels of 73 genes were increased > 1.5-fold in response to IgY depletion in the germinal discs of oocytes. Based on the present and recent results, we again propose that an IgY receptor involved in maternal IgY transfer exists in the ovaries of avian species.
ACKNOWLEDGMENTS
This work was supported by a Grant-in-Aid (No. 16H05020 to A.M.) from the Japan Society for the Promotion of Science. The authors are thankful to Dr. Tsudzuki, M., Dr. Horiuchi, H., and Dr. Furusawa, S. at Hiroshima University for the technical assistance with bursectomy. They are thankful to the National Bio-Resource Project-Chicken and Quail (http://www.agr.nagoya-u.ac.jp/wNBRP/) for providing the WL-G chickens for the preliminary experiment.
The Effect of Acute Erythromycin Exposure on the Swimming Ability of Zebrafish (Danio rerio) and Medaka (Oryzias latipes)
Erythromycin is a widely used antibiotic, and erythromycin contamination may pose a threat to aquatic organisms. However, little is known about the adverse effects of erythromycin on swimming ability. To quantify erythromycin-induced damage to fish swimming ability, Oryzias latipes and Danio rerio were acutely exposed to erythromycin. The swimming ability of the experimental fish was measured after exposure to varying doses of erythromycin (2 µg/L, 20 µg/L, 200 µg/L, and 2 mg/L) for 96 h. Burst speed (Uburst) and critical swimming speed (Ucrit) of experimental fish significantly decreased. In addition, gene expression analysis of O. latipes and D. rerio under erythromycin treatment (2 mg/L) showed that the expression of genes related to energy metabolism in the muscle was significantly reduced in both species of fish. However, the gene expression pattern in the head of the two species was differentially impacted; D. rerio showed endocrine disruption, while phototransduction was impacted in O. latipes. The results of our study may be used as a reference to control erythromycin pollution in natural rivers.
Introduction
Antibiotics are frequently used worldwide for sterilization and to maintain health [1,2]. It has been reported that 80% of antibiotics enter the aquatic environment in their original form [3,4]. Moreover, studies have shown that antibiotics are widely detected in aquatic systems [5]. Antibiotics are considered pollutants with sustained adverse effects on the ecological environment [6]. Adverse effects on fish have been widely reported; for example, antibiotics have been shown to delay the hatching of fish eggs [7], damage the gills and liver [8], and destroy the antioxidant defenses in muscle, which affects fish metabolism and injures neurons [1,9]. Antibiotics are usually detected in rivers and wastewater at the µg/L level [3]. However, previous studies have found concentrations of antibiotics at mg/L levels in swine wastewater [10][11][12], and erythromycin is commonly used in livestock farming [13].
Erythromycin, a semisynthetic antibiotic bacteriostatic, has been a widely used antibiotic since the 1950s [14]. Sewage treatment systems do not efficiently dispose of organic pollutants; thus, erythromycin is ubiquitous in the aquatic environment [15]. It takes a long time for erythromycin to degrade naturally [16]; in addition, previous studies have demonstrated that erythromycin has adverse effects on fish [16][17][18]. However, these studies focused on the physiological impacts on fish, rather than the negative effects on behavior [19].
The swimming behavior of fish is closely related to all of their life activities. Burst speed (Uburst) and critical swimming speed (Ucrit) are important aspects of swimming ability because they play significant roles in the life activities of fish. Uburst is vitally important for activities such as eating, avoiding predators, and competitive interaction [20,21], while Ucrit may be critical for seasonal behaviors associated with migration and reproduction. In addition, Ucrit and Uburst represent the aerobic and anaerobic capacity of fish, respectively. Red muscle contains hemoglobin, myoglobin, and mitochondria, and is connected to the vascular system; this muscle is thought to have a metabolic function associated with aerobic exercise. White muscle provides a strong but limited burst of movement [22]. The movement of fish depends directly on the energy expenditure of muscles. In addition, the nervous system in the brain is stimulated by environmental pollutants, which can also lead to abnormal behavioral patterns and swimming activity [23]. Swimming patterns may be affected by gene expression, which is in turn regulated by the stress response of fish to erythromycin.
In this experiment, Oryzias latipes and Danio rerio were exposed to different concentrations of erythromycin (2 µg/L, 20 µg/L, 200 µg/L, and 2 mg/L) for 96 h, and the U crit and U burst of each treatment group were compared. Based on the swimming ability results, the gene expressions of O. latipes and D. rerio treated with erythromycin (2 mg/L) were analyzed. According to the results of gene expression, the expression of selected genes in each treatment group was verified by qRT-PCR. This study provides a factual basis for studying the effects of erythromycin on fish gene expression and swimming ability.
Experimental Fish
D. rerio and O. latipes are classic model species. Four-month-old D. rerio and 6-month-old Singaporean O. latipes were obtained from Shanghai Feixi Biotechnology Co., Ltd. (Shanghai, China). The average weight (± SD) and average length (± SD) of the D. rerio used in this study were 0.56 ± 0.08 g and 2.83 ± 0.10 cm, respectively. O. latipes had an average weight of 0.28 ± 0.03 g and an average fork length of 3.23 ± 0.05 cm. After the experimental fish were transported to the laboratory, they were put into a plexiglass tank (Figure 1) (25 cm long, 25 cm wide, 35 cm deep) filled with tap water aerated in advance. In addition, these fish were kept temporarily in the tank to eliminate the stress of the transport process. During this period, the fish were kept under a 14 L:10 D photoperiod, and they were hand-fed a commercial diet (Chengdu, China) containing > 40% protein and > 7% lipids. Moreover, the water temperature, dissolved oxygen and pH were maintained at 26.5 ± 2 °C, 99.2 ± 0.3% and 7.7 ± 0.2, respectively. Approximately one-third of the water in the tank was replaced twice per day with pre-aerated tap water.
Experimental Design
D. rerio and O. latipes were exposed to different concentrations of erythromycin (Sigma Aldrich, CAS: 114-07-8) at 0 µg/L, 2 µg/L, 20 µg/L, 200 µg/L, or 2 mg/L. The concentration gradient was chosen based on two considerations: the minimum concentration was based on the level at which antibiotics are usually detected, and the highest concentration was one that may cause obvious changes in swimming ability based on the results of preliminary experiments. All fish were fed twice per day, as during the acclimation period. The erythromycin solution in the water tank was replaced once per day to prevent changes in concentration due to photolysis and evaporation.
After 96 h of exposure, experimental fish from each treatment group were randomly selected to measure Ucrit or Uburst. Moreover, mRNA was extracted from the muscle and head of the experimental fish (control group and 2 mg/L treatment group). Each test concentration and the control were performed in triplicate.
Measurement of U crit and U burst
The equipment used to measure fish swimming ability was a medium-sized swimming tank (SW10150) produced by Loligo Systems (Denmark). The volume of the sealed part of the tank was 30 L. The test area specifications were 55 cm × 14 cm × 14 cm, and the flow rate of the test area ranged from 5 to 175 cm/s. The flow in the sealed tank was generated by the rotation of a motor and could be changed by adjusting the drive, while the regulator was connected via the inverter and the cellular stabilizer on the left side of the test area to produce a uniform and constant flow field. In addition, a YSI Ecosense DO200A dissolved oxygen meter, digital flow speedometer (AC10000) and 30 mm vane wheel flow probe (AC10002) were used.
Before the start of the test, the dissolved oxygen level and temperature in the device were checked, and the motor was turned gradually to remove bubbles from the test device. Then, a D. rerio was placed in the test area of the swimming device, and the device was sealed. The flow rate was adjusted to 10 cm/s and then increased by 25 cm/s at 20 min intervals. For O. latipes, the flow rate was increased to 15 cm/s over 20 min and then increased by 10 cm/s every 20 min. The test was ended when the experimental fish fatigued onto the rear net. The following formula was used to calculate Ucrit: Ucrit = v2 + (t2/∆t2) × ∆v2, where ∆t2 (min) is the duration of each flow rate step, t2 (min) is the time the test fish swam at the final flow rate, ∆v2 (cm/s) is the velocity increment, and v2 (cm/s) is the highest flow rate that the test fish fully completed.
At the end of the experiment, the flow rate was decreased, the fatigued test fish were removed, and their body length, weight and conventional morphological parameters were measured. During the test, if the dissolved oxygen concentration in the tank was less than 7 mg/L, the water in the sealed tank was exchanged with a water pump. Uburst was also calculated according to the incremental flow rate method. A fish was placed in the test segment before the test and adapted to a low flow rate (10 cm/s) for 20 min to eliminate the stress of the transfer process. After the test started, the flow rate in the test segment was increased gradually by 1 cm/s. When the test fish were fatigued and could not continue to swim, the test was stopped. The critical swimming ability and burst swimming ability tests were each repeated three times.
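The incremental flow rate method above follows Brett's critical swimming speed formula. A short sketch of the calculation follows; the example numbers are purely illustrative, not measured values from the study:

```python
def u_crit(v_last_completed, dt_increment, t_at_failure, dv_increment):
    """Brett's critical swimming speed:
    U_crit = v2 + (t2 / dt2) * dv2, where v2 is the highest flow rate
    the fish fully completed (cm/s), dt2 the duration of each velocity
    step (min), t2 the time swum at the failure velocity (min), and
    dv2 the velocity increment (cm/s)."""
    return v_last_completed + (t_at_failure / dt_increment) * dv_increment

# e.g., a D. rerio that fatigues 8 min into a 20-min step after fully
# completing the 60 cm/s step, with 25 cm/s increments:
speed = u_crit(60.0, 20.0, 8.0, 25.0)  # -> 70.0 cm/s
```

Ucrit is often further normalized by body length (BL/s) to compare fish of different sizes, though the study reports it in cm/s.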
RNA Sequencing
Experimental fish from the control group and the 2 mg/L treatment group were randomly selected and placed in MS222 at a concentration of 0.2 mg/L until anesthetized. The tail muscle tissue and head tissue were collected, processed and immediately placed in liquid nitrogen. The processed samples were then sent to Nanjing Personal Gene Technology Company for RNA determination and sequencing.
Total RNA was isolated with an RNeasy Mini Kit (Qiagen, Germantown, MD, USA). Then, DNase I (Qiagen) was added to digest contaminating genomic DNA. One microgram of intact RNA per sample was prepared for RNA-Seq library construction and sequencing. The mRNA library was built with the TruSeq RNA Sample Preparation Kit (Illumina, San Diego, CA, USA) following the manufacturer's instructions. The prepared mRNA samples were then clustered on an Illumina HiSeq 2500 for sequencing. The sequences from each treatment group were evaluated after 100 cycles. The RNA-Seq reads were assessed for quality control with FastQC (version 0.10.1; Babraham Bioinformatics, Cambridge, UK). All reads were assembled with Bridger (r2014-12-01) (--pair_gap_length 50 --min_kmer_coverage 4 --min_ratio_non_error 0.15). Transcript abundances in fragments per kilobase of exon per million mapped fragments (FPKM) were calculated with the Trinity differential expression scripts (false discovery rate (FDR) ≤ 0.05) using the blind dispersion method, and Cuffdiff analysis yielded lists of upregulated and downregulated genes. Fisher's exact test with FDR correction (FDR ≤ 0.05) was used to analyze gene functions and pathways by Gene Ontology (GO) functions and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways, respectively. The ratio of differentially expressed genes (DEGs) to the total number of genes in the associated pathway was considered the enrichment factor.
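The enrichment factor defined in the last sentence is a simple ratio. A minimal sketch follows; the pathway sizes are hypothetical, not values from this analysis:

```python
def enrichment_factor(deg_in_pathway, total_in_pathway):
    """Enrichment factor = DEGs annotated to a pathway / all genes
    annotated to that pathway (as defined in the Methods)."""
    if total_in_pathway == 0:
        raise ValueError("pathway has no annotated genes")
    return deg_in_pathway / total_in_pathway

# Hypothetical KEGG pathway with 120 annotated genes, 18 of them DEGs:
ef = enrichment_factor(18, 120)  # -> 0.15
```

On its own the ratio does not establish significance; that is why the Methods pair it with Fisher's exact test under FDR correction.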
Quantitative Real-Time PCR (qRT-PCR) Validation
qRT-PCR was used to verify the results of RNA-Seq. Based on their position, function and expression level in the genome, eight differentially expressed transcripts were selected (6 upregulated and 2 downregulated). mRNA transcripts were quantified using a RealPlex4S qRT-PCR system (Eppendorf, Germany). RNA samples were extracted using an miRNeasy Mini Kit (50) (Qiagen). The final volume of the RT-PCR reaction was 25 µL. The thermocycler settings were as follows: 95 °C for 2 min; then 40 cycles of 95 °C for 10 s, 68 °C for 30 s, and 68 °C for 5 min. The relative expression level was calculated using the 2^−ΔΔCt method with β-actin as the reference gene. Three independent samples were analyzed in triplicate.
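The 2^−ΔΔCt calculation used above can be sketched in a few lines; the Ct values in the example are hypothetical, not measurements from this study:

```python
def ddct_relative_expression(ct_target_treat, ct_ref_treat,
                             ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ddCt method.

    dCt  = Ct(target) - Ct(reference gene, e.g. beta-actin)
    ddCt = dCt(treated) - dCt(control)
    relative expression = 2 ** (-ddCt)
    """
    ddct = (ct_target_treat - ct_ref_treat) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** (-ddct)

# Hypothetical Ct values: the target amplifies 2 cycles earlier in the
# treated group after normalization, i.e. ~4-fold upregulation.
rel = ddct_relative_expression(22.0, 16.0, 24.0, 16.0)  # -> 4.0
```

The method assumes near-100% amplification efficiency for both target and reference assays; deviations from that assumption bias the fold-change estimate.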
Statistical Analyses
The U crit and U burst of the experimental fish were analyzed with Origin 8.0 software (OriginLab Corporation, Northampton, MA, USA) and SPSS Statistics 20 (SPSS Inc., Chicago, IL, USA). Significant differences between the treatment groups were determined with one-way analysis of variance (ANOVA), with the significance level set at p < 0.05. Transcriptomic data were analyzed and visualized with R software (R Core Team, 2014).
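The one-way ANOVA applied here was run in SPSS; for reference, the F statistic it computes can be sketched in pure Python (the significance decision would additionally require the F distribution, which is omitted, and the example data are illustrative):

```python
def one_way_anova_f(*groups):
    """F statistic for one-way ANOVA: between-group mean square
    divided by within-group mean square."""
    k = len(groups)                              # number of treatment groups
    n = sum(len(g) for g in groups)              # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2
                    for g, m in zip(groups, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```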
Ethics Statement
The animal study proposal was approved by the Ethics Committee for Animal Experiments of Sichuan University (ethics code is 2019062101). All experimental procedures were performed in accordance with the Regulations for the Administration of Affairs Concerning Experimental Animals approved by the State Council of the People's Republic of China.
Swimming Performance and Muscle Fibers
[Figure caption: significant differences between the control groups and erythromycin-exposed groups; * represents p < 0.05.]
No difference in external appearance was observed between the exposed fish and the control group, but the size of the muscle fibers changed significantly, as shown in Figure 3. The muscle fibers of D. rerio and O. latipes became thinner and weaker. Additionally, the pores of the muscle fiber bundles became larger following erythromycin-induced stress.
mRNA Expression Levels of Genes
Deep sequencing data were analyzed for each obtained sample from the treatment groups. We considered read quality scores above Q30 (correct base recognition rate greater than 99.9%) to indicate clean reads, and the muscle gene expression results of the experimental fish showed that more than 94.2% of the D. rerio reads were clean, while more than 93.4% of the O. latipes reads were clean. The proportion of mapped reads in D. rerio sequences was higher than 97.86%, and was higher than 94.09% in O. latipes. With erythromycin treatment, there were 503 upregulated genes and 541 downregulated genes in O. latipes muscle, while 463 upregulated and 319 downregulated genes were found in the muscle of D. rerio, as shown in Figure 3.
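The Q30 threshold used above comes from the Phred quality scale, Q = −10·log10(p_error), so Q30 corresponds to a base-call error probability of 0.001, i.e., a correct-call rate above 99.9%. A minimal sketch (function names are ours):

```python
from math import log10

def phred_score(p_error):
    """Phred quality score: Q = -10 * log10(p_error)."""
    return -10 * log10(p_error)

def base_call_accuracy(q):
    """Probability that a base call is correct at quality Q."""
    return 1 - 10 ** (-q / 10)
```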
To gain a better understanding of the effects of erythromycin on genes in D. rerio muscles, we further analyzed the KEGG pathways. Based on the KEGG enrichment analysis of the expressed genes, the top 20 pathways with the smallest p-values, indicating the most significant enrichment, were selected for presentation, as shown in Figure 4. The most significant KEGG pathway in D. rerio and O. latipes was oxidative phosphorylation. Specifically, D. rerio had 47 downregulated genes (cox6a2, ndufs6, cox5b2 etc.), and O. latipes had 53 downregulated genes (ndufs4, ndufa12, ndufb3 etc.).
The gene expression profiles of the heads of the experimental fish showed that D. rerio had more than 93.13% clean reads and O. latipes had more than 92.7%. The proportion of mapped reads in the D. rerio sequences was higher than 95.1%, and the proportion of mapped reads in O. latipes was more than 94.09%. Genes in the fish head were impacted by erythromycin treatment, and there were 342 upregulated genes and 1106 downregulated genes in the head of O. latipes. For D. rerio, 461 genes were upregulated, and 551 genes were downregulated, as shown in Figure 5.
To obtain insight into the alteration in gene expression in the head of D. rerio induced by erythromycin, KEGG pathways were analyzed. We selected the top 20 pathways with the smallest p-values and the most significant enrichment for presentation based on KEGG pathway enrichment analysis, as presented in Figure 6. For O. latipes, the KEGG analysis results showed that the two most significant pathways in organismal systems were the adipocytokine signaling pathway (npy and ppara) and the PPAR signaling pathway (lpl). In addition, for D. rerio, the most significant pathway in the organismal systems category was phototransduction (guca1a, grk7b and grk1a).
Validation of RNA-Seq DEG Expression Profiles in Danio rerio and Oryzias latipes by qRT-PCR
Fifteen DEGs in the RNA-Seq results were selected for expression pattern verification by qRT-PCR using cDNA from the remaining RNA samples from the different erythromycin treatment groups. These 15 genes were identified in the D. rerio head (4 DEGs) and muscle (4 DEGs) or in the O. latipes head (3 DEGs) and muscle (4 DEGs). The 15 genes reported in the RNA-Seq data (2 upregulated and 13 downregulated) are shown in Figure 7. All of these genes were significantly changed in the high-concentration exposure group compared with the control group because the adverse effects were amplified. However, at low concentrations, these genes had unstable expression patterns. Nevertheless, all of these genes were significantly changed in the same direction as in the other groups, which verified the results of RNA-Seq (Figure 8).
Swimming Performance
Antibiotics had a greater negative effect on the U crit of D. rerio than on that of O. latipes. The U crit of both species was significantly impaired at high exposure concentrations. Other environmental pollutants have been shown to reduce the swimming ability of other species of fish. For example, the U crit of juvenile Florida pompano was significantly reduced from 90.10 ± 1.35 cm/s to 84.20 ± 1.36 cm/s under the toxic influence of methanol [24]. Another study found that U crit of Erimyzon sucetta was reduced by approximately 50% when they were exposed to ash [25]. Prolonged swimming activities (such as U crit ) may be sensitive to changes in maximum aerobic capacity, cardiac output, muscle fiber function, and anaerobic metabolism. The results showed that the aerobic capacity of the fish was impaired. There are several possible reasons for this impairment. Fish are poisoned by environmental pollutants and are forced to detoxify a large amount of oxygen [26,27], which reduces the oxygen supplied for exercise [27]. In addition, water pollution can change the shape of gill tissue, resulting in an impaired tissue oxygen supply during exercise [28,29]. Moreover, sublethal exposure to contaminating elements can also lead to increased hemoglobin and plasma protein concentrations, resulting in increased blood concentrations and local tissue hypoxia [30].
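The U crit values discussed here are conventionally derived with Brett's incremental-velocity formula; the exact step protocol is not given in this excerpt, so the parameters below are illustrative assumptions, not the authors' settings:

```python
def ucrit_brett(u_last_completed, t_final, t_step, u_increment):
    """Brett's critical swimming speed:
    U_crit = U_f + (T_f / T_i) * U_i, where U_f is the highest fully
    completed velocity step, T_f the time swum in the final (failed)
    step, T_i the prescribed step duration, and U_i the velocity
    increment between steps. Units follow the inputs (e.g., cm/s, min)."""
    return u_last_completed + (t_final / t_step) * u_increment
```

For example, a fish that completed the 40 cm/s step and then swam 5 min of a 10 min step before fatiguing, with 4 cm/s increments, would score U_crit = 42 cm/s.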
The U burst of D. rerio and O. latipes showed decreases of 38% and 31%, respectively, under treatment with 2 mg/L erythromycin. Explosive activity occurs rapidly in a short period of time, and burst swimming performance is mainly affected by anaerobic metabolism; thus, some functions related to anaerobic metabolism in both species may be hindered. U burst reflects the ability of a fish to perform short-term anaerobic movements while foraging or avoiding danger [31]. This result reveals that the ability of experimental fish to hunt and evade predators was weakened, which may reduce the survival rate of experimental fish in the long run. Previous studies have also found that toxic contamination can adversely affect the U burst of fish. For instance, the U burst of ash-exposed E. sucetta was decreased by 30%-104% compared to that of control fish [25][26][27].
The swimming performance of D. rerio and O. latipes was weakened by 2 mg/L erythromycin. Our results indicate that weakening of the muscle fibers of the experimental fish is a possible reason for the decline in swimming ability. Previous studies have shown that toxin exposure can lead to bodily dysfunction and altered muscle structure [32]. Under contaminant treatment, the arrangement and integrity of zebrafish somatic muscle fibers are disturbed, and such muscle abnormalities directly affect exercise capacity. Specifically, the impairment of myogenesis and the destruction of myofilament tissue caused by contaminants impede the ability of muscles to contract and thus significantly reduce the swimming performance of zebrafish [33]. Another possible reason is that aerobic and anaerobic metabolism are regulated by different physiological mechanisms, and the results of the swimming ability assay showed that erythromycin had adverse effects on both. The effects of other antibiotics on fish mobility also support these results: studies have shown that high concentrations of antibiotics, even over short periods, can induce behavioral disturbances in fish [34]. Both aerobic and anaerobic metabolic capacities were inhibited by erythromycin exposure, which suggests that erythromycin affects the energy metabolism of fish.
Analysis of mRNA in Muscle
Swimming behavior is closely related to muscle contraction. A decline in swimming ability reflects a decline in muscle function. Moreover, aerobic metabolism involves the transport of oxygen and carbohydrates through respiration and circulation. It reflects the metabolic processes throughout the organism, from skin to muscle tissue, which may affect the absorption and transport of oxygen. Additionally, aerobic activity requires a continuous supply of adenosine triphosphate (ATP) from fish in different organs and muscles [34,35]. Anaerobic movement is a temporary explosion with a limited range of rapid metabolism [36]. It allows carbohydrates and oxygen to enter the muscle, which consumes glycogen and phosphocreatine during explosive movement. The power of anaerobic activity depends on ATP in the absence of oxygen [37,38]. Both types of exercise depend on the ability of muscles to produce and release ATP. The mRNA expression results of the muscles from both species confirm their behavior and further explain the decline in swimming ability.
The markedly enriched KEGG pathway shown in Figure 4 was oxidative phosphorylation. Both species had downregulated genes in this pathway, which suggests that the energy metabolism of D. rerio and O. latipes was inhibited by erythromycin. Oxidative phosphorylation is a metabolic pathway in cells. This process occurs in the mitochondrial inner membrane of eukaryotes or in the cell membrane of prokaryotes and uses enzymes and the energy released by the oxidation of various nutrients to synthesize ATP. ATP is the primary molecule that directly provides energy to anabolic processes. For most aerobic organisms, the tricarboxylic acid cycle coupled with oxidative phosphorylation is the main process that produces ATP. The first four significantly altered genes in this pathway in D. rerio and O. latipes were validated. These genes encode NADH dehydrogenase, cytochrome c oxidases, and F-type ATPase (eukaryotes). NADH dehydrogenase is used in the electron transport chain to generate ATP. It is an NADH:acceptor oxidoreductase that catalyzes the following reaction: NADH + H+ + acceptor ⇌ NAD+ + reduced acceptor [39]. Cytochrome c oxidases have beneficial effects on exercise, such as increasing oxygen levels in vascular tissues; when these enzymes cannot reduce oxygen, oxygen accumulates and diffuses into the surrounding tissues [40]. The downregulation of these genes implies the loss of these beneficial effects. Another study suggested that the suppression of cytochrome c oxidases decreases the rate of cellular respiration [41]. F-ATPase, also known as F-type ATPase, is involved in many basic cellular metabolic activities (such as acidosis, alkalosis and respiratory gas exchange). The gene expression results provide an explanation for the decrease in the swimming ability of D. rerio and O. latipes under erythromycin stress. Antibiotics adversely affected the energy metabolism pathways of these two fish.
Specifically, ATP synthesis and ATP release were inhibited, which inevitably damaged their swimming ability.
Analysis of mRNA in Head
Muscle energy supply is directly related to swimming behavior, and the central nervous system can also indirectly impair fish swimming ability. Abnormal functions related to the central nervous system may cause sensory organ dysfunction and movement disorders. They can also lead to hormone disorders that block energy metabolism. The sensitivity of fish to stress induces changes in behavior, and the fields of behavioral ecology and toxicology provide a clear explanation of this connection. Biochemical disorders, such as changes in neurotransmitters and thyroid hormones, affect the behavior of fish [42]. Phototransduction in D. rerio was suppressed under the pressure of erythromycin. Phototransduction is the conversion of the distribution and wavelength of photons into patterns of neuronal activity, which then induce motor and endocrine responses. In D. rerio and in mice with mutations in a light-sensing gene, researchers found deficits in both movement and motor coordination [43,44]. The optokinetic response (OKR) requires the retina as a photosensitive organ and employs motor thrust, as does swimming [43,44]. Mutant D. rerio have slow eye movements and impaired motor ability [45]. Beyond these extreme cases, a more common function of phototransduction is to regulate circadian rhythm through visual pigments [46]. Impaired light transduction can disrupt the body clock, which reverses day and night and affects metabolism. The mRNA expression results from the head of O. latipes revealed neuroendocrine disruption under the pressure of erythromycin. Biological neuroendocrine systems regulate food intake, metabolism, and energy distribution to ensure a steady supply of energy [46,47]. Metabolism and aerobic exercise are intertwined because of a common connection: both depend on the intake of food, which is the source of chemical energy for these processes. In the short term, there is a match between energy intake and expenditure.
In the long term, energy intake and expenditure are carefully balanced and regulated by several endocrine systems that work together to ensure energy homeostasis [48,49]. The hypothalamus is crucial for monitoring energy balance [50]. One theory holds that the hypothalamus regulates energy balance via signals from fat stores; another holds that it does so via signals from carbohydrate stores [51].
The most affected biological system in D. rerio was phototransduction, a sensory transduction pathway of the visual system. Through this process, light is converted into electrical signals in the rods, cones, and photosensitive ganglion cells of the retina. In the dark, the main function of the downregulated genes is to control the transmission of Ca2+ ions, thereby inhibiting neurotransmitter release. In light, the downregulated genes primarily affect the function of retinal rhodopsin. The inactivation of rhodopsin may cause the loss of dark adaptation. These downregulated genes indicate that the vision of D. rerio was impaired in both light and darkness, and that visual adaptability was damaged when light and dark alternate. The visual sensitivity of fish to light and dark requires much energy [52], and phototransduction is very important for fish to hunt and avoid danger. D. rerio had weakened visual ability under the pressure of erythromycin, which means that its chances of survival were reduced. In addition to erythromycin, other environmental pollutants may also suppress the visual function of fish. For example, tributyltin has been reported to block this pathway in fish [53].
For O. latipes, the upregulated genes related to a biological system were Neuropeptide Y (NPY) and PPAR-α. NPY is a 36-amino-acid neuropeptide that participates in various physiological processes and in the homeostasis of the central and peripheral nervous systems [54][55][56][57]. In the head, NPY is produced in different parts of the hypothalamus. NPY is thought to have multiple functions, including changing the storage of fat energy and reducing anxiety and stress [57,58]. NPY regulates the neuroendocrine release of various hypothalamic hormones, such as luteinizing hormone [59]. In particular, NPY is considered an endogenous anxiolytic peptide. The level of NPY can be regulated by stress, and it is considered necessary for stress regulation [60]. Higher levels of NPY may be self-regulating to alleviate fear responses [61]. The upregulation of NPY in O. latipes is thus a sign of fear and anxiety. In addition, there may be disorders in fat metabolism. PPAR is an important transcription factor, and PPAR-α regulates many genes involved in various aspects of lipid metabolism. The upregulation of this gene may activate functions related to lipid metabolism. However, the downregulation of lpl demonstrates that fatty acid transport, a component of lipid metabolism, was inhibited [62]. This means that erythromycin can cause abnormalities in lipoprotein metabolism.
Conclusions
The swimming ability of Oryzias latipes and Danio rerio was measured after exposure to varying doses of erythromycin (2 µg/L, 20 µg/L, 200 µg/L, and 2 mg/L) for 96 h. The U burst and U crit of the experimental fish did not change significantly at the low concentrations (2 µg/L, 20 µg/L, and 200 µg/L). The swimming ability of both O. latipes and D. rerio was reduced by exposure to the high concentration of erythromycin: U crit decreased to 53% and 71%, while U burst decreased by 39% and 23%, respectively. This finding indicates that the aerobic and anaerobic capacities of these fish were reduced. mRNA expression analysis of the muscle confirmed that ATP production- and ATP release-related functions were inhibited. Erythromycin had different effects on gene expression in the heads of the two species, but the results from both species provide indirect evidence of behavioral changes. Phototransduction in D. rerio was inhibited, which may lead to abnormal behavior or body clock disorders. The hormone imbalance in O. latipes may lead to energy metabolism disorders and abnormalities in its fat metabolism.
"year": 2020,
"sha1": "c973ebc148f5026a801afd4172eea92da59f8c6a",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/17/10/3389/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cca63f163a74f1d859314835e804472bd894c66e",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Single-Cell RNA Sequencing of Tocilizumab-Treated Peripheral Blood Mononuclear Cells as an in vitro Model of Inflammation
COVID-19 has posed a significant threat to global health. Early data has revealed that IL-6, a key regulatory cytokine, plays an important role in the cytokine storm of COVID-19. Multiple trials are therefore looking at the effects of Tocilizumab, an IL-6 receptor antibody that inhibits IL-6 activity, on treatment of COVID-19, with promising findings. As part of a clinical trial looking at the effects of Tocilizumab treatment on kidney transplant recipients with subclinical rejection, we performed single-cell RNA sequencing comparing stimulated PBMCs before and after Tocilizumab treatment. We leveraged these data to create an in vitro cytokine storm model, to better understand the effects of Tocilizumab in the presence of inflammation. Tocilizumab-treated cells had reduced expression of inflammation-mediated genes and biologic pathways, particularly amongst monocytes. These results support the hypothesis that Tocilizumab may hinder the cytokine storm of COVID-19, through a demonstration of biologic impact at the single-cell level.
INTRODUCTION
Coronavirus disease 2019 (COVID-19), caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), has posed a significant threat to global health since emerging at the end of 2019. Although the spectrum of symptomatic infection ranges significantly, and most infections are not severe (Chan et al., 2020;Huang et al., 2020;Wang et al., 2020), the overall global burden of the disease has been significant, with up to nearly 20% mortality in certain geographic/demographic groups (Onder et al., 2020;Wu and McGoogan, 2020). While notable progress has been made in understanding the virology and disease process, the abrupt onset and lack of effective vaccination have made treatment of COVID-19 difficult (Ahmed et al., 2020;Mitja and Clotet, 2020).
Interleukin (IL)-6 is a key regulatory cytokine for the innate and adaptive immune response and is a growth factor for B cell proliferation and differentiation, an inducer of antibody production, and a regulator of CD4 + T cell differentiation (Hunter and Jones, 2015;Jordan et al., 2017). Early data from the COVID-19 outbreak has shown that the complications from the disease are partly due to increases in various cytokines, including IL-6 (Chen et al., 2020;Mehta et al., 2020;Vaninov, 2020;Yang et al., 2020), and that elevated IL-6 levels may be associated with worse outcomes (Chen et al., 2020;Li et al., 2020;Wan et al., 2020). Tocilizumab is an IL-6 receptor antibody, which binds to both the membrane-bound and soluble forms of the IL-6 receptor (IL-6R), thereby inhibiting the action of the cytokine/receptor complex and interfering with the cytokine's effects (Lee et al., 2014). It is a well-studied and accepted therapy for rheumatoid arthritis (Campbell et al., 2011;Singh et al., 2011;Smolen and Aletaha, 2011), and has also been studied in giant cell arteritis (Kwan and Thyparampil, 2020) and organ transplantation (Jordan et al., 2017;Shin et al., 2020). As such, multiple global investigators are currently undertaking clinical trials to further assess the efficacy of Tocilizumab in the treatment of COVID-19 and its complications (ClinicalTrials.gov). Thus far, it has been shown that COVID-19 patient plasma inhibits the expression of HLA-DR, which may be partially restored by Tocilizumab treatment, and that treatment with Tocilizumab may also improve the lymphopenia associated with COVID-19 (Giamarellos-Bourboulis et al., 2020). Preliminary data for Tocilizumab treatment on COVID-19 outcomes has shown improvement in clinical outcomes (Xu et al., 2020). While the clinical effects of Tocilizumab in inflammatory and autoimmune disease have been well-studied, there is a paucity of data on the mechanistic/biologic impact of the drug on our immune system.
Given the current state of the COVID-19 epidemic and possible efficacy of IL-6/IL-6R inhibition with the use of Tocilizumab, we believed a deeper analysis of the mechanistic/biologic effects of Tocilizumab could further elucidate the effects of the drug on our immune system. Herein we present an analysis of the impact of Tocilizumab on immune cells using single-cell RNA sequencing (scRNAseq). We map the response of peripheral blood mononuclear cell (PBMC) subsets to cellular activation using CD3/CD28 stimulation (Luo et al., 2019;Miragaia et al., 2019;Pizzolato et al., 2019;Szabo et al., 2019;Cai et al., 2020). Relevant to understanding the impact of Tocilizumab in suppressing immune activation and inflammation, as seen in the COVID-19 response, we examined the effect of Tocilizumab on stimulated cells, as part of an investigator-initiated clinical trial in kidney transplant (KT) recipients with subclinical rejection (NIAID U01 AI113362-01; https://grantome.com/grant/NIH/U01-AI113362-06). Given that our samples are from transplant recipients with subclinical graft rejection, we believed that by utilizing PBMCs from these patients, we would be looking at cells from an environment that at baseline has an increased inflammatory burden. By further stimulating these cells, we hoped to best recreate an in vitro model to represent the presence of a cytokine storm. We provide a resource characterizing the effect of Tocilizumab on immune cells at the single-cell level, and demonstrate the unique and unexpected impact of Tocilizumab on monocytes, and how its effect on suppressing inflammation may be further augmented based on the resting versus activated state of PBMCs before exposing the cells to IL-6R inhibition.
Sample Collection
This study was performed as part of an ancillary study to a randomized controlled clinical trial of 15 KT recipients who were diagnosed with subclinical rejection on their 6-month posttransplant protocol biopsy and randomized to either continue standard-of-care (tacrolimus, mycophenolate, and steroid) immunosuppression (control arm, 8 patients) or standard of care plus Tocilizumab (Tocilizumab treatment arm, 7 patients). There were 10 male and 5 female patients included in the study, with a roughly equal proportion of males/females within each arm (5 of 8 patients in the control arm were male, and 5 of 7 patients in the Tocilizumab arm were male). Patients in the treatment arm were given Tocilizumab at a dose of 8 mg/kg IV every 4 weeks, for a total of 6 doses. Patients in both arms had blood collected at baseline prior to the initiation of Tocilizumab (in the treatment-arm patients) and then at 3, 6, and 12 months after the start of the study, for a total of 4 blood samples for each of the 15 patients. PBMCs were isolated from blood samples by Ficoll-Paque PLUS density gradient centrifugation (GE Healthcare, Chicago, IL, United States) and frozen in fetal bovine serum (Gibco, Waltham, MA, United States) containing 10% (vol/vol) dimethyl sulfoxide (Sigma-Aldrich, St. Louis, MO, United States). Cells were kept frozen and not thawed until the day of the experiment, when they were used directly for in vitro stimulation.
Stimulation With Anti-CD3 and Anti-CD28 Antibodies
Frozen PBMCs were thawed, four vials at a time to ensure maximum cell recovery, in a water bath at 37 °C. Cells were counted using a hemocytometer, split in half, and then adjusted to 2 × 10^5 cells/well and plated in triplicate in multiscreen 96-well plates (Falcon, Corning, NY, United States). Cells were stimulated with soluble anti-CD3 (5 µg/mL; MabTech, Cincinnati, OH, United States) and anti-CD28 antibodies (10 µg/mL; MabTech, Cincinnati, OH, United States) at 37 °C and 5% CO2 for 24 h. Unstimulated PBMCs were incubated under identical conditions to reduce any confounding from incubation conditions other than stimulation. Since all PBMCs were split in half prior to any downstream processing, all samples from control and Tocilizumab-treated patients at all study time points were both stimulated and not stimulated as part of the study design.
Sample Processing
After overnight stimulation/incubation, the cells were harvested and counted using a hemocytometer and acridine orange solution. Any cell suspension with fewer than 25 cells/µL was disqualified from multiplexing due to low cell counts. A total of 90 samples were collected over the 2 days of experiments, with 4 samples disqualified due to low cell counts. Multiplexing cell pools were designed such that no pair of stimulated and unstimulated samples from the same patient were in the same pool and such that no samples from the same collection time point were in the same pool. The same number of cells from each patient and experimental condition was multiplexed into their respective pools to make a final total of 300,000 cells per pool. Any remaining non-pooled cells were resuspended in RNAlater (Thermo-Fisher, West Sacramento, CA, United States) and saved for SNP array. Cell pools were then centrifuged at 400 g for 5 min, and the media was aspirated. The cell pellet was resuspended in a small volume of Wash Buffer (0.4% BSA in 1× PBS), and the suspension was filtered through a 40 µm cell strainer (Falcon, Corning, NY, United States).
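The two pooling rules above can be expressed as a small validity check. This is our own sketch, not the authors' code, and the sample tuple layout is a hypothetical representation of (patient, time point, stimulation status):

```python
def pool_is_valid(samples):
    """samples: list of (patient_id, time_point, stimulated) tuples.

    Enforces the two stated design rules: no stimulated/unstimulated
    pair from the same patient in one pool, and no two samples from
    the same collection time point in one pool."""
    time_points = [t for _, t, _ in samples]
    if len(time_points) != len(set(time_points)):
        return False  # two samples share a collection time point
    conditions = {}
    for patient, _, stimulated in samples:
        conditions.setdefault(patient, set()).add(stimulated)
    # a patient may appear only under a single stimulation condition
    return all(len(c) == 1 for c in conditions.values())
```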
Library Construction and Sequencing
scRNA-seq libraries were prepared using the 10× Chromium Single Cell 3′ Reagent Kits v3, according to the manufacturer's instructions. Briefly, the isolated cells were washed once with PBS + 0.04% BSA and resuspended in PBS + 0.04% BSA to a final cell concentration of 1000 cells/µL as determined by hemocytometer. Cells were captured in droplets at a targeted cell recovery of 4000-8000 cells, resulting in estimated multiplet rates of 0.4-5.4%. Following reverse transcription and cell barcoding in droplets, emulsions were broken and cDNA purified using Dynabeads MyOne SILANE (Thermo-Fisher, West Sacramento, CA, United States) followed by PCR amplification (98°C for 3 min; 12-16 cycles of 98°C for 15 s, 67°C for 20 s, 72°C for 1 min; 72°C for 1 min). Amplified cDNA was then used for 3′ gene expression library construction. For gene expression library construction, 2.4-50 ng of amplified cDNA was fragmented and end-repaired, double-sided size selected with SPRIselect beads (Beckman Coulter, West Sacramento, CA, United States), PCR amplified with sample indexing primers (98°C for 45 s; 14-16 cycles of 98°C for 20 s, 54°C for 30 s, 72°C for 20 s; 72°C for 1 min), and double-sided size selected with SPRIselect beads. Pooled cells were loaded in a 10× chip in three replicate wells such that each well contained 50,000 cells. Given the large number of cells and the large number of patient samples, the entire experiment and sequencing was performed in 2 separate batches to prevent cell death during counting. Each day resulted in 4 unique pools, with each pool run in triplicate wells for sequencing. Single-cell RNA libraries were sequenced on an Illumina NovaSeq S2 to a minimum sequencing depth of 50,000 reads/cell using the read lengths 26 bp Read1, 8 bp i7 Index, 91 bp Read2.
Demultiplexing
To assign cells to donors of origin in our multiplexed design, we used the genetic demultiplexing tool freemuxlet and its sample matching script, each being part of the popscle suite of population genetics tools (https://github.com/statgen/popscle). Freemuxlet leverages the genetic polymorphisms present in transcripts and clusters the droplet barcodes to assign each to a given donor (or assign them as doublets between donors). The algorithm returns these droplets with donor assignments and a set of variants per donor. These sets of variants are then matched using genotypic similarity to those from an external genotyping SNP array to determine which patient is which donor.
Due to memory constraints, the freemuxlet algorithm was run in 3 batches, divided by the experimental day and pool of patients processed. We show the distribution of singlets across the batches (Supplementary Figure 1A; bar plots, top) and the genotypic similarity between freemuxlet-annotated donors and patients (Supplementary Figure 1A; heatmaps, bottom). Upon initially examining the data, we noted two inconsistencies between the data and experimental design: (1) two patients (patients 5 and 6, both healthy control patients) had identical genotypes, and (2) patient 2 had very low cell numbers. The first inconsistency is almost certainly due to human error during the sample submission or running of the genotyping array, since these patients were not identical twins nor related in any way. The second inconsistency is likely due to low cell viability or inaccurate cell counting or pooling of patient 2's cells. To rectify the first inconsistency, we recognized the absence of patient 5 in the design of experimental day 2 pools. Because patient 5 was not included in the pools and patient 6 was, and because the patient 5/6 genotype was still detected, we concluded that the genotype assayed in the array is actually patient 6's. Given this, through process of elimination, we were able to assign donor 10's cells in the Day 1 data to patient 5. To rectify the second inconsistency, we opted to input one less donor into the freemuxlet algorithm, such that it would not attempt to cluster patient 2's cells and would instead identify them as ambiguous. We show that these remediations do not change the expected linear relationship between doublet rate and total cell-containing droplets (Supplementary Figure 1B). Apart from those inconsistencies, there was a 1-to-1 mapping of donors to patients, and through those remediations we were able to definitively assign a detected genotype to all detected individuals.
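The donor-to-patient matching by genotypic similarity, with process of elimination for the leftover donor, can be illustrated with a minimal greedy sketch. The scores and IDs below are hypothetical; the actual matching was performed by popscle's sample matching script on SNP-array genotypes.

```python
# Greedy sketch of assigning freemuxlet donors to genotyped patients by similarity.
# similarity: {donor: {patient: score}}; values here are illustrative only.

def match_donors_to_patients(similarity):
    """Assign each donor the most similar unclaimed patient, taking the
    highest-scoring pairs first. Process of elimination falls out naturally:
    once confident pairs are claimed, the leftover donor gets the leftover patient."""
    pairs = sorted(
        ((score, donor, patient)
         for donor, row in similarity.items()
         for patient, score in row.items()),
        reverse=True,  # highest similarity first
    )
    assignment, taken = {}, set()
    for score, donor, patient in pairs:
        if donor not in assignment and patient not in taken:
            assignment[donor] = patient
            taken.add(patient)
    return assignment
```

In the study's case, the patient 5/6 ambiguity was resolved outside any such automatic step, by knowing which patients were absent from the day 2 pool design.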
After assignment of each droplet barcode to patients, droplet barcodes were then filtered to remove doublet droplets containing cells from multiple individuals, and the remaining singlets were analyzed as described below.
Data Analysis
Raw FASTQ files were processed using CellRanger (v 3.0.1) to map reads against human genome 38 as a reference, filter out unexpressed genes, and count barcodes and unique molecular identifiers (UMIs). Subsequent analyses were conducted with Seurat (v 3.1.2) (Butler et al., 2018) in R (v 3.6.2). We compared stimulated control cells to stimulated Tocilizumab-treated cells from 3 to 6 months post-treatment with Tocilizumab. Utilizing Seurat, we first filtered cells to only keep those that had less than 10% mitochondrial genes and cells with numbers of features greater than 200 and less than 2,500. Cells were assigned patient identification based on the freemuxlet output described above, and once patients were identified, additional treatment/stimulation/time metadata could be applied. Given that our experiment was divided over 2 days because of the high number of samples/cells, we applied Seurat's SCTransform function for data integration to account for any possible batch effects from experiment days (Hafemeister and Satija, 2019; Stuart et al., 2019). Once the data was integrated, we continued downstream data processing. We first performed principal component analysis (PCA), then constructed a shared nearest neighbor (SNN) graph, identified clusters with a resolution of 0.75, and finally visualized the cells using uniform manifold approximation and projection (UMAP), per the typical Seurat workflow (Butler et al., 2018). Clustering was achieved by using 15 components from the PCA dimensionality reduction.
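The cell-level QC thresholds above (mitochondrial fraction below 10%, feature counts between 200 and 2,500) amount to a simple per-cell predicate. A sketch in Python rather than Seurat's R, with illustrative values:

```python
# Plain re-expression of the Seurat cell filter described above; the example
# cell records are illustrative, not study data.

def passes_qc(pct_mito, n_features):
    """Keep cells with <10% mitochondrial counts and 200 < n_features < 2500."""
    return pct_mito < 10.0 and 200 < n_features < 2500

cells = [
    {"pct_mito": 3.2, "n_features": 1500},   # kept
    {"pct_mito": 15.0, "n_features": 1200},  # dropped: high mitochondrial fraction
    {"pct_mito": 2.0, "n_features": 120},    # dropped: too few features
    {"pct_mito": 5.0, "n_features": 4000},   # dropped: too many features (possible doublet)
]
kept = [c for c in cells if passes_qc(c["pct_mito"], c["n_features"])]
```

The upper feature bound serves the same purpose as doublet screening: droplets with unusually many detected genes often contain more than one cell.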
To identify cluster-specific markers following the creation of UMAP plots, we utilized normalized RNA counts of all clusters, scaled the data, and performed differential gene expression (DE) testing by applying the Wilcoxon rank sum test using Seurat's FindMarkers function (Butler et al., 2018). We also plotted normalized and scaled gene expression of canonical markers in conjunction with DE testing to determine identities of each cluster. To compare control vs. Tocilizumab-treated cell clusters from specific cell types (such as monocytes, CD4 + T cells, or CD8 + T cells), we once again utilized normalized/scaled RNA counts and performed DE testing with FindMarkers.
To perform pathway analysis (PA) for a given comparison, we filtered for all differentially expressed genes with an adjusted (Bonferroni-corrected) p-value < 0.05, and then selected the top 10 percentile of genes with the highest log-fold changes. These top genes were used to perform the PA utilizing the Reactome database (Fabregat et al., 2017) with the clusterProfiler package (Yu et al., 2012). We also performed analyses of enriched biological processes utilizing the Gene Ontology database for these same groups of cells (Balakrishnan et al., 2013). To perform cell trajectory analysis, we first subset our clusters and cell types of interest from our Seurat workflow, then performed dimensionality reduction and cell ordering with Monocle (v 2.14.0) (Qiu et al., 2017). We were then able to plot specific cells by their trajectory branches based on their pseudotime values assigned by Monocle. DE of individual cell trajectory branches was then performed with Monocle's BEAM (branched expression analysis modeling) function, followed by visualization of these differentially expressed branches with Monocle's heatmap visualization tool.
We then leveraged two publicly available bulk RNA-seq datasets from PBMCs, GSE152418 (Arunachalam et al., 2020), and peripheral blood monocytes, GSE160351 (Brunetta et al., 2020), of COVID-19 patients and healthy individuals. The raw gene counts of the GSE152418 dataset were downloaded and normalized by the variance stabilizing transformation approach using the R package DESeq2 (Love et al., 2014), and the prenormalized gene counts of GSE160351 dataset were downloaded. The Ensembl gene IDs were converted to gene symbols using the R package biomaRt (Durinck et al., 2009). We then filtered these datasets to only include our upregulated control monocyte genes that we obtained using the FindMarkers function as described above. Once we had our gene list of upregulated monocyte genes, we performed unsupervised hierarchical clustering of the above COVID-19 datasets, using the pheatmap package (Kolde, 2012).
RESULTS
The overall experimental design is presented in graphical form (Figure 1). In order to examine the impact of Tocilizumab on the composition and expression of circulating single cells, we compared scRNA-seq data from stimulated PBMCs of control patients (not treated with Tocilizumab) to stimulated cells after 3 to 6 months of Tocilizumab treatment. After filtering cells, a total of 57,737 cells remained for analysis. These cells were put through the analysis pipeline described above (see section "Materials and Methods"). After UMAP projection of cell clusters, there were a total of 21 distinct clusters representing major PBMC groups. Cluster 20 was found to express canonical markers from multiple PBMC cell types, signifying this was likely a cluster of doublets that had not been removed by our previously performed cell filtering. Clusters were then annotated according to canonical cell type markers (Figure 2A), which are also demonstrated as feature plots to show the relative expression amongst the different clusters (Supplementary Figure 2). Cluster 2 expressed markers of CD8 + T cells, and additionally markers of memory T cell expansion (Patil et al., 2018), while clusters 6 and 15 lacked memory cell markers and were therefore identified as naïve CD8 + T cells. Clusters 4 and 5 expressed markers of both CD4 + T cells and memory T cell expansion [S100A4, IL7R (Salek-Ardakani and Croft, 2006; Martin and Badovinac, 2018)]. Clusters 0, 8, and 16 expressed markers of CD4 + T cell activation [TNFRS4, CD69 (Simms and Ellis, 1996)]. Clusters 3 and 17 lacked CD3D expression, but expressed GNLY (Tewary et al., 2010), suggesting they were NK cell clusters, while cluster 10 additionally expressed CD56, suggesting this was a CD56-bright NK cell cluster (Michel et al., 2016). Clusters 11 and 13 expressed CD14, CD16, and LYZ, signifying these were monocyte clusters (Mukherjee et al., 2015; Sampath et al., 2018).
Cluster 18 expressed LAMP3 and was therefore identified as a DC cluster (Yin et al., 2017). Clusters 1 and 19 expressed MS4A1 and were therefore identified as B cells (Zuccolo et al., 2013). This assignment of cell types resulted in our final annotated clusters ( Figure 2B).
After 6 months of treatment with Tocilizumab there is a shift in peripheral blood subset frequencies observed across no treatment (control) vs. treatment (Tocilizumab) groups. In comparison to changes in overall cell types, there was little observed effect on frequencies of naïve CD4 + /CD8 + T cells, DC, or NK cells, but with a marked reduction of activated CD4 + T cells (approximately 12.5% of control PBMCs were activated CD4 + T cells, while there were essentially no activated CD4 + T cells in the Tocilizumab group, Figure 2C). Feature plots showing the expression of "cytokine storm" (Tisoncik et al., 2012) related pro-inflammatory genes are cell-type specific, with predominance for expression in T cell and monocyte clusters (Figure 2D). Although many genes are known to be involved in the cytokine storm of COVID-19 (Chua et al., 2020; Ye et al., 2020), we demonstrate that some of the key proinflammatory genes (cytokines, interferons, and tumor necrosis factor) are also noted as part of the inflammatory profile in control (no Tocilizumab) patients (Figure 2D, control cells). Overall, stimulated PBMCs not exposed to Tocilizumab show T cell activation signals. Within these different cell subsets, Tocilizumab therapy results in significant polarization of gene expression based on UMAP presentation (Figure 2E), with notable polarization by treatment status observed in monocytes. Because our analysis focused on an in vitro cytokine storm that was represented by CD3/CD28 stimulation, we did not focus our analysis on unstimulated cells. Of note, when we did look at unstimulated Tocilizumab-treated vs. control cells, we did not observe the same notable polarization or differential expression of genes seen between different cell types in the stimulated Tocilizumab-treated vs. control cells. We also looked for sex-based differences in cell clustering, and did not find any notable differences based on sex (Supplementary Figure 3).
Given Tocilizumab's function as an IL-6R blocker, we looked at the expression of IL6, IL6R, as well as SOCS1 [feedback inhibitor of IL-6 signaling, expressed upon IL-6 pathway activation (Prêle et al., 2008)], and PRDM1 [activated by the JAK/STAT3 pathway via activation of the IL-6 pathway (Garbers et al., 2015; Liu et al., 2019)] in Tocilizumab-treated cells (Figure 2F). Tocilizumab treatment resulted in the expected reduction of IL6R, SOCS1, and PRDM1 expression in CD4 + and CD8 + T cells, and unexpectedly also in monocytes. IL6 expression did not appear to be affected by Tocilizumab treatment.
We then looked at the top 30 most differentially expressed genes (highest log2-fold changes) for control vs. Tocilizumab cells to create heatmaps of gene expression amongst all cells (Figure 3A), CD4 + T cells (Figure 3B), CD8 + T cells (Figure 3C), and monocytes (Figure 3D). We then took the top tenth percentile of genes with the highest log2-fold changes and performed corresponding PA for these genes utilizing the Reactome database. PA showed enrichment of inflammatory pathways such as IL and TNF signaling amongst control cells. Looking at the most differentially expressed genes (highest log2-fold changes) for control vs. Tocilizumab monocytes (Figure 3D), we saw some notable differences, as would be expected. Control monocytes were enriched in chemokines such as CXCL9, various HLA genes involved in antigen processing (Yamamoto et al., 2020) (HLA-DQB1, HLA-DRB5), CD40 [member of the TNF-receptor superfamily (Martínez et al., 2020)], and SOCS1 [downstream gene activated by the IL-6R pathway, as previously discussed (Prêle et al., 2008)]. PA revealed enrichment of many inflammation-related pathways, including interferon, interleukin, T cell receptor (TCR), and PD-1 signaling in control PBMCs, suggesting the relative suppression of these pathways in cells exposed to Tocilizumab (Figure 3A). In CD4 + and CD8 + T cells, we also found enrichment of inflammatory pathways (Figures 3B,C), such as inflammasomes and interleukin signaling, although the overall number of enriched pathways was fewer than seen amongst monocytes. Enriched pathways for these cell types are also shown in table form (Supplementary Figure 4). Additionally, we looked at enriched biological process pathways in CD4 + T cells, CD8 + T cells, and monocytes. We found that across all cell subsets, there was a propensity for enrichment of inflammation-related processes in control vs. Tocilizumab-treated cells (Supplementary Figure 5).
In addition to the effect of Tocilizumab on T cells, we also observed an unexpected polarization of monocytes after Tocilizumab treatment ( Figure 2E). Notably, the Tocilizumab monocyte cluster was enriched for CD14, suggestive of an increased presence of classical monocytes (Mukherjee et al., 2015), while CD16/FCGR3A expression was more evenly expressed between the two clusters ( Figure 4A). The 2,000 most highly variable features amongst the monocytes in our dataset were then utilized to perform a cell trajectory analysis. These monocyte features were input into the Monocle pipeline to create cell trajectories, and annotated based on treatment status. This revealed six distinct cell trajectory branches, with two of the branches containing nearly all control cells, and the other four branches containing nearly all Tocilizumab-exposed PBMCs ( Figure 4B). As Monocle tracks changes as a function of progress along the trajectory, the distinct branches containing nearly all control cells vs. Tocilizumab-treated cells, supports the idea that there are unique transcriptional changes amongst the cells after patient exposure to IL6-R blockade. We utilized Monocle's BEAM function to perform branched expression analysis modeling of the distinct cell trajectory branches for Tocilizumab-exposed PBMCs (circled branch, Figure 4B), which showed distinct clusters of cells based on treatment status ( Figure 4C).
Finally, we mapped the upregulated control cell monocyte genes to COVID-19 bulk RNA-seq gene expression data from PBMCs (Arunachalam et al., 2020) and monocytes (Brunetta et al., 2020) and visualized results with heatmaps created using unsupervised hierarchical clustering. We found that our upregulated monocyte gene list, when applied to COVID-19 patients, showed nearly perfect clustering when applied to the PBMC dataset (Supplementary Figure 6A), and perfect clustering when applied to the monocyte dataset (Supplementary Figure 6B), with regards to whether or not patients had COVID-19, or were healthy.
DISCUSSION
The results of this study showed that in PBMCs undergoing a cytokine storm signal in rejection (Sarwal et al., 2003), with overlapping signatures of IFNG, CCL3, and TNF expression, along with TCR signaling also seen in the cytokine storm of COVID-19 (Chua et al., 2020; Ye et al., 2020), there is suppression of these inflammatory pathways after Tocilizumab treatment. This includes suppression of downstream signaling of IL6-R pathway genes in both monocytes and T cells. Our study was focused on the simulation of an in vitro cytokine storm model by CD3/CD28 stimulation of PBMCs that were either Tocilizumab-treated or control cells. While the findings here describe stimulated cells, it is worth noting that we did not observe any notable polarization of cells, or significant differential gene expression of identical cell types based on treatment status, when looking just at unstimulated cells. This suggests that it was under stimulated conditions that the effects of Tocilizumab treatment on PBMCs were most notable.
Monocytes have been shown to play a significant role in the pathophysiology of COVID-19 (Merad and Martin, 2020).
A significant expansion of populations of monocytes producing IL-6 has been observed in the peripheral blood of patients with COVID-19 in ICUs compared with those patients who did not require ICU hospitalization, with similar findings of increased IL-6 production from monocytes also seen by scRNA-seq analysis of PBMCs (Wen et al., 2020). With regards to Tocilizumab treatment and COVID-19, multiple centers have found that Tocilizumab treatment has been associated with improved outcomes, and that measured IL-6 tended to decrease after Tocilizumab treatment in patients with improved outcomes, while IL-6 tended to increase in those with worse outcomes (Xu et al., 2020). This suggests that Tocilizumab may in fact counteract the cytokine storm seen in COVID-19 by decreasing activity of IL-6. Guo et al. performed a single-cell analysis of two patients with severe COVID-19 pre- and post-treatment with Tocilizumab, looking at differences in gene and pathway enrichment amongst monocytes. Interestingly, the authors found enrichment of genes related to regulation of the acute inflammatory response, regulation of leukocyte activation, cell chemotaxis, and the cellular response to chemokines in severe-stage COVID-19 patients compared to remission-stage patients and healthy controls, suggesting that the inflammatory storm caused by monocytes is suppressed by Tocilizumab treatment. Our findings were similar to those of Guo et al. in that we have an enrichment of similar inflammation-mediated pathways amongst control cells that had not received Tocilizumab. Our findings are from the first clinical trial utilizing Tocilizumab for transplant rejection recipients and the first scRNA-seq analysis for such a study. We show a separation of cell clustering based on treatment status, reduced enrichment of inflammatory pathways in Tocilizumab patients, and relatively reduced expression of IL-6R pathway genes in Tocilizumab-treated cells.
As would be expected, we did not observe any differences in IL-6 gene expression between control and Tocilizumab cells (as Tocilizumab is an IL-6R blocker), but rather only effects on the subsequent function of that cytokine's pathways. We also show an enrichment of CD14 expression (associated with classical monocytes) in Tocilizumab-treated monocytes, which are believed to be phagocytic, but with reduced inflammatory attributes (Mukherjee et al., 2015). This is consistent with our PA described above that shows enrichment of inflammatory pathways in control cells, but not Tocilizumab-treated cells (possibly due to the increased presence of non-inflammatory classical monocytes in Tocilizumab-treated cells).
Interestingly, when we utilized our upregulated genes from control cells in monocytes as the gene list for performing unsupervised hierarchical clustering of gene expression data from COVID-19 PBMCs and monocytes, we saw both perfect and near perfect clustering based on patient phenotype. This suggests that the inflammatory pathway genes that are upregulated in stimulated control cells from our study, may in fact be representative of some of the same genes that are affected by COVID-19 infection.
Our study is limited by the lack of COVID-19 patients and the in vitro nature of our inflammation model. Our goal was to better understand the biologic effects of Tocilizumab and its impact on inflammation, and while our group does not have single-cell data for Tocilizumab treatment in COVID-19 patients, we demonstrate the anti-inflammatory effects of Tocilizumab. These data show promise that in this in vitro model, Tocilizumab does have anti-inflammatory effects which may be of clinical and biologic interest in actual COVID-19 patients. Despite these limitations, we believe that our findings are transferable. Specifically, the finding that the gene list of upregulated monocyte genes from control cells leads to perfect clustering of COVID-19 vs. healthy patients is suggestive of a common pathway of inflammatory genes. In addition, this was seen in both COVID-infected PBMCs and monocytes, suggesting that even across different cell types, there may be a set of common upregulated genes in inflammation in COVID. Interestingly, single-cell analyses of COVID monocytes have shown inflammatory gene signatures such as increased expression of IFNG, similar to what we saw in our control cells (Schulte-Schrepping et al., 2020). Our future work may include single-cell analysis of COVID-19 transplant patients that have received Tocilizumab for treatment, which would help us to better understand these biologic mechanisms in actual COVID-19 patients.
Our findings, in conjunction with the available data on clinical outcomes of Tocilizumab treatment and ongoing trials, show promise for the use of Tocilizumab in the treatment of patients with COVID-19. The results of our study support the belief that Tocilizumab may be effective in reducing the inflammatory burden that results in the adverse outcomes of COVID-19. Future studies will need to be undertaken to look at outcomes of Tocilizumab treatment for COVID-19 in a clinical trial setting, ideally in conjunction with scRNA-seq analysis of these patients' blood samples to achieve a greater understanding of the transcriptomic effects of infection and treatment at a single-cell level.
DATA AVAILABILITY STATEMENT
The data presented in the study are deposited in the GEO repository, accession number GSE163014 (https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE163014).
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by UCSF IRB. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
AZ was responsible for data cleaning, analysis, interpretation, and manuscript writing. GH was responsible for data cleaning, analysis, and methodology. DR was responsible for data analysis and interpretation. PR was responsible for experimental design and performance of experiments. SC was responsible for clinical data and clinical trial design. FV was responsible for clinical data and clinical trial design. CY was responsible for study design, review, oversight of analysis, and manuscript edits. MS was responsible for study design, review, oversight of analysis, and manuscript edits. All authors contributed to the article and approved the submitted version.
FUNDING
This research was funded as an ancillary study of a CTOT21 grant (5U01AI113362-07) for an investigator-initiated clinical trial in kidney transplant recipients with subclinical rejection. Additionally, funding was provided through HIPC (5U19AI128913-03) and R01 funding (5R01DK109720-04). Additionally, AZ was funded by an NIH T32 training grant (T32 AI 125222).

Supplementary Figure 5 | (A) Enriched biological processes based on the Gene Ontology database for CD4 + T cells. (B) Enriched biological processes based on the Gene Ontology database for CD8 + T cells. (C) Enriched biological processes based on the Gene Ontology database for monocytes. Legend shows color gradient for adjusted p-values, with red being smaller adjusted p-values and blue being larger adjusted p-values. The x-axis represents the number of genes from the gene list that were a part of that respective biological process pathway.
Supplementary Figure 6 | Heatmaps of COVID-19 and healthy patient gene expression. (A) Unsupervised hierarchical clustering of gene expression data from bulk RNA-seq of COVID-19 infected and healthy patient PBMCs, using only upregulated genes from control monocytes from our single-cell study; near perfect clustering is seen between COVID-19 and healthy patients; additional phenotypes included are patient sex and disease severity. (B) Unsupervised hierarchical clustering of gene expression data from bulk RNA-seq of COVID-19 infected and healthy patient monocytes, using only upregulated genes from control monocytes from our single-cell study; perfect clustering is seen between COVID-19 and healthy patients; relative gene expression is represented as a color gradient with higher gene expression represented in red and lower gene expression represented as blue.
"year": 2021,
"sha1": "fc57e2a5ea990b1ce04f07fac1db8048eeee372e",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fgene.2020.610682/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fc57e2a5ea990b1ce04f07fac1db8048eeee372e",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Uncovering the Rare Variants of DLC1 Isoform 1 and Their Functional Effects in a Chinese Sporadic Congenital Heart Disease Cohort
Congenital heart disease (CHD) is the most common birth defect affecting the structure and function of fetal hearts. Despite decades of extensive studies, the genetic mechanism of sporadic CHD remains obscure. The deleted in liver cancer 1 (DLC1) gene, encoding a GTPase-activating protein, is highly expressed in the heart and, based on studies of Dlc1-deficient mice, is essential for heart development. To determine whether DLC1 is a susceptibility gene for sporadic CHD, we sequenced the coding region of DLC1 isoform 1 in 151 sporadic CHD patients and identified 13 non-synonymous rare variants (including 6 private variants) in the case cohort. Importantly, these rare variants (8/13) were enriched in the N-terminal region of the DLC1 isoform 1 protein. Seven of the eight amino acids at the N-terminal variant positions were conserved among primates. Among the 9 rare variants that were predicted as "damaging", five were located in the N-terminal region. Ensuing in vitro functional assays showed that three private variants (Met360Lys, Glu418Lys and Asp554Val) impaired the ability of DLC1 to inhibit cell migration or altered the subcellular location of the protein compared to wild-type DLC1 isoform 1. These data suggest that DLC1 might act as a CHD-associated gene in addition to its role as a tumor suppressor in cancer.
Introduction
Congenital heart disease (CHD) presents a variety of structural malformations of the heart or great vessels at birth, constituting a major cause of birth defect-related deaths [1]. Although decades of research have revealed that both environmental and genetic factors contribute to the etiology of CHD, increasing evidence supports an important role of a genetic predisposition to the disease [1][2][3][4]. Indeed, many disease-causing genes, which follow Mendelian patterns of inheritance (e.g., TBX5, JAG1, NKX2-5, GATA4, NOTCH1), have been identified by pedigree analysis [5][6][7][8][9][10]; however, the genetic mechanism of most sporadic CHD cases remains elusive [11].
In our previous mutational screen in a Chinese sporadic CHD cohort, low-coverage (100×) exome sequencing of 18 pooled samples identified a splice-site mutation (chr8:13072284, C>G, reference assembly: hg19) of the deleted in liver cancer 1 (DLC1) gene in a patient who has atrial septal defect (ASD). This variant is not recorded in The 1000 Genomes Project database or the dbSNP 137 database and, after validation assays, was found to be absent in 800 control samples, suggesting that this splice-site mutation is unique to the CHD cohort (unpublished data).
DLC1, which encodes a GTPase-activating protein, is considered to be a tumor suppressor gene in several types of tumors (e.g., primary hepatocellular carcinoma, breast cancer, prostate cancer, non-small cell lung carcinoma and meningioma tumors) [12][13][14][15][16][17][18]. The migration and proliferation of some tumor cells are reported to be inhibited by DLC1 [19][20][21][22]. DLC1 can interact with tensin family proteins [23,24] and is localized to focal adhesions [25], which together indicate that DLC1 is essential for the cytoskeletal organization and morphology of cells. Interestingly, Dlc1-/- mice are embryonic lethal, and histologically, the heart is incompletely developed with a distorted architecture of the chambers [26]. Another study reported that Dlc1 homozygous gene-trapped mice demonstrated abnormalities in the embryonic heart and blood vasculature of the yolk sac [27]. These results, which were derived from observations of knockout mice, unequivocally prove that DLC1 is of paramount importance to the developmental events occurring in the embryonic heart.
The human DLC1 gene encodes four transcript variants: isoforms 1-4 encode protein products of 1528 aa, 1091 aa, 463 aa and 1017 aa, respectively. Although there have been numerous investigations focused on characterizing the multi-faceted function of DLC1 isoform 2, the properties of the other isoforms remain unclear. In particular, DLC1 isoform 1, the longest isoform of the DLC1 gene (NCBI Reference Sequence: NM_182643.2), is abundantly expressed in human heart tissues [28].
The evidence described above logically leads to the hypothesis that, in addition to its role as a tumor suppressor in cancer, DLC1 might play another role in the pathogenesis of CHD. Therefore, to verify the rare variant frequency of DLC1 isoform 1 in a CHD cohort, we sequenced the coding regions and intron boundaries of DLC1 isoform 1 in 151 CHD patients (not including the initial screening CHD cohort of our previous work). Functional experiments were then performed to determine the consequences of the identified mutations.
Ethics statement
Written informed consent for the genetic analysis was obtained from all the subjects who participated in this study, and the research was approved by the ethics committee at the Institute of Health Sciences, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences.
Sample preparation
A total of 151 patients with congenital heart disease were enrolled in the study at the First Hospital of Hebei Medical University. All the subjects were examined by experienced cardiologists, and the cardiac phenotypes were determined using standard transthoracic echocardiography and other tests according to the ICD-10 diagnostic criteria (Table S1 in File S1). The patients' basic medical situation and family history were recorded. The karyotypes of all patients were examined; with the exception of three individuals with trisomy 21, all others were normal. Most of the patients did not have extra-cardiac manifestations, except the three individuals with Down syndrome. Most of the patients had undergone cardiac catheterization or surgery. Control blood samples (n = 500) were collected from normal individuals without CHD recruited in Hebei and Shanghai. Genomic DNA was extracted from peripheral blood using QIAamp DNA Blood Mini Kits.
Mutational analysis
The exons and portions of the 5′UTR and 3′UTR regions of DLC1 isoform 1 were amplified using the primers shown in Table S2 in File S1. The PCR products were then purified using ExoSAP-IT reagent (USB) and sequenced with an ABI 3730 Genetic Analyzer. The results were analyzed using the ABI software suite, and the identified variants were re-sequenced and validated.
Mutation simulation
The method of O'Roak et al. [29] was used to calculate the mutation weight of each base of the DLC1 isoform 1 coding sequence. Because the simulation only focused on the DLC1 gene, the locus-specific substitution rate was not considered. The weight for each base and each substitution was thus computed from two factors: W_n, which measures the nucleotide-specific substitution rates and takes two values according to base composition [30], and W_s, which represents the relative transition or transversion substitution rates [31]. We mutated each base to the other three bases and predicted the class of mutation (i.e., synonymous, missense or nonsense) that would be introduced. For convenience, only the missense and nonsense classes were considered, and the mutation weight of each base for these classes was obtained from the weights of the corresponding substitutions. To address whether the cluster of mutations we observed could be expected by chance, after the common SNP sites were eliminated from the coding sequence, 13 non-synonymous rare mutations were randomly introduced into the gene according to the mutation weights in each simulation. We then recorded how often the number of mutations residing within the identical range of our cluster was larger than or equal to 8. The range of the cluster was defined as 639 bp (the length from substitution Ala220Val to Thr433Asn in the coding sequence). The significance was estimated as P = (n + 1)/(m + 1), where n is the number of instances in which the randomized number was greater than or equal to the observed number and m is the number of randomizations (we employed m = 1,000,000). In this way we could estimate the probability of an identical cluster occurring by chance.
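The randomization test described above can be sketched in a few lines. This is an illustrative re-implementation, not the authors' code: the function name is mine, I assume a sliding 639-bp window anchored at each simulated hit (the text does not say whether the window position was fixed), and the caller must supply the per-base missense/nonsense weights derived from W_n and W_s.

```python
import random

def cluster_p_value(weights, n_mutations=13, window=639,
                    observed=8, m=10000, seed=1):
    """Monte Carlo estimate of how often >= `observed` of `n_mutations`
    weighted random coding positions fall within some `window`-bp span.
    Significance is P = (n + 1) / (m + 1), as in the text."""
    rng = random.Random(seed)
    positions = range(len(weights))
    n = 0
    for _ in range(m):
        # draw mutation positions with the precomputed per-base weights
        hits = sorted(rng.choices(positions, weights=weights, k=n_mutations))
        # densest window anchored at any simulated hit
        densest = max(sum(1 for h in hits if lo <= h < lo + window)
                      for lo in hits)
        if densest >= observed:
            n += 1
    return (n + 1) / (m + 1)
```

With uniform weights over a coding sequence of the length of isoform 1 (1528 aa × 3 bp), a cluster of 8 mutations in 639 bp is already rare; the real weights only shift this estimate.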
Plasmids construction
The wild-type DLC1 isoform 1 expression plasmid was purchased from OpenBiosystems. Seven missense mutants of DLC1 isoform 1 (Ala350Thr, Met360Lys, Leu413Met, Glu418Lys, Asp554Val, Leu952Val and Val1371Leu) were generated by site-directed mutagenesis. The wild-type DLC1 isoform 1 and these mutants were cloned into the pEGFP(N1) plasmid, and the DLC1-GFP fusion constructs were transferred into the retroviral plasmid pBabe-puro.
Transwell migration assay
To test the effects of the DLC1 wild-type and mutant proteins on cell migration, pBabe-puro overexpression plasmids were transfected into the amphotropic Phenix packaging cell line, and the viruses were collected as previously described [32]. When the cells (HUVEC or HBMEC-60) grew to 30–40% confluency, the culture medium was replaced with a 1:1 mixture of fresh medium and the above virus-containing medium in the presence of 5 μg/mL polybrene for infection, and this operation was repeated every 24 h until the infection rate of the target cells reached ~80%, as judged by GFP-positive cells. After infection, 10⁵ infected endothelial cells were resuspended in fresh media containing 0.5% serum, and the cells were seeded in inserts (Costar) containing 8 μm pores. These inserts were placed in Transwell cartridges that contained 300 μL of medium with 10% FBS in the bottom wells. At 24 h after seeding, the medium was aspirated, and 350 μL of trypsin was added into the wells to trypsinize the cells that had passed through the pores. After serum neutralization of the trypsin, the trypsinized cells were centrifuged for 4 min at 1000 rpm, resuspended in 100 μL phosphate-buffered saline (PBS) and counted using a hemocytometer.
Proliferation assay
When the virus infection rate reached ~80%, 5×10⁴ infected cells were seeded. After 2 days, the resulting cells were trypsinized and counted using a hemocytometer. Then, 5×10⁴ of these cells were reseeded for another round of counting. The process was repeated for at least three cycles.
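The serial reseeding above measures growth as a product of per-cycle fold changes; a minimal helper sketching that arithmetic (my illustration with made-up numbers, not the authors' analysis code):

```python
def cumulative_fold_expansion(seeded, counts):
    """Overall expansion across serial passages: `seeded` cells are plated
    each cycle and counts[i] cells are recovered after cycle i, so the
    total fold change is the product of the per-cycle ratios."""
    fold = 1.0
    for c in counts:
        fold *= c / seeded
    return fold
```

For example, recovering 1×10⁵ cells from 5×10⁴ seeded, over three cycles, corresponds to an eight-fold cumulative expansion.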
Active rho assay
Cells at 80% confluence were gently rinsed once with ice-cold Tris-buffered saline (TBS) and lysed. The lysate was centrifuged at 16,000×g at 4 °C for 15 min, and the supernatant was subjected to active Rho purification and detection with the Active Rho Kit (Pierce, Cat No. 16116) according to the manufacturer's protocol.
Stress fiber staining and DLC1 subcellular localization
When the cells reached 40% confluence, they were transfected with pEGFP(N1) plasmids harboring DLC1 wild-type or mutant cDNA. After 24 h, the cells were fixed with 10% formalin for 15 min, permeabilized with 0.1% Triton X-100 for 10 min and stained with 5 units/mL rhodamine phalloidin (Invitrogen) for 20 min. The stained cells were imaged using a laser confocal microscope. A total of 100 randomly selected transfected cells in each sample were assessed for the subcellular localization of the DLC1-GFP fusion protein. The selected cells were also assessed for the percentage of cells with visible stress fibers, as previously described [33].
Angiogenesis (tube-formation) assay
A total of 5×10⁴ cells infected with DLC1-expressing viruses were suspended in 300 μL of DMEM supplemented with 10% FBS and 10 ng/mL FGF (Invitrogen). The cell suspension was seeded on 300 μL of pregelled Matrigel (10.8 mg/mL, Becton, Dickinson and Company). After 24 h, 10 microscopic fields were randomly selected for each well. Angiogenesis in each well was determined by counting the branch points of the formed tubes, as previously described [34].
Apoptosis assay
Cell apoptosis analysis was performed using an Apoptosis Assay Kit (Keygen Biotech) according to the manufacturer's instructions. Briefly, 1×10⁶ cells infected with virus expressing wild-type or mutant DLC1 were trypsinized and resuspended in 500 μL of 1× binding buffer. Then, fluorochrome-conjugated Annexin V was added to the cell suspension and incubated for 10 min at room temperature, followed by incubation with 5 μL of 7-AAD viability staining solution for 10 min at room temperature. The cells were then subjected to flow cytometry using a FACSAria (BD Biosciences).
Results
Identification of rare variants in the DLC1 gene of CHD patients DLC1 isoform 1 contains 18 exons and spans 431,558 base pairs (bp). Each exon of DLC1 isoform 1 was amplified from the genomic DNA of 151 CHD patients, and the PCR products were then sequenced by Sanger sequencing. After eliminating the common single-nucleotide polymorphisms (SNPs) (SNPs with minor allele frequency ≥1%) found in the dbSNP database, 13 rare non-synonymous variants were identified. One of these variants was found in 2 patients, and each of the remaining 12 variants was found in 1 patient. We then assessed the frequency of these rare variants in the control cohort by Sanger sequencing of the corresponding sites in 500 normal samples. These data were combined with an additional exome sequencing dataset of 400 individuals (average depth 60×) (G.N., unpublished data) to widen the control cohort to 900 individuals. Consequently, only 3 rare variants identified in the CHD cohort were also found in the controls. In addition, 6 of the 13 variants were SNPs with very low frequency recorded in dbSNP build 137 (Table 1). Altogether, we identified 6 private variants that were absent in 900 controls and the dbSNP database (Table 1, Fig. 1A). The clinical information of the 14 patients who carried these rare variants of DLC1 was reviewed, and ten of the fourteen patients had septal defects. We also reviewed the health status of the parents of these patients, and none of them had cardiac defects. Unfortunately, we could not obtain blood samples from these parents, because the patients had come to the hospital years earlier and we had lost touch with these families.
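The filtering that leads from all detected variants to the "rare" and "private" sets can be sketched as follows. This is an illustrative re-implementation of the logic described in the text, not the study's pipeline; the function and variable names are mine.

```python
def classify_variants(case_variants, dbsnp_maf, control_alleles):
    """Variants with dbSNP minor allele frequency >= 1% are discarded as
    common; the rest are 'rare'; a rare variant is 'private' when it is
    absent from both the control cohort and dbSNP."""
    rare, private = [], []
    for v in case_variants:
        maf = dbsnp_maf.get(v)            # None if the site is not in dbSNP
        if maf is not None and maf >= 0.01:
            continue                       # common SNP, eliminated
        rare.append(v)
        if v not in control_alleles and maf is None:
            private.append(v)
    return rare, private
```

Applied to the study's numbers, this logic keeps 13 rare variants, of which 6 are private (absent from 900 controls and dbSNP).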
DLC1 rare variants cluster in the N-terminus of the protein Compared to DLC1 isoform 2, which is the most studied isoform, the coding product of isoform 1 has an N-terminal end of 447 amino acids prior to the SAM domain (an extended region of 437 amino acids plus 10 amino acids that differ from the corresponding parts of DLC1 isoform 2) (Fig. 1B). Although several domains have been identified in the DLC1 protein, the function of the N-terminus is still undefined. Interestingly, 8 (61.5%) of the amino acid-altering variants identified in sporadic CHD were located in this region (Fig. 1A). To evaluate the rare variant frequency of this region in other populations, the rare variant information for DLC1 in the 1000 Genomes Project [35] and the Exome Sequencing Project [31] was collected and analyzed (sample size = 7592). As described above, we defined amino acids 1-447 as the N-terminal region and found that 60 (29.6%) of the 203 rare protein-altering variants were localized in this region (Table S3 in File S1). Consequently, Fisher's exact test (two-tailed) showed that, compared to the variants found in the 1000 Genomes Project and the Exome Sequencing Project, the rare variants identified in our CHD cohort significantly clustered at the N-terminus (P = 0.027), revealing that this region might be a disease-associated mutation hot spot. We then used the method of O'Roak et al. [29] to measure the mutation weight of each base of the DLC1 isoform 1 coding sequence. Subsequently, 13 missense or nonsense mutations were randomly introduced into the gene in a simulation according to the mutation weights. After one million simulations, we found that the probability of a mutation enrichment similar to the observed one (at least 8 mutations in a range of 639 bp) was very low (P = 0.004), indicating that the mutation cluster in the case cohort is unlikely to have arisen by chance.
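The 2×2 comparison behind the clustering claim (8 of 13 case variants vs 60 of 203 reference variants in the N-terminal region) can be checked with a hypergeometric tail computed from the standard library. This sketch gives the one-sided enrichment P-value, i.e. the upper tail of Fisher's exact test; the paper quotes the two-tailed value P = 0.027, so the numbers are close but not identical.

```python
from math import comb

def hypergeom_enrichment_p(k, K, n, N):
    """Upper-tail P(X >= k): probability that at least k of n case
    variants fall in the N-terminal region when K of all N variants
    lie there (one-sided Fisher's exact test)."""
    denom = comb(N, n)
    return sum(comb(K, x) * comb(N - K, n - x)
               for x in range(k, min(K, n) + 1)) / denom

# 8/13 case variants vs 60/203 reference variants in the N-terminus:
# K = 8 + 60 = 68 N-terminal variants out of N = 13 + 203 = 216 total
p = hypergeom_enrichment_p(k=8, K=68, n=13, N=216)
```

The one-sided value also comes out well below 0.05, consistent with the reported enrichment.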
Most rare variants are predicted to be deleterious
We then BLAST-searched the N-terminal sequence in the UniProt database and aligned the homologous sequences [36]. The alignment showed that seven of the eight amino acids at the N-terminal variant positions were conserved among the primates; notably, Arg351, Met360 and Leu413 were conserved in both primates and non-primates (Fig. 1C). SIFT scores were also calculated to predict the effects of the rare variants on protein function [37] (Table 1, Table S3 in File S1). Among the 9 rare variants in the case cohort that were predicted to be ''damaging'' (SIFT score < 0.05), 5 were located in the N-terminal region. Of the other five rare variants outside the N-terminal region, three amino acid substitutions lay in the region between the sterile alpha motif (SAM) and Rho-GTPase-activating protein (GAP) domains, but none in the focal adhesion targeting region [38,39]. The other two amino acid substitutions (Val1371Leu and Ile1511Met) were located in the steroidogenic acute regulatory protein-related lipid transfer (START) domain. All of these substitutions were predicted to be deleterious except the c.1683C>A substitution (Table 1). We also evaluated the effects of the 13 rare variants found in the case cohort with multiple prediction methods (PolyPhen-2, LRT, Mutation Taster, etc.), and the predictions from PolyPhen-2 were similar to the SIFT results (Table S4 in File S1).
Three mutations affect the role of DLC1 in cell migration
To study whether the rare variants identified in the CHD cohort affect the protein function of DLC1, we cloned 7 of the variants, including 4 private variants and 3 other rare variants, by introducing the point mutations into the wild-type DLC1 isoform 1. These variants are as follows: Mutant 1, Ala350Thr; Mutant 2, Met360Lys; Mutant 3, Leu413Met; Mutant 4, Glu418Lys; Mutant 5, Asp554Val; Mutant 6, Leu952Val; and Mutant 7, Val1371Leu. These seven variants were selected because they were absent in 900 control samples (altogether 10 rare variants were absent in 900 control samples, but mutant vectors for Gly266Glu, Thr433Asn and Ile1511Met could not be constructed for technical reasons). Cell migration inhibition is one of the most studied functions of DLC1. However, most studies have focused on isoform 2 of DLC1 (1091 aa), and the effect of isoform 1 and its mutants on cell migration has not been reported. Therefore, we assessed the effects of DLC1 isoform 1 and its mutants on migration in human umbilical vein endothelial cells (HUVEC) and human bone marrow endothelial cells 60 (HBMEC-60), two cell lines widely used in cardiovascular disease studies. The wild-type isoform 1, Mutants 1-7, and the control vector were transfected into HUVEC and HBMEC-60 cells (Fig. 2A), followed by transwell migration assays to analyze the migratory abilities of the cells. As shown in Figure 2, DLC1 isoform 1 suppressed the migration of HUVEC and HBMEC-60 cells in vitro. Mutants 2, 4 and 5 (Fig. 1D), which either changed the polarity (Met360 and Asp554) or altered the electric charge (Glu418) of the amino acids, rescued the migration suppression by the wild-type DLC1 protein, as the migration of the cells transfected with these mutants was similar to that of the control cells. The other mutants did not differ significantly from the wild type in suppressing cell migration (Fig. 2B, 2C).
In addition, the migration rescue effect of Mutants 2, 4 and 5 could not be accounted for by their effect on cell proliferation, because the mutants and the wild-type protein similarly suppressed the growth of endothelial cells (Fig. S1 in File S1).
The Glu418Lys mutant changes subcellular localization of DLC1
DLC1 is an inhibitor of small GTPases including RhoA/B/C and CDC42. This inhibitory effect is thought to be mainly mediated by the GAP domain of DLC1. Interestingly, none of the variants identified in CHD lay within the GAP domain. Since a recent study reported that protein sequences outside of the GAP domain may also affect the Rho-inhibiting activity of DLC1 [40], we studied whether the CHD variants affect the GAP activity of DLC1. All the mutants and the wild-type protein efficiently suppressed the activation of RhoA (Fig. 2A). We then asked whether the small GTPases in the endothelial cells were regulated by DLC1 in situ by analyzing the formation of stress fibers, a process that is regulated by Rho activities. The DLC1 constructs were tagged with GFP, and stress fiber formation was analyzed with the high-affinity F-actin probe rhodamine phalloidin. The data showed that when the wild-type and mutant DLC1 were expressed in the endothelial cells, the formation of stress fibers was prevented to similar levels (Fig. 3A, Fig. S3 in File S1).
Although the variants in DLC1 did not lead to any difference in the regulation of the endothelial cytoskeleton, we observed that Mutant 4 (Glu418Lys) markedly altered the localization of the protein in the cells. Fluorescent confocal microscopy revealed that DLC1 isoform 1 was primarily located in the cytoplasm, as were Mutants 1-3 and 5-7. Mutant 4 was found in both the cytoplasm and the nucleus. Whereas the wild type and the other 6 mutant proteins were excluded from the nucleus in 73%-84% of endothelial cells, the Mutant 4 protein was excluded from the nucleus in only 11% of cells, suggesting protein nuclear translocation (PNT) caused by the Glu418Lys substitution (Fig. 3). It was previously reported that PNT occurred in 10% of tumor cells after transfection with DLC1 isoform 2 and was accompanied by morphological changes, and that these cells then progressed to apoptosis [41]. Although no difference was observed in our apoptosis analysis between the cells transfected with Mutant 4 and those transfected with other DLC1 constructs, both the wild-type and mutant DLC1 led to markedly enhanced percentages of apoptotic cells (Fig. S2 in File S1).
Discussion
Congenital heart disease is complex. Although key mutations have been identified by pedigree research, the great heterogeneity of CHD makes it very difficult to identify the responsible genes, particularly among sporadic CHD cohorts. However, disease or deleterious alleles can be rare [42], and rare variants that have obvious functional consequences will show the largest effect size for the disease [43]. Therefore, we focused on the identification of rare variants in a case cohort. We successfully identified 13 rare variants in a sporadic CHD cohort and provide clear evidence that 8 rare variants cluster in the N-terminal region of the protein. However, we should note that the reference variant data from the 1000 Genomes Project and the Exome Sequencing Project were produced by different platforms, mostly next-generation sequencing platforms. The sequencing depth, coverage and data analysis pipelines might affect the variant detection rate, so variant numbers from different platforms might not be directly comparable. We therefore focused on the locations of the rare variants within the protein, an analysis strategy that is less sensitive to such platform differences. More importantly, in our in vitro assays, three private variants (corresponding to Mutants 2, 4 and 5) were shown to alter the ability of DLC1 to inhibit cell migration or the subcellular localization of the protein, which supports the notion that private variants might also play major roles in the pathological process of complex diseases [43]. In addition, the extended N-terminal region of DLC1 isoform 1 harbors 83% (5/6) of the private variants identified in the CHD cohort in a non-random manner. The relatively high transcriptional level of DLC1 isoform 1 in human heart tissues [28] implies that the unique N-terminal region may possess a tissue-specific function in the cardiovascular system. However, future studies are necessary to elucidate the details.
Cell migration is an evolutionarily conserved mechanism that includes four steps: polarization, protrusion, adhesion and retraction [44]. Actin is primarily involved in the last three steps. Studies have confirmed that DLC1 can function in the regulation of actin cytoskeletal organization and cell migration [45], suggesting that DLC1 acts as an important regulator of migration. It is essential for endothelial cells in the outflow tract (OT) and atrioventricular (AV) regions to migrate into the cardiac jelly during embryonic heart development [46]. Similarly, the migration of cardiac neural crest cells is also a crucial event during heart development, and inappropriate timing or paths of cardiac neural crest cell migration will cause congenital cardiac anomalies [47]. Thus, if the migration regulatory ability of DLC1 is impaired in the early stage of fetal cardiac development, it is reasonable to speculate that inaccurate developmental consequences, such as defects or malformations, will occur. Although DLC1 is generally considered to affect cell motility and focal adhesion via the Rho-GAP domain and the focal adhesion targeting region, respectively [38,39,45], the SAM domain has also been reported to regulate cell migration [48]. We demonstrated that three private variants near the SAM domain could reduce the inhibitory effect of wild-type DLC1, suggesting that these mutations might be implicated in regulating the function of the SAM domain.
Although DLC1 isoform 2 has been well studied during the past ten years, the functions of DLC1 isoform 1 still need to be characterized. A series of assays was performed to verify whether DLC1 isoform 1 has a function similar to isoform 2. As shown above, all the mutant and wild-type proteins suppressed Rho (Fig. 2A) and similarly regulated cytoskeleton rearrangement, preventing the formation of stress fibers in the endothelial cells (Fig. 3A, Fig. S3 in File S1). Considering that endocardium formation in the primitive heart tube is affected by vasculogenesis [49], we conducted an angiogenesis assay in vitro, and DLC1 isoform 1 and the mutants had similar inhibitory effects on angiogenesis (Fig. S4 in File S1). Although the mutants showed no difference from the wild-type protein, these negative results only indicate that the variants did not affect these specific features in certain cells. Indeed, the variants might impair the function of DLC1 in other ways or in other cardiac cells. Furthermore, to the best of our knowledge, this is the first report using in vitro assays to demonstrate that DLC1 isoform 1 has a function analogous to isoform 2. In conclusion, our mutational analysis of DLC1 isoform 1 presents a spectrum of rare variants in a CHD cohort and shows a mutation cluster in the N-terminus of the DLC1 protein. Our functional assays show that the ability to inhibit cell migration or the subcellular localization of the protein is altered by three private variants. These findings provide novel insight that DLC1 may be a high-priority candidate gene associated with CHD.
Supporting Information
File S1 Tables S1-S4 and Figures S1-S4. Table S1. The statistics of phenotype information of 148 non-trisomy CHD patients; Table S2. The primers for PCR to amplify the exons and portions of the 5′UTR and 3′UTR regions of DLC1 isoform 1; Table S3. Rare variants of DLC1 isoform 1 identified in the 1000 Genomes Project and the Exome Sequencing Project; Table S4. The effects of 13 rare variants identified in the CHD cohort were predicted using multiple prediction algorithms; Figure S1. Effect of wild-type DLC1 isoform 1 and mutants on HUVEC proliferation; Figure S2. The apoptosis analysis of wild-type DLC1 isoform 1 and mutants in HUVECs; Figure S3. Percentage of cells overexpressing wild-type DLC1 isoform 1 and mutants that exhibited stress fibers; Figure S4. Wild-type DLC1 isoform 1 and mutants had similar effects on angiogenesis. (DOC)
"year": 2014,
"sha1": "1d40b2ed226c0808dd84f04ac9d3ad13cfca6099",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0090215&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1d40b2ed226c0808dd84f04ac9d3ad13cfca6099",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Van der Waals interactions: Corrections from radiation in fluids
I. INTRODUCTION
Atoms and molecules are polarizable. Their polarizability can be related to fluctuating dipole moments, and with the electrostatic dipole-dipole interaction the well known attractive van der Waals force between pairs of particles is induced. It was noticed that there were deviations from this force, and by use of quantum electrodynamics the Casimir-Polder force was obtained, in which retardation effects were taken into account. 1-4 Methods of quantum mechanics are not easily extended to general fluid density. But Chandler et al. and Høye and Stell realized that the equilibrium properties of a fluid of quantized polarizable particles could be evaluated by using methods of classical statistical mechanics. 5,6 The basis for this was the path integral representation of quantum mechanics. Feynman found that the partition function of a quantum mechanical particle can be represented as a path integral in imaginary time βℏ, where β = 1/(k_BT), k_B is Boltzmann's constant and T is temperature. 7 The path integral can be interpreted as a "classical" polymer in 4 dimensions, where imaginary time is the fourth dimension with periodic boundary conditions at times 0 and βℏ. It can also be regarded as a random walk whose properties have been studied and analyzed, and this has been used to solve problems in statistical mechanics. 8 Brevik and Høye reconsidered the evaluation of the Casimir-Polder force by applying the statistical mechanical method of thermal equilibrium to the path integral. 9 Then it was realized that the method was also applicable to time-dependent interactions, and the Casimir-Polder force was recovered. The latter derivation coincides with the interpretation that the Casimir force can be related to fluctuating dipole moments. In this way the electromagnetic field is fully replaced by pair interactions between dipole moments.
10,11 The reason why this replacement is possible is that the field is quantized as a set of harmonic oscillators. A related conclusion was earlier noted by McLachlan. 2 The Casimir force becomes the van der Waals force when retardation effects are neglected, i.e. when the static dipole-dipole interaction is used. Thus the former includes corrections to the latter from radiation. A purpose of the present work is to investigate these corrections for fluids where the particles form a network of interactions. To do so we extend the statistical mechanical theory for the Casimir force of a pair of particles. For low density one can simply add up pairs of particles. However, for higher densities the "classical" methods applied to the quantized polarizable fluid can be used. 5,6 We have realized that these methods can also be extended to the situation with a radiating dipole-dipole interaction, and we do so in this work. This extension makes the evaluations more demanding, and results of several previous works must be utilized and combined. But explicit results can still be obtained in terms of the solution of the Ornstein-Zernike integral equation, where now a (transformed) direct correlation function is of Yukawa form outside hard cores.
A motivation for this work is to investigate the influence of radiation upon the free energy of fluids, as just indicated. Due to the time dependence, or retardation properties, of the interaction it is not obvious how to perform evaluations at thermal equilibrium, where the particle structure at the molecular level should also be taken into account. We are not aware of other approaches that have obtained quantitative results for this problem. However, the statistical mechanical method that we will utilize and extend to the situation with radiating pair interactions can deal with the particle nature on the molecular level to give quantitative results. To perform explicit evaluations the fluid is modified in a way typical for developments in fluid theory. Thus in the present case the molecules are approximated by hard spheres with fluctuating dipole moments located at their centers.
Another motivation for this work is to get some estimate of the influence of radiation on the energy of electrons in molecules. This is a noticeable effect for large molecules, and it led to the evaluation of the Casimir-Polder force. 1 In this respect one of us has in recent works included van der Waals and Casimir energies as leading perturbations [12][13][14] to ab initio Hartree-Fock or density functional theory for molecular energies. 15 These energies can be expressed in terms of the occupied and excited eigenstates of the molecules. In Ref. 13 it was found that the system of electrons in molecules may be regarded as a dielectric fluid, and radiation corrections can be taken into account. But incorporation of the van der Waals or Casimir energies in molecular evaluations will be demanding, so the influence of radiation is not easily obtained. However, the dielectric fluid studied in this work may be regarded as a strongly simplified model of a large molecule in which the influence of radiation between the electrons is studied. The energy shifts from this influence are expected to be small, since they are closely related to the Lamb shift, as both can be related to the consequences of the vacuum fluctuations of the electromagnetic field. 16 Our results will show that this is the case.
Perturbing contributions to molecular energies from non-local correlations are of central interest, and various recipes have been used. One of them is the RPA (random phase approximation), which is in accordance with the van der Waals energy where radiation is absent. 17,18 This has been studied by Lein et al. for the uniform electron gas, where simulation results with which to compare are available. 19 They point out that the RPA gives too low an energy (a situation similar to classical Debye-Hückel theory). This is corrected by including a term in addition to the Coulomb interaction at short range. In Ref. 14, in view of the statistical mechanical approach, it was suggested that this term may be determined from a hard core condition upon the resulting correlation function, as two electrons cannot be at the same position due to the repulsive Coulomb interaction (and for equal spins). In any case this will need further investigation.
In Sec. II the expression for the Casimir free energy of a pair of polarizable particles is considered. By integration over separations outside the hard core diameter of the particles, the induced free energy per particle for a fluid at low density is obtained. This is compared with the induced van der Waals free energy where radiation is disregarded.
In Sec. III a polarizable fluid at arbitrary density, where the particles interact via the electrostatic dipole-dipole interaction, is considered, and the van der Waals free energy is evaluated. For this situation we study how the induced energy deviates from the value that would be obtained from a direct sum of low density pair energies.
In Sec. IV the polarizable fluid at arbitrary density, where the full radiating dipole-dipole interaction is present, is considered. The induced energy for this situation is evaluated, and we study how it deviates from the induced van der Waals free energy.
II. A PAIR OF PARTICLES
The Casimir free energy between a pair of polarizable particles is given by Eq. (5.15) of Ref. 9, here denoted Eq. (2.1). In this expression n is an integer, r is the separation between the particles, c is the velocity of light, and K is the Matsubara frequency, where ω is the frequency and i is the imaginary unit. It is to be noted that imaginary values of ω are used in expression (2.3), and in other expressions below, where real values of K, according to Eq. (2.2), are used. (The K was defined with opposite sign in Ref. 9. As noted in Refs. 13 and 14, that was a mistake that did not influence results so far. This sign depends upon how the Fourier transform is defined, with −iωt or iωt in the exponent; i.e., −iωt = iKλ with imaginary time λ = it/ℏ means K = iℏω.) Finally, α_K is the frequency dependent polarizability of the particles. When each particle is modeled as a simple harmonic oscillator, which will be done throughout this work, α_K is given by Eq. (2.4) (with −(ℏω)² = K²), where α is the zero frequency polarizability and ω₀ is the eigenfrequency (which is real). The well known Casimir-Polder result (for T = 0) is recovered when α_K = α for all K, i.e. ω₀ → ∞. 1 This and result (2.1) were earlier obtained by a Green function method. 20 It can be remarked that the α_K of Eq. (2.4) is a simplified version of real molecules, as it contains only one resonance frequency. However, it can be replaced with any realistic polarizability, by which the results below become less explicit. Further, it can be noted that the polarizability α is here used in Gaussian units, since the dipole-dipole interaction defined through Eqs. (3.2)-(3.5) below (with Eq. (4.1) instead of (3.5) in the radiating case) is in Gaussian units, as commonly used in models of ionic and dielectric fluids. In SI units the corresponding polarizability is α_SI = 4πε₀α, where ε₀ is the permittivity of vacuum.
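The displayed formula for Eq. (2.4) did not survive extraction; for a single-resonance harmonic oscillator evaluated on the imaginary frequency axis it should read α_K = α/(1 + (K/ℏω₀)²), which reproduces both limits quoted in the text (α at K = 0, and α for all K as ω₀ → ∞). A minimal sketch under that assumption, with K and ℏω₀ in matching energy units:

```python
def alpha_K(K, alpha0, hbar_omega0):
    """One-resonance oscillator polarizability on the imaginary
    frequency axis: alpha_K = alpha0 / (1 + (K / hbar_omega0)**2).
    (Reconstructed form of Eq. (2.4), not copied from the paper.)"""
    return alpha0 / (1.0 + (K / hbar_omega0) ** 2)

# K = 0 recovers the static polarizability alpha0;
# a very stiff oscillator (hbar_omega0 -> infinity) gives a
# K-independent alpha_K, which is the Casimir-Polder regime.
```

The polarizability falls to half its static value at K = ℏω₀, which is what makes γ = ω₀R/c the natural measure of retardation below.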
Compared with the energy quantum ℏω₀ the thermal energy k_BT will be regarded as small. Thus we will consider T = 0 to simplify. With this the summation in Eq. (2.1) is replaced by integration. We will consider fluids. Thus one should take the average over separations, and in the low density limit expression (2.1) is to be used for particle separations larger than the hard core diameter R. With number density of particles ρ the induced free energy per particle becomes Eq. (2.6) (with a factor 1/2 to avoid double counting of interactions), where a replacement of the integration variable has been made to simplify. Integral (2.6) can be verified by differentiation with respect to R → r (and z ∝ R). The inverse of γ multiplied with 2π is the wavelength of radiation, relative to the hard core diameter, at resonance frequency ω₀. Thus for small molecules the γ and thus radiation effects will be small. A limiting case of Eq. (2.6) is the electrostatic limit γ → 0 (z → 0), which gives (2.9). Another limit is the Casimir case γ → ∞ (ω₀ → ∞), where the denominator in Eq. (2.6) can be put equal to one to obtain (2.10). Eq. (2.9) is the van der Waals interaction F = −3α²ℏω₀/(4r⁶) integrated, while Eq. (2.10) is the corresponding Casimir interaction F = −23ℏcα²/(4πr⁷) integrated. Eq. (2.6) may also be expanded for small γ to obtain (2.11). The γ² term of Eq. (2.11) represents the leading radiation correction to the free energy of van der Waals interactions. A notable feature of this correction is that it depends directly upon R, cf. Eq. (2.12). Thus the greater part of it must come from separations r close to the minimum r = R. Immediately this may be somewhat counterintuitive, since with the interaction F given by Eq. (2.1) the corresponding relative correction is largest for large r where retardation effects dominate; but clearly, this is outweighed by the rapid vanishing of the interaction for large r.
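As a check on the two limits discussed above, the induced free energy per particle can be computed by direct numerical integration of the quoted pair interactions over separations r > R, using the averaging prescription with the factor 1/2. This sketch is ours, not the paper's; it works in dimensionless units ℏ = c = ω₀ = α = ρ = R = 1, which is purely an assumption for illustration, and the closed values −π/2 and −23/8 follow from integrating the quoted F expressions in those units.

```python
import math

def integrate(h, a, b, n=1000):
    """Composite Simpson rule (n must be even)."""
    dx = (b - a) / n
    s = h(a) + h(b) + sum((4 if k % 2 else 2) * h(a + k * dx) for k in range(1, n))
    return s * dx / 3.0

def f_per_particle(F, R=1.0, rho=1.0):
    """f = (rho/2) * Int_R^inf F(r) 4 pi r^2 dr, computed with the
    substitution r = 1/u so the infinite upper limit becomes u -> 0."""
    h = lambda u: F(1.0 / u) * 4.0 * math.pi * (1.0 / u) ** 2 / u**2
    return 0.5 * rho * integrate(h, 1e-12, 1.0 / R)

# van der Waals pair interaction, F = -3 alpha^2 hbar omega_0 / (4 r^6):
f_vdw = f_per_particle(lambda r: -3.0 / (4.0 * r**6))
# Casimir pair interaction, F = -23 hbar c alpha^2 / (4 pi r^7):
f_cas = f_per_particle(lambda r: -23.0 / (4.0 * math.pi * r**7))

print(f_vdw, -math.pi / 2.0)  # electrostatic limit: -pi/2
print(f_cas, -23.0 / 8.0)     # Casimir limit: -23/8
```

The steeper r⁻⁷ decay of the Casimir form is why, as noted above, most of the radiation correction accumulates near the hard-core contact r = R.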
The f as given by Eq. (2.6) is evaluated numerically, and the ratio f/f₀ is shown in Fig. 1. As a numerical example we will make estimates for Ar (argon), where the atoms interact via the Lennard-Jones potential φ(r) = 4ε_LJ[(σ/r)¹² − (σ/r)⁶]. For Ar the critical temperature is given by (2.13). The attractive part of φ(r) is to be identified with the van der Waals interaction F given below Eq. (2.10), so with R ≈ σ = 3.4 Å the parameters can be estimated. The dielectric constant may here be estimated with the Clausius-Mosotti relation (ε − 1)/(ε + 2) = (4π/3)ρα, with mass density 1.4 g/cm³, atomic weight 39.95, and ε = 1.6 in the liquid state (2.17). For the van der Waals energy per particle (2.9) we get a value which the repulsive part of the Lennard-Jones interaction, when integrated for r > σ ≈ R, reduces by one third. The change in free energy due to radiation is thus f_rad = 0.12 meV (for λ ≈ 150). For larger molecules the influence of radiation will increase due to the larger molecular diameter R and thus increasing γ.
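The dimensionless parameters quoted for Ar in this section (and used again in Sec. III) can be reproduced from the numbers in the text. The sketch below is ours; it applies the Clausius-Mosotti relation in Gaussian units, and the rounding of physical constants is an assumption.

```python
import math

# Numbers quoted in the text for liquid Ar:
N_A = 6.022e23            # Avogadro's number, 1/mol
n = 1.4 / 39.95 * N_A     # number density from 1.4 g/cm^3 and M = 39.95 g/mol
eps = 1.6                 # dielectric constant of the liquid
R = 3.4e-8                # hard-core diameter ~ sigma = 3.4 A, in cm

# Clausius-Mosotti (Gaussian units): (eps - 1)/(eps + 2) = (4 pi / 3) n alpha
alpha = 3.0 * (eps - 1.0) / ((eps + 2.0) * 4.0 * math.pi * n)  # cm^3

print(alpha)         # polarizability, ~1.9e-24 cm^3
print(alpha / R**3)  # dimensionless polarizability, ~0.048
print(n * R**3)      # dimensionless density, ~0.83
```

The last two values match the α = 0.0485 and ρ = 0.82 used for Ar in Sec. III.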
III. ELECTROSTATIC INTERACTION
For higher densities and polarizabilities the resulting free energy will deviate from the sum of contributions from pairs of particles. However, the solution of the quantized polarizable fluid is then applicable.⁵,⁶ By use of the path integral its solution turned out to be the one of the corresponding classical fluid for each Matsubara frequency K. Then the mean spherical approximation (MSA)²¹ was used, where the spatial positions of the particles were not quantized.
To sketch the solution we may merely consider the classical case, which is for K = 0. This is based upon the solution of the Ornstein-Zernike equation with MSA boundary conditions,²²,²³

h(12) = c(12) + ∫ ρ(s₃) c(13) h(32) ds₃ dr₃,   (3.1)

where h(12) and c(12) are the pair correlation function and the direct correlation function respectively.
Here the numbers i = 1, 2, 3 denote the position r_i and the dipole moment s_i of particle i. The ρ(s₃) is the density distribution of fluctuations of the dipole moment of particle i = 3 in the harmonic oscillator potential. Further, the dipole moment of each particle is modelled as a point dipole located at its center. With the MSA one can write the decomposition (3.2), where the hats denote unit vectors. The MSA boundary conditions for hard spheres of unit diameter are given by (3.3) and (3.4). The condition on c(12), involving the pair interaction, is an approximation, while the condition on h(12) is the exact hard core condition for spheres of unit diameter. With static dipole-dipole interaction this means Eq. (3.5). With the MSA it is found that the resulting density distribution is such that Eq. (3.6) holds (ρ = ∫ρ(s) ds).²²
In Appendix A we give more details about the solution of the above MSA problem.
In the quantum mechanical case it turned out that the solution is a straightforward extension of the classical MSA problem above.⁵,⁶ The main change is to extend the polarizability to non-zero Matsubara frequencies, i.e. α → α_K with α_K given by Eq. (2.4) in our case. But a general α_K, representing a sum of harmonic oscillators, can be used. With this, Eq. (3.6) is generalized to Eq. (3.7) (with the rescaling (2.7) for K), and Eqs. (A18) and (A19) for the solution in Appendix A become Eqs. (3.8)-(3.10) (with c(0) → c_K(0) etc.), where the parameter κ is defined by Eq. (A12) and q(x) is given by Eq. (A15). These quantities are the only ones needed in the electrostatic case at T = 0. From this one sees that Eq. (3.9), with expressions (3.7) and then (3.10) inserted, gives the required equation for the parameter ξ = ξ(ρ, α, K).
For the quantized polarizable fluid the total internal energy per particle u_t is given by Eq. (76) in Ref. 6 as Eq. (3.11), when the replacement (2.7) for K is used. (In the reference, 1/α = σ²ω₀².) At T = 0 one again can integrate, so with Eqs. (2.5) and (2.7) this can be carried out. With c_K(0) = 0 one gets the result (3.13) for non-interacting oscillators (3 dimensions). The difference gives the induced energy (3.14). At temperature T = 0 this is also the induced free energy f = f(ρ), since the entropy vanishes at T = 0 for quantized systems.
For general density the f must be evaluated numerically, but the low density limit may be checked against result (2.9). From Eq. (3.10), c_K(0) = −16ξ, and from Eqs. (A12) and (3.5) in the limit ρ → 0 one finds κ → 1/3 as h_D → c_D = 1/r³; from (3.9), (A15), and (3.7) one has 24ξ + ⋯ = (4π/3)R_K, by which 16ξ → (8πρα/3)/(K² + 1). Thus in this limit one recovers result (2.9) for spheres of unit diameter R = 1. In Figs. 2 and 3 the induced free energy (van der Waals energy) (3.14), divided by its low density expression (2.9), is shown as a function of density ρ and polarizability α respectively. It is seen that this ratio decreases somewhat with increasing density. This decrease is due to the interaction via many particles, by which the direct pair interaction is modified into an effective one.
The attractive part of the potential between neutral particles (e.g. the Lennard-Jones potential) is the above van der Waals interaction. Commonly it is assumed to be constant, independent of density. Our results show that this is a reasonable approximation. Thus for the situation considered for Ar at the end of Sec. II, with α/R³ → α = 0.0485 and ρR³ → ρ = 0.82, one finds from Figs. 2 or 3 the change Δf in the van der Waals interaction f compared to its low density value f₀. It can be noted that our numerical results below are limited to α = 1/8, since for larger α the MSA solution will fail to be unique. This value may reflect an instability of "close-packed" clusters in the fluid. The Clausius-Mosotti relation for the dielectric constant ε on a regular cubic lattice would give (ε − 1)/(ε + 2) = (4π/3)ρα = 1 for ρ = 6/π when α = 1/8, i.e. ε → ∞. Thus our results may cover realistic values of α for fluids. (In SI units the limiting value α/R³ → α = 1/8 corresponds to the polarizability α_SI = 4πε₀α = (π/2)ε₀R³, where ε₀ is the permittivity of vacuum.) Also our results cover densities of interest for fluids below close packing of hard spheres. The dielectric constant of the MSA fluid itself is given by ε = q(2ξ)/q(−ξ) (K = 0).²²,²⁴,²⁵ The problem with larger α is connected to the properties of Eqs. (3.6) and (3.10), where one notes that R₀ → ∞ for ξ < 1/2 when α > 1/8. On the other hand, the R₀ of Eq. (A19) in Appendix A is finite for ξ < 1/2 with q(2ξ) defined by Eq. (A15).
IV. RADIATING INTERACTIONS
The quantum mechanical problem can be extended to time-dependent interactions as mentioned before. Within the MSA the problem again turns out to be solvable in terms of a simple fluid problem, i.e. hard spheres with added interaction of Yukawa form, for which an analytic solution has also been worked out. The c(12) and h(12) can still be written in the form (3.2), but the c(r) and c_D(r) will change.²⁴,²⁵ The radiating dipole-dipole interaction is a solution of Maxwell's equations, and with Eq. (5.10) of Ref. 9 one now has Eq. (4.1) instead of Eq. (3.5). One can note that ĉ₁ is unchanged. It corresponds to the longitudinal part of the dipolar interaction, i.e. the J₁ term of Eq. (A5), while ĉ₂ may be related to radiation of transverse waves, i.e. the J₂ term. Again transformation (A14) in Appendix A is performed, and for C₁ one has the same PY problem as before, while for C₂ one will get an MSA problem with one Yukawa term with boundary conditions (for density −κR_K) given by (4.3). For q₁ = q(2ξ) the solution will be given by Eq. (A15) as before, but the q₂ is replaced by a below. As found by Waisman,²⁷,²⁸ the C₂(r) will have the form (4.4) for r < 1, with x = (π/6)n₂ where n₂ is the number density. For the coefficients a and v one has the relations (4.5) and (4.6), where g(r) = H₂(r) + 1. Further, from Eq. (4.5) one finds (4.7). The integral for v is proportional to the internal energy of the Yukawa fluid, and for low density and in the mean field limit z → 0 one finds, with expression (4.4) inserted, Eq. (4.8). With expression (4.1) for c_D(r) one now will find from transformation (A10) that

ĉ_D(0) = 0   (4.9)

instead of Eq. (A19), with the consequence that Eq. (3.9) now turns into Eq. (4.10). This seems to be a discontinuous change of the equation, but it is not so. To see that, one can take the z → 0 limit, for which Eq. (4.8) gives v → −(4π/3)R_K. In the electrostatic case with z = 0 the ĉ_D(r) of Eq. (4.2) vanishes, by which the v of Eq. (4.6) will vanish too. However, the −C₂(0) = q(−ξ) will have the limiting value of Eq.
(4.7). Thus a + v → q(−ξ) as z → 0. With this, Eq. (4.10) becomes Eq. (3.9) in the limit z → 0, and the discontinuity is avoided. Further, with use of Eq. (4.10), Eq. (3.8) will be modified to (4.11), while Eq. (3.7) will remain unchanged. Eq. (4.10) will be the equation to be solved for ξ. This is done together with Eq. (3.7), where Eq. (4.11) is inserted. Then the solution for a and v for the one-Yukawa fluid problem is needed. Simplified expressions for this solution were worked out by Høye and Stell.²⁸ These expressions are used, and those needed here are given in Appendix B. Together they give a relation that is solved numerically with respect to the unknown parameter a (= a(−ξ)). The "reduced density" x = −ξ is then assumed known. As shown in Appendix B, the other quantities of interest can all be expressed explicitly in terms of a and ξ. With known a, Eq. (4.10) may again be solved with respect to ξ in an iteration procedure.
When radiation is included, expression (3.11) for the internal energy will be modified. The reason is the temperature dependence of the transformed interaction (4.3) outside the hard core. Thus we must turn to the free energy expression, which is still valid. For the total free energy per particle f_t we can write a sum of contributions, where from Eqs. (66), (67), and (64) of Ref. 6 one has (with σ²ω₀² = 1/α) expression (4.13). Here I_{RK} is the contribution from the reference system (with a modified R_K due to pair interactions), while I_K is the perturbation. To obtain expression (4.13) correctly, the path integral was discretized such that Eq. (4.15) holds, and the limit N → ∞ (η → 0) was considered. The I₀ is merely hard spheres alone and will not contribute to the configurational internal energy, as classical kinetic energy may be disregarded in this connection. Again for T = 0 it is convenient to consider the expression (4.16) for the internal energy u_t. With expression (4.15) this can be evaluated. Further, with Eq. (3.7) one can put R_K c_K(0) = 3ρ − R_K(K² + 1)/α for the last term of expression (4.13). With this substitution one will find that the partial differentiation with respect to R_K will vanish and thus will not contribute to u_t. This reflects the method used to differentiate the free energy in Ref. 6, where the density distribution ρ({s_n}) of polymer configurations is considered constant (as it should be) by differentiation with respect to temperature. (Here R_K/η is the corresponding quantity to be kept constant in this respect, according to its definition given by Eq. (47) of Ref. 6.) With this we find (4.19). When Eq. (4.18) alone is inserted in Eq. (4.16), the static result (3.11) is recovered in full. The reason for this is that here R_K is kept fixed by differentiation instead of R_K/η, by which a contribution has moved from I_{Kβ} to I_{RKβ}. (If differentiation with respect to R_K had been included, one would get the integral ∫[ĥ₁(k) + 2ĥ₂(k)] dk = 0 with use of Eq.
(A9) and the core condition (A13) (ĉ_i(r) → ĉ_{Ki}(r) etc.).) Now one can write (i = 1, 2) Eq. (4.20). Inserted in the integrals of Eq. (4.19), the R_K term of Eq. (4.20) will give a common term multiplied with a factor (cf. Eq. (A18)) which will cancel its last term. Integration in k-space can be replaced with integration in r-space, and we are left with Eq. (4.22). With boundary condition (4.3) on ĥ_i(r), the sum of integrals would vanish if ĉ₂(r), like ĉ₁(r), were non-zero only for r < 1, since Eq. (A8) and condition (4.9) imply ĉ₁(0) − ĉ₂(0) = 0. Thus the net result is that only the part of ĉ_{K2}(r) for r > 1, where ĥ_{K2}(r) deviates from its core condition, contributes to Eq. (4.22). So we get Eq. (4.23). For small z the first term is the leading one. Also, in the low density limit ĥ₂(k) → ĉ₂(k), by which simple explicit expressions can be obtained. However, the Fourier transform ĥ₂(k) may be utilized to evaluate the last integral more generally.²⁹ A simpler method would be to evaluate the Laplace transform of r[−κ + ĥ₂(r)] (i.e. of r[H₂(r) + 1]) and its derivative,²⁹ since this is connected to Eqs. (2.8) and (4.3). However, for simplicity we here will use ĥ₂(r) = ĉ₂(r) for all densities, as z will be considered small, and we find Eq. (4.25), where κ follows from its relation (3.8) to ξ and R_K. Again for non-interacting oscillators one has c_K(0) = 0 (and I_{Kβ} = 0), with contribution u₀ given by Eq. (3.13). When subtracting this from contribution (4.18), expression (3.14) for the static case is recovered. By adding expression (4.25) to this, the induced energy u and thus the induced free energy f is obtained, Eq. (4.26). It is of interest to show that the low density limit of this coincides with the Casimir energy. This is verified in Appendix C. In Figs. 6 and 7 the small difference Δf between the Casimir energy (4.26) and the van der Waals energy (3.14), divided by f₀ given by Eq.
(2.9), is shown for different values of λ. A notable feature of this difference relative to f₀ is that it is almost independent of density and polarizability. This was not expected. However, upon further consideration it seems to us that the physical interpretation of this unexpected independence can be understood from Eq. (2.11), where Eq. (2.12) is the perturbation from radiation. As commented below the latter equation, the largest correction from radiation comes from separations where particles are close to each other. In this way the influence from correlations with surrounding particles further away becomes small, and the correction becomes mainly the contribution from the direct sum of all separate pairs of particles, as it is for low density. The counterintuitive aspect of this is that the relative influence from radiation upon the induced interaction dominates for large separations; but since the interaction decays rapidly with increasing separation anyway, this has little influence upon the resulting energy when integrated.
For the situation with liquid Ar considered at the end of Sec. II, the correction from radiation can be read off from Fig. 6 or 7. As the relative change in free energy is almost independent of ρ and α, it remains essentially the same as already found at the end of that section for λ ≈ 150, where f_rad = 0.12 meV was found.
FIG. 6. The difference Δf between the induced Casimir free energy given by Eq. (4.26) and the van der Waals energy given by Eq. (3.14), divided by f₀ given by Eq. (2.9), as a function of dimensionless density. The curves for different values of λ = 2πc/(ω₀R) are for dimensionless polarizability α = 0.075. A notable feature of these curves is that there is almost no dependence upon α.
FIG. 7. The same as Fig. 6, but now as a function of polarizability α for density ρ = 0.5.
V. SUMMARY
We have studied corrections from radiation for a simplified fluid model consisting of hard spheres with fluctuating dipole moments located at their centers. The fluctuating dipole moments are quantized as harmonic oscillators. First the well established Casimir interaction between a pair of particles is studied. Its electrostatic limit is the induced van der Waals interaction. For low density of particles the contribution to the energy of the fluid is obtained by averaging the Casimir energy over particle positions. The influence of radiation or retardation effects depends upon the ratio of the hard sphere diameter and the characteristic wavelength of the electromagnetic radiation at the harmonic oscillator eigenfrequency. Then general fluid density is considered for the electrostatic case, and changes in the van der Waals interaction are found. The fluid model used is the quantized polarizable fluid evaluated by a method based upon classical statistical mechanics. Finally, radiation effects are taken into account by further extension of the polarizable fluid model. Results are presented in Figs. 1-7. As a general feature we find that the average van der Waals interaction per pair of particles decreases slightly with increasing particle density. Radiation effects are small, but will increase in magnitude with increasing particle size relative to the characteristic wavelength of radiation. As a specific example, numerical values are found for Ar in the liquid state.
APPENDIX B
Further, its Eqs. (2.24) and (2.26) give expressions for U.
With this the connection between a and K_y is given by its Eq. (2.38). Eq. (B3), with Eqs. (B1), (B2), and (B4)-(B6) inserted for fixed x = −ξ, gives the relation that can be solved numerically with respect to a = a(−ξ). This is then used in Eq. (4.10) to obtain ξ by iterations. (Also Eq. (4.10) may be solved explicitly as ξ = ξ(a), by which a will be the only parameter to solve for.) For small K_y → 0 (A → p, U₀ → 0) one may utilize a more transparent linearized version, by which numerical solution of Eq. (B3) is avoided. The solution for this situation may be used as input for the non-linear case. Then Eq. (2.42) of Ref. 28, or Eqs. (B1) and (B2) above, give a fixed ratio to be used in Eq. (B7) via Eq. (B5) to determine v. Further, its Eq. (2.39) or Eq. (B1) determines a, since then Eq. (B9) applies, where with known K_y and v the U₀ follows from Eq. (2.34) of the reference as Eq. (B10). With this the a(−ξ) is obtained by use of Eqs. (B8), (B5), (B7), and (B10) in Eq. (B9). This is further inserted in Eq. (4.10) to be solved numerically with respect to ξ.
APPENDIX C: LOW DENSITY LIMIT
In the low density limit one finds from Eq. (3.7) that R_K → 3ρα/(K² + 1), and a partial integration can be performed on the first part of integral (4.26) for the induced free energy to recover the Casimir energy of Sec. II.
FIG. 1. Casimir free energy per particle f given by Eq. (2.6), divided by its electrostatic limit f₀ given by Eq. (2.9), as a function of dimensionless wavelength λ = 2πc/(ω₀R) on a logarithmic scale. The ω₀ is the oscillator eigenfrequency while R is the hard core diameter.
ĉ_D(r) = (1/3) z² e^{−zr}/r for r > 1.   (4.2)
z is given by Eq. (2.8), and r/R → r such that again hard spheres of unit diameter are considered. (By a misprint the ψ_K = −c has the wrong sign in Ref. 9. As commented below Eq. (2.3), its K should have the opposite sign too.) Like the static dipole-dipole interaction defined through Eqs. (3.2)-(3.5), the radiating one defined through Eqs. (3.2)-(3.4) is an interaction between pairs of particles. Radiation means exchange of energy between particles. But as for the situation with one pair of particles in Sec. II, such exchange does not enter evaluations at thermal equilibrium. Now we apply transformation (A10) in Appendix A to the new c_D to obtain the ĉ_D(r) of Eq. (4.2). It can be noted that the Yukawa form of ĉ_D(r) is an exact transformation of the radiating dipole-dipole interaction. Thus it does not serve as an approximation in the present case, contrary to the typical situation in other applications in the statistical mechanics of fluids. The definition (A12) of κ will remain. But with ĉ₁ and ĉ₂ given by Eq. (A8), the boundary conditions (A13) are modified into ĉ₁(r) = 0 and ĉ₂(r) = −(1/3) z² e^{−zr}/r for r > 1, (4.3) and ĥ₁(r) = −2κ and ĥ₂(r) = κ for r < 1. | 2018-12-01T02:39:04.766Z | 2013-02-13T00:00:00.000 | {
"year": 2013,
"sha1": "931939070538f83a3e0929b76768e526184b2f5c",
"oa_license": "CCBY",
"oa_url": "https://pubs.aip.org/aip/adv/article-pdf/doi/10.1063/1.4792939/13060942/022118_1_online.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "931939070538f83a3e0929b76768e526184b2f5c",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
3952908 | pes2o/s2orc | v3-fos-license | Accuracy of predictive magnification factor for preoperative templating in total hip arthroplasty
Introduction
Total hip arthroplasty (THA) is one of the most common orthopedic procedures. It has proven to be very successful in reducing pain and restoring joint function. [1][2][3][4] Preoperative templating is an essential part of the planning process before THA. It prepares an orthopedic surgeon for the procedure by reducing the surgical time needed to measure the size of the implant. 3,5 Furthermore, it minimizes the cost of the procedure related to inventory control and also helps to avoid postoperative complications. 3,[6][7][8]

Preoperative templating can be classified into three types: 1) traditional templating using an acetate template on a hard copy of the anteroposterior (AP) radiograph of the pelvis; 2) hybrid templating using conventional acetate templates on digital x-ray images; 3) fully digital templating (DT). 7,[9][10][11][12] Usually, an x-ray image is magnified up to 20% because of the gap between bone and the film. This level of magnification may vary. 13 Accordingly, acetate templates come already adjusted for 20% magnification. Such templating can produce errors during the preoperative planning stage, because the magnification factor (MF) is predicted theoretically and may vary depending on the gap between the x-ray film and bone. 9

After the introduction of the Picture Archiving and Communication System (PACS) in hospitals, digital templating is becoming more and more popular among orthopedic surgeons. 10 DT software allows surgeons to change the magnification of the x-ray image and adapt it to the template. That helps to minimize the errors of templating produced by the magnification effect. In order to get the best results during preoperative planning, accurate calculation of the MF is needed. 4,14,15 The purpose of the present study was to evaluate the accuracy of the predicted MF for preoperative digital templating for THA in our institution and to give recommendations to increase the accuracy of MF accordingly.
Materials and methods Patients
A retrospective review of postoperative radiographs of individuals who received primary THA of a single hip was conducted. The data from a single institution were collected over a one-year period (1 September 2015 to 1 September 2016) from the local register of hip arthroplasty. 632 patients who received primary THA for degenerative osteoarthritis, posttraumatic osteoarthritis, rheumatoid arthritis, aseptic necrosis of the femoral head or hip joint dysplasia were included. After analysis of the data, 101 patients were excluded (Table 1) and the final sample size included 531 cases with a mean age of 66±12 years: 221 (41.6%) males, mean age 62±12 years, and 310 (58.4%) females, mean age 68±11 years. 388 (73.1%) of the implants were cemented, 131 (24.7%) mechanical and 12 (2.2%) hybrid (11 cemented stem and 1 cemented cup).
Methods
Three independent orthopedic surgeons were invited to participate as observers in this study. Each of them measured the biggest diameter of the acetabular component on postoperative radiographs after primary THA twice, at a one-month interval (Figure 1). Observers were provided as much time as needed for accurate evaluation of the radiographs. The measurements were made at an MF of 0%. A predictive MF of 15% is used by our institution during preoperative planning, with the assumption that the distance between the x-ray emitter and film is 110-115 cm and the patient is as close to the x-ray film as possible. The actual size of the acetabular component was taken from the local register of hip arthroplasty. The true MF was calculated as the actual size of the acetabular cup divided by the measured diameter of the cup, multiplied by 100. The accuracy of the predictive MF was calculated as the predictive MF divided by the true MF, multiplied by 100. 100% accuracy would indicate that the predictive MF and the true MF are equal, while a lower value of accuracy indicates a greater discrepancy between the true and predictive MF.
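As an illustration only, the two ratios defined above can be written as short functions. The direction of the true-MF ratio and the symmetric form of the accuracy measure are our reading of the Methods (the phrasing is ambiguous on both points), and all numeric values below are hypothetical, not taken from the study.

```python
def true_mf_percent(measured_mm, actual_mm):
    """True magnification: how much larger the cup appears on the
    radiograph than its known implanted size.  (One plausible reading
    of the Methods; it yields values on the ~15% scale reported.)"""
    return (measured_mm / actual_mm - 1.0) * 100.0

def accuracy_percent(predictive_mf, true_mf):
    """Accuracy of the predictive MF: 100% when the two factors
    coincide, lower as they diverge.  The symmetric min/max ratio is
    our assumption, matching the statement that lower values indicate
    greater discrepancy."""
    lo, hi = sorted([predictive_mf, true_mf])
    return lo / hi * 100.0

# Hypothetical example: a 50 mm cup measured as 57.5 mm on the radiograph.
mf = true_mf_percent(57.5, 50.0)   # 15.0 (%)
acc = accuracy_percent(15.0, mf)   # 100.0 (%): predictive MF matches exactly
```

With this definition, a case whose true MF is 20% against the predictive 15% scores 75% accuracy, which is how a wide spread of true MF values can pull the mean accuracy well below 100% even when the mean true MF is close to 15%.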
Statistical analysis
The level of inter- and intra-observer reliability was determined by the intraclass correlation coefficient (ICC), using its two-way mixed model. Reliability for absolute agreement was tested as well. The following intervals were used to interpret the ICC values: less than 0.40, poor agreement; 0.41-0.60, moderate agreement; 0.61-0.80, substantial agreement; and greater than 0.81, almost perfect agreement. 16 Statistical analysis was performed using SPSS v21.0 software. Microsoft Excel 2016 was used for additional calculations.
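The interpretation bands above can be expressed as a small helper. This is a sketch: the treatment of the boundary values (0.40-0.41 and 0.80-0.81), which the quoted intervals leave open, is our assumption.

```python
def icc_agreement(icc):
    """Map an ICC value to the agreement bands quoted in the text.
    Boundary values are assigned to the lower band here (assumption,
    since the quoted intervals leave small gaps)."""
    if icc <= 0.40:
        return "poor"
    if icc <= 0.60:
        return "moderate"
    if icc <= 0.80:
        return "substantial"
    return "almost perfect"

print(icc_agreement(0.93))  # the intra-observer ICC reported in Results
```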
Results
The intra-observer (ICC 0.93 ± 0.01) and inter-observer (ICC 0.95 ± 0.02) agreement was excellent. The mean true MF was 15.51 ± 5.40% and the accuracy of the predictive MF was 77.23 ± 11.53%. When true MF was divided into groups by gender, the mean true MF was 15.84 ± 5.17% and the accuracy of the predictive MF 77.97 ± 10.78% in males, and 15.28 ± 5.54% and 76.71 ± 12.03% in females, respectively. There was no statistically significant difference in true MF or accuracy between genders (P > .05).
Discussion
The most important finding of the present study was the low accuracy of the predictive MF, which was only 77.23 ± 11.53%. Even though the mean true MF diverges from the predictive MF by only 0.51% (15.51% vs 15%), there was a high variance around the mean. In our opinion, that was exactly the reason for such a low accuracy of the predictive MF.
Riddick et al. found that the accuracy of the preoperative MF (scaling ball was used) was 96% (range 89.6 -99.9) in their institution. 17 The author assumed that the accuracy would be even higher if the calculated true MF were considered equal across all patients as is the case for our institution. However, in our study the divergence between true MF and the predictive MF ranged from 2.17% to 24.58%. We cannot agree with the above statement by Riddick et al. and recommend using the calibration markers (CM). Furthermore, the present study shows a simple method to evaluate accuracy of the predictive MF. In our opinion, it is essential to perform this self-assessment in all institutions where digital templating is used routinely, since accurate MF factor could help to reduce operating time, avoid intra and postoperative complications and finally, to improve patient safety.
The usage of external CM in order to predict MF is a technique proposed by Clarke et al. 18 It has several limitations such as correct placement and shape of CM, discomfort for patients, etc. 19,20 There were studies on positioning of spherical CM that proposed a formula to determine the vertical and horizontal position of CM. 4 The horizontal position of CM was shown to be less significant in comparison with the vertical position. 15 Furthermore, it was found that CM should be placed at the height of greater trochanter without skin and bone overlap in order to obtain the greatest accuracy. 21 Franken et al. 22 compared two different ways to place CM: CM positioned laterally, at the height of the greater trochanter and CM positioned medially between the legs. Mean errors were 2.55% and 2.04%, respectively. Another study compared the CM method with the distance measuring method. 23 The authors concluded that measuring MF without CM is almost as accurate as measuring with CM (mean error with CM 2.6%; without CM 2.8%). CM was positioned laterally, near the greater trochanter. However, mean error of the predictive MF in our study was lower (only 0.51%) in comparison with the abovementioned study, but high dispersion of the true MF around the predictive MF resulted in high inaccuracy. It seems the magnification error is unavoidable even when CM is used. Nevertheless, the magnification error is evidently lower when using CM and the use of the latter may improve the accuracy of the predictive MF. 24 Amount of soft tissue between hip joint and x-ray detector affects magnification by increasing the gap between them, 25 making it more difficult for a physician to locate a correct position to place the CM (e.g. at the height of greater trochanter) for more corpulent patients. 25 Also, it becomes more difficult to keep a standardized distance between x-ray tube and detector for patients with increased BMI, which may lead to a potential risk of inaccuracy when predicting MF. 
In contrast, a number of studies have shown no positive correlation between BMI and accuracy of MF. 25,26 This appears to be highly consistent with our findings. We found no significant difference in MF accuracy between male and female patients, even though elderly female patients are associated with a larger hip circumference. 27 The authors are aware of the limitations of this study. Firstly, this is a retrospective study, without randomization. Secondly, there were few observers. Further investigation is needed to find out whether MF accuracy could be significantly improved by introducing CM.
Conclusion
The predictive MF used in our institution (MF = 15%) has proven to have too low an accuracy for modern templating. We recommend assessing the true MF before every THA in hospitals where digital templating is routinely used, and considering the use of calibration markers in order to increase the accuracy of preoperative planning.
"year": 2018,
"sha1": "b94fdf7a8ff13d039db68b8c3e9afd022b475d1e",
"oa_license": "CCBYNC",
"oa_url": "https://medcraveonline.com/MOJOR/MOJOR-10-00395.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "e56ab520962ed6f15d2c47f433c2efb5fef107c7",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
9716418 | pes2o/s2orc | v3-fos-license | Analysis of Cool Roof Coatings for Residential Demand Side Management in Tropical Australia
Cool roof coatings have a beneficial impact on reducing the heat load of a range of building types, resulting in reduced cooling energy loads. This study seeks to understand the extent to which cool roof coatings could be used as a residential demand side management (DSM) strategy for retrofitting existing housing in a constrained network area in tropical Australia where peak electrical demand is heavily influenced by residential cooling loads. In particular this study seeks to determine whether simulation software used for building regulation purposes can provide networks with the 'impact certainty' required by their DSM principles. The building simulation method is supported by a field experiment. Both numerical and experimental data confirm reductions in total consumption (kWh) and energy demand (kW). The nature of the regulated simulation software, combined with the diverse nature of residential buildings and their patterns of occupancy, however, mean that simulated results cannot be extrapolated to quantify benefits to a broader distribution network. The study suggests that building data gained from regulatory simulations could be a useful guide for potential impacts of widespread application of cool roof coatings in this region. The practical realization of these positive impacts, however, would require changes to the current business model for the evaluation of DSM strategies. The study provides seven key recommendations that encourage distribution networks to think beyond their infrastructure boundaries, recognising that the broader energy system also includes buildings, appliances and people.
Residential Demand Side Management (DSM) in Heat Dominated Climates
Electricity distribution networks in Australia typically make capital works (infrastructure investment) decisions based on ensuring the network can meet system peak demand in line with security of supply standards demanded by local regulations or customs. For many networks, however, significant financial challenges arise when the system maximum demand only occurs for short periods of time. For example, in regional Queensland in 2011, approximately 10% of the network capacity was used for less than 1.5% of the year [1]. Season, climate and time of day are some of the acknowledged key variables that contribute to peak demand. Reducing peak demand through DSM programs is therefore seen as a more economical option than network augmentation, flattening the load curve to increase the utilisation rate (and hence investment return) of existing infrastructure. In Australia, residential DSM strategies fall into three categories: (i) network controlled tariffs (voluntary or mandatory requirement for particular appliances such as electric water heaters to be controllable by the network at specific times); (ii) tariff price reform (using price signals to drive changes to appliance usage times); and (iii) direct rebates (e.g., financial assistance to customers to replace inefficient appliances with more efficient and/or controllable appliances).
The principles which guide the development and implementation of DSM programs in distribution networks are focused on managing business risk and maximising economic return. DSM programs are required to increase the asset utilisation rate and deliver measurable and predictable reductions in demand, whilst not compromising network service standards [1]. The energy network, from the distribution company's perspective, consists of the poles and wires that deliver the electricity from the high transmission distribution network to the end use (the residential customer). Neither the houses nor the appliances connected to the network are considered to be part of the energy system.
Cool Roofs Research
The heat load of detached residential buildings in warm and hot climates is predominantly driven by the interaction of the external environment with the building envelope. Because of its large surface area in relation to building volume and its high exposure to direct and indirect solar radiation, the roof is the key building structure that allows or limits heat flow into internal spaces of one and two storey single family houses in Australia. The energy balance of a roof is determined by incoming solar radiation, the reflectance and absorptance of the roof surface, heat transfer, roof structure and internal and external temperatures [2,3]. While light coloured roofs have long been used in many hot regions as a means of providing a cooler internal space, modern 'cool roof coatings' use advanced chemistry to increase both the solar reflectance and infrared emittance of the roof. For over two decades researchers have been studying, through modelling and field experiments, the impacts of reflective roof coatings on the urban environment, on occupants and on electricity networks. The improvements to the chemistry of roof coatings over time (from the 1990s to the present) need to be considered when interpreting early field results based on white reflective paints, with current coatings representing fourth generation technologies [4]. Extensive research utilizing simulation software and a smaller number of field experiments has shown a range of positive impacts, including reductions in cooling energy (kWh/day) and peak cooling demand (kW); reductions in roof surface and attic temperature; reduction or elimination of air conditioning use in shoulder seasons; changes to air conditioner load profiles; improvements in air conditioner operation efficiency and reduced strain on electricity supply infrastructure [5][6][7][8][9]. This research encompasses residential buildings in a range of climatic and cultural contexts, including hot climates [10][11][12][13]. Beyond the benefits to individual buildings, cool roofs (and green roofs) have an important role to play in reducing the urban heat island [4].
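The roof energy balance described above can be sketched numerically. The following is a minimal steady-state model (not the simulation tool used in this study): absorbed solar radiation is balanced against convection, longwave radiation and conduction into the space. The convection coefficient, roof U-value, sky temperature and irradiance are illustrative assumptions, not values from the study.

```python
def roof_surface_temp(irradiance, absorptance, emittance,
                      t_ambient, t_sky, t_inside,
                      h_conv=15.0, u_roof=2.0):
    """Solve absorbed solar = convection + longwave radiation + conduction
    for the roof surface temperature (all temperatures in kelvin).
    h_conv (W/m2K) and u_roof (W/m2K) are assumed, illustrative values."""
    sigma = 5.67e-8  # Stefan-Boltzmann constant, W/m2K4
    ts = t_ambient   # initial guess
    for _ in range(50):  # Newton iteration on the residual
        residual = (absorptance * irradiance
                    - h_conv * (ts - t_ambient)
                    - emittance * sigma * (ts**4 - t_sky**4)
                    - u_roof * (ts - t_inside))
        ts += residual / (h_conv + u_roof + 4 * emittance * sigma * ts**3)
    return ts

# Dark roof (solar reflectance 0.15) vs cool roof (0.90) under 900 W/m2:
dark = roof_surface_temp(900, 0.85, 0.90, 305.0, 290.0, 299.5)
cool = roof_surface_temp(900, 0.10, 0.90, 305.0, 290.0, 299.5)
```

Under these assumed conditions the cool roof surface stays close to ambient temperature while the dark roof runs tens of kelvin hotter, which is the mechanism behind the reduced heat load discussed here.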
Most, if not all, studies to date have evaluated field data and simulated data from the perspective of the building owner, presenting arguments to persuade building owners of the potential comfort, economic and/or infrastructure and societal benefits of cool roofs. Indeed, the argument for the development of Standards for Cool Roofs was based on the difficulties faced by building owners in assessing cool roof impacts on lifetime heating and cooling costs [14]. Few residential field studies have been published from regions with long cooling seasons and negligible heating seasons, and, to the authors' knowledge, the role of cool roof coatings in Demand Side Management programs (i.e., from the perspective of electricity network providers) has not previously been studied.
This Research
The specific aim of this research was to determine whether simulation software used for building regulation purposes can provide networks with the 'impact certainty' required by the distribution company's Demand Side Management principles.
Methodology
A numerical and experimental methodology was utilized [15,16]. This section explains the selection of the simulation software and case study.
Choice of Simulation Software
The Australian Nationwide House Energy Rating Scheme (NatHERS) establishes the protocols and validates and standardises software that can be utilised by the design-construction industry for the purposes of determining design compliance with the energy efficiency regulations of the National Construction Code. All software accredited under this national scheme calculates the heat flows into and out of the building envelope on an hourly basis to determine the space heating and cooling loads for each zone of the house. All software uses Reference Meteorological Year (RMY) climate files based on at least 25 years of meteorological data (air temperature, humidity, solar radiation, wind speed and direction) for each climate zone. All software requires the input of detailed spatial and architectural data as well as construction material and components data; however, the occupancy patterns, latent heat loads and heating and cooling schedules are pre-set (to enable comparison between designs) [17,18]. The particular software package selected for this study is BersPro 4.2. This software was selected rather than internationally utilized simulation software (e.g., IES VE or EnergyPlus), because the research objective was to determine whether this commonly utilised tool could also model the potential benefits of DSM programs that incorporate changes to the building envelope, reducing the need for duplication of effort. The simulations were conducted in accordance with requirements of the Australian Construction Code, with regard to the protocols that establish thermostat settings and heating and cooling schedules (Table 1).
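As an illustration of the hourly heat-balance approach (heavily simplified relative to an accredited NatHERS tool), the sketch below accumulates a single-zone sensible cooling load whenever the scheduled space is above the set point. The envelope conductance `ua` and the 4 pm to midnight schedule are illustrative assumptions, not NatHERS inputs.

```python
def cooling_load(hourly_temps, ua=250.0, setpoint=26.5, schedule=range(16, 24)):
    """Toy single-zone hourly cooling load. For each scheduled hour the AC
    must remove the steady-state gain ua * (t_out - setpoint); returns
    (total kWh over the series, peak kW). ua (W/K) is an assumed overall
    envelope conductance."""
    loads = [ua * (t - setpoint) / 1000.0
             if h % 24 in schedule and t > setpoint else 0.0
             for h, t in enumerate(hourly_temps)]
    return sum(loads), max(loads)

# A hot day (hour 0 = midnight); cooling is scheduled from 16:00:
hot_day = [24, 23, 23, 22, 22, 23, 25, 27, 29, 31, 32, 33,
           34, 34, 35, 34, 33, 32, 31, 30, 29, 28, 27, 26]
kwh, peak_kw = cooling_load(hot_day)
```

With these assumptions the simulated peak falls at 16:00, echoing the observation later in the paper that the regulated schedule produces a 4-5 pm simulated peak regardless of actual occupancy.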
Choice of Case Study
The case study was conducted in tropical Townsville (19.3°S) between November 2012 and February 2014. The seasonal climate statistics (Table 2) reveal a summer cooling-dominated climate extending from November through to March, with a relatively short autumn and spring. This regional city was selected for the case study as it lies within a constrained network area of Australia's largest electricity distribution network, covering an area of 1.7 million square kilometres (97% of Queensland). 91% of housing in this region (predominantly detached one or two storey single family homes) has air conditioners (AC), with an estimated 3.5 AC units per household. These air conditioners account for 30% of the overall residential load and 57% of residential peak demand (17:00-20:00). Based on current loads, residential energy consumption is expected to contribute approximately 28% to this region's summer afternoon peak demand (13:00-17:00, at a zone substation level) in 2025 [1]. Similar to the Italian housing context [19], air conditioners have often been installed in buildings with little or no insulation. Energy efficiency requirements were not introduced into the Australian building regulations until 2003, and even now the standards are quite low by international comparison.
A recently constructed single-family house, representative of the size and style of new homes in this region, was selected for the field study (Figures 1 and 2). This involved monitoring the electricity consumption and temperature of the house before and after the application of a cool roof acrylic coating (Thermobond HRC, Rsol 0.878, tested to ASTM C1549). Sensors (Maxim iButtons DS1922/3 with 0.5 °C temperature resolution and 5% humidity error) were installed in various locations to measure, at 30 min intervals, ambient outdoor temperature, indoor temperature and relative humidity (in living rooms and bedrooms), roof cavity temperature and external roof surface temperature (east and west). Quarterly electricity bills (based on actual consumption recorded by the Class 1 revenue meters) were used to establish the baseline electrical consumption profiles (average kWh/day) on an annual and seasonal basis. The air-conditioning load is separately metered and is on a circuit that is controllable by the electricity network. Key characteristics of the house are summarised in Table 3. Simulation was used to determine the baseline cooling load. The same house plan was then used as the basis of broader simulation studies to model the impact of cool roof acrylic paints on houses with different construction material characteristics. This method was also utilized by Dabaieh [13]. Common construction materials and practices in the region were examined and applied to the building simulation model for the selected house (Table 4). Thirty-seven different combinations of construction materials were simulated (the existing house and 36 variations), enabling simulation of a representative sample of a wide variety of common construction practices in the region.
For each combination, the annual cooling load (kWh/yr) and peak demand on hot days was determined. Daily demand curves are generated by the software, based on the COP of the air conditioner, the thermal load of the building and assumed operation times. For peak demand simulation, a split-system air conditioner with a COP of 3.1 was assumed, and the kW demand was calculated for achieving the comfort levels (26.5 °C) as determined by NatHERS. As February is the peak cooling month for this climate, the cooling demand for a hot February day was calculated, differentiating between predominantly afternoon/night loads (bedrooms), day and evening loads (living room) and whole house loads (24/7 cooling). Most new homes in the region have light coloured roofs (Rsol 0.5-0.7), with some dark roofs.
* The Rsol of the product used for this field study (an easily available retrofit product) was rounded up from 0.878 to 0.9 for the purposes of simulation (a limitation of the tool). In practice, 0.8 may be a more realistic Rsol for readily available products, accounting for a slight loss in reflectance due to aging and weathering. The predicted long term savings and DSM impacts may therefore be overestimated.
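The conversion from simulated thermal load to electrical demand follows directly from the stated COP. A one-line sketch (the 6.2 kW thermal load is an illustrative figure, not a result from the simulations):

```python
def electrical_demand_kw(thermal_load_kw, cop=3.1):
    """Electrical demand of an air conditioner meeting a sensible cooling
    load, using the COP of 3.1 assumed in the peak-demand simulations."""
    return thermal_load_kw / cop

# A hypothetical 6.2 kW thermal load on a hot February afternoon:
demand = round(electrical_demand_kw(6.2), 2)  # -> 2.0 kW electrical
```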
Seasonal Cooling and Occupancy of Field Study House
The seasonal baseline electricity consumption (average kWh/day) was calculated from historic billing information, showing a significant summer cooling load (Table 5). The five ACs are connected to a meter that enables network implemented control between 07:00-08:30 and 18:30-20:00. Temperature data from this house reveals frequent conditioning of the main bedroom overnight (22:00-07:00) and in the early morning (07:00-08:30), infrequent use of AC systems in the hours adjacent to the evening controlled time, and infrequent daytime use of the AC. Understanding how occupancy and occupant behaviour impacts on AC operation is essential in designing effective DSM strategies, as discussed later.
* Represents all household stationary energy services except hot water.
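The baseline calculation from quarterly bills reduces to dividing billed kWh by the days in each billing period. A minimal sketch, with illustrative figures rather than the case-study house's actual data:

```python
def seasonal_daily_average(bills):
    """Average daily consumption (kWh/day) from billed kWh per period, as
    used here to establish the baseline profiles. `bills` maps a season
    label to (kWh billed, days in billing period); all figures are
    illustrative assumptions."""
    return {season: round(kwh / days, 1)
            for season, (kwh, days) in bills.items()}

baseline = seasonal_daily_average({
    "summer": (2760, 92),  # Dec-Feb billing period
    "winter": (1100, 92),  # Jun-Aug billing period
})
```

Comparing the resulting seasonal averages (here 30.0 vs 12.0 kWh/day) is how a significant summer cooling load shows up in billing data.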
Simulated and Actual Cooling Load of Field Study House
Prior to any intervention to the house, the actual "cooling energy" and the simulated "cooling energy" were compared, showing a significant difference between the two loads (Table 6). As there were no significant differences between Bureau of Meteorology measured temperatures for the experimental period and the RMY data used in the simulation software, the lower than expected energy consumption is likely explained by different occupancy and operational practices than the regulatory assumptions built into the simulation tool. For example, the simulated thermal load assumes that all four bedrooms and the living room will be cooled according to Table 1 (i.e., simulations are based on the number of potentially occupied rooms, not on the number of occupants). The residents of this house, however, report that they typically cool only the main bedroom overnight during summer and use the air conditioner in the living room only on very hot days. This highlights the difficulty in using the simulation software as a guide to actual energy consumption, an issue that is discussed in more detail later.
Roof Cavity Temperatures
Roof cavity temperatures were measured in the case study house before and after the application of the cool roof acrylic paint (Rsol 0.89) on the already light coloured roof (Rsol 0.6). The effect of the cool roof coating on the roof cavity temperature was analysed by graphing the reduction in roof cavity temperature relative to outdoor ambient temperature (Figure 3), showing a greater impact at higher ambient temperatures. This is consistent with the findings of Pisello [9]. The simulation tool (BersPro) was then used to simulate roof cavity temperatures of this house plan with and without reflective foil under the roof, and with solar reflectance values of 0.5 and 0.9. Results of these simulations are shown in Table 7. Key findings from this numerical analysis are: only 2% of annual hours of RMY outdoor ambient temperatures are >35 °C (row 2); a typical roof cavity for this region experiences 35% of annual hours >35 °C (row 4); adding reflective foil under the roof (row 5) or a high reflectivity roof coating (row 6) reduces these temperatures; and combining the two (row 7) provides a roof cavity temperature profile very closely aligned to ambient temperature conditions (row 2).
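The Table 7 metric (share of annual hours above 35 °C) is straightforward to compute from an hourly series. A sketch with a toy eight-hour series rather than a full 8760-hour RMY year:

```python
def fraction_hours_above(temps, threshold=35.0):
    """Fraction of hours in a temperature series exceeding a threshold --
    the metric used in Table 7 (e.g., 2% of ambient hours vs. 35% of
    typical roof-cavity hours above 35 degrees C)."""
    return sum(1 for t in temps if t > threshold) / len(temps)

# Toy series: a cavity running ~12 K above ambient crosses 35 C far more often.
ambient = [24, 28, 31, 33, 36, 34, 30, 26]
cavity = [t + 12 for t in ambient]
amb_frac = fraction_hours_above(ambient)   # only the hottest hour exceeds 35 C
cav_frac = fraction_hours_above(cavity)    # every hour exceeds 35 C
```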
Simulated Peak Demand with House Construction Variables
The case study house design was simulated with thirty-six variations in building construction characteristics representing typical regional construction variables for wall and roof materials, roof and ceiling insulation values and glazing. The simulations for this house design consistently show that cool roof coatings (SR 0.9), regardless of the selected construction detail, delivered a reduction in peak demand (Figure 4). The houses with the greatest demand reduction potential have dark roofs (SR 0.15), whilst those with the least demand reduction potential have a high level of ceiling insulation (e.g., R 3.5) combined with other energy efficiency design measures that restrict solar gain through the building façade (e.g., tinted windows). Simulated reductions in peak demand were higher for concrete block houses (high thermal mass) than for light weight houses, although both types of houses benefit unless they already have a very high level (for this climate) of ceiling insulation. Comparing dark roofs (SR 0.15) with cool roofs (SR 0.9), Figure 4 shows peak demand reduction on a hot day ranging from 10% to 40%. The cooling load profiles for the cool roof model also revealed a significantly different load profile (shorter AC running times for cool roofs). The simulations reveal a few anomalies for light coloured roofing which require further investigation.
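The 10-40% range reported here is a simple relative comparison across the dark-roof and cool-roof simulations of each variant. A sketch with hypothetical demand values (not taken from Figure 4):

```python
def peak_demand_reduction_pct(dark_kw, cool_kw):
    """Percentage reduction in simulated peak demand when a dark roof
    (SR 0.15) variant is recoated as a cool roof (SR 0.9)."""
    return 100.0 * (dark_kw - cool_kw) / dark_kw

# Hypothetical construction variant: 5.0 kW (dark) falling to 3.5 kW (cool),
# i.e. a 30% reduction, within the 10-40% range reported for this design.
reduction = peak_demand_reduction_pct(5.0, 3.5)
```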
Discussion of the Results
The results clearly show that cool roof coatings could be broadly applied to a wide range of housing types in this climate zone, reducing energy consumption for cooling (kWh) and peak demand (kW) in individual residences. Does this automatically mean that electricity networks could include cool roofs in their DSM programs, perhaps through offering rebates? In this section we discuss the results in light of this region's DSM principles, which have an overarching focus on risk management for the network. The five principles discussed in this section are (i) certainty of load reduction at times of network peak demand; (ii) visible and measurable results; (iii) improvement in asset utilisation; (iv) ability to reward customer participation; and (v) informed stakeholders [1]. This discussion will highlight seven key conclusions for electricity distribution companies.
Certainty of Load Reduction at Peak Demand
There is little doubt that for many buildings, the application of cool roof coatings has the potential to reduce overall cooling energy demand (kWh) and peak demand (kW): a reduced thermal load can impact both the hours air conditioning is required to maintain comfort standards and the efficiency and operation of air conditioners. In this sense, cool roofs are an example of an intervention that could simultaneously address energy efficiency and demand management. What is unclear, however, is how to quantify this for multiple residences in specific constrained networks. Unlike commercial buildings, the residential sector has greater diversity of building characteristics (e.g., size, age, design and construction variables), occupancy profile (occupants per house, use of different spaces, time of use, and range of occupant behaviours) and cooling technologies and expectations (type and efficiency of air conditioner; set point). Can the simulation tools used to provide energy rating compliance certificates for houses as designed also be used to provide network certainty?
NatHERS establishes the protocols which accredited simulation tools must incorporate to model the energy performance of house designs as evidence of the design meeting the energy efficiency requirements of the Australian Construction Code. NatHERS has four main assumptions: (i) occupants will adopt a three staged approach to the achievement of comfort in summer (natural ventilation, mechanical air movement, then extraction of heat); (ii) bedrooms and living rooms will be occupied at different times; (iii) when cooling is activated, it is applied to all rooms of the same type (e.g., living rooms or bedrooms); and (iv) the cooling set point when AC is activated will be 26.5 °C (for this climate zone). This research has shown, unsurprisingly, a mismatch between occupancy patterns in houses and assumptions made by the building regulations. NatHERS tools were not meant to be reflective of actual occupancy patterns, but to enable comparison of potential energy consumption between buildings, removing the uncertainties of occupancy. For example, the peak load simulations displayed a peak load at 4-5 pm, a reflection of simulation assumptions of bedrooms being conditioned from 4 pm. The timing of actual peak load, at an individual house level, would be impacted by actual occupancy (time of day, specific occupied rooms, and thermal comfort preferences) as well as the thermal efficiency of the occupied rooms. These inconsistencies between reality and simulations, combined with the large number of variations in construction and design, make it difficult to use current simulation tools to extrapolate the effect of cool roof coatings on residential demand within a specific network at a specific time of day. This does not mean that useful information cannot be gained.
Conclusion #1: Networks can use the simulation tools to provide evidence of the types of houses that would most likely deliver demand reductions with the application of cool roofs.
The evidence from this study would suggest the electricity network might benefit from targeting the following dwelling types for cool roof retrofitting: (i) houses with dark or medium coloured roofs (solar reflectance <0.7), unless they have roof reflective foil, R3.5 ceiling insulation and other characteristics that limit solar gain (e.g., wide eaves, tinted glazing); (ii) houses with no ceiling insulation (e.g., roof foil only) or with bulk ceiling insulation (under R3) and no reflective foil insulation; and (iii) houses with high electricity bills (i.e., high AC usage in terms of number of systems, hours of use, or low temperature set points).
The results of the simulation of roof cavity temperatures (Section 3.2) are also significant for several reasons and impact indirectly on the electricity networks. First, under Australian Standards, insulation values (the R rating) are determined based on an ambient temperature of 24 °C and for temperature differences of 18, 12 and 6 K [20]. In an air conditioned house in a hot climate, however, the temperature difference between the internal room and the roof cavity during summer may be over 20 K [21]. This means that it is likely that the typically low levels of insulation installed in houses in this region are not providing the expected thermal barrier. Second, the relationship between cool roofs, roof cavity temperatures, insulation placement and insulation effectiveness has not previously been reported. Other studies have only shown how insulation level is significant in determining the impact of cool roofs [11,12]. Third, because it is easier to apply a coating to a roof surface than to retrofit reflective foil under the roof sheeting, cool roof coatings present an effective method of improving the energy efficiency of existing homes. With this in mind, the networks arguably need access to regional level building information, something which does not exist in the Australian context.
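The first point above follows from the steady-state conduction relation q = ΔT / R. A short sketch comparing a rating-condition temperature difference with the 20+ K difference observed in hot-climate roof cavities (the R2.5 value is an illustrative assumption):

```python
def ceiling_heat_flux(delta_t, r_value):
    """Steady-state heat flux through ceiling insulation, q = dT / R
    (W/m2 for R in m2K/W). Shows why a rating derived at an 18 K or
    smaller temperature difference understates summer heat gain when the
    roof cavity runs 20+ K above a conditioned room."""
    return delta_t / r_value

rated = ceiling_heat_flux(12, 2.5)    # a rating-condition difference: 4.8 W/m2
summer = ceiling_heat_flux(22, 2.5)   # hot-climate cavity over an AC room: 8.8 W/m2
```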
Conclusion #2: Networks may benefit from participating in the development of detailed and accurate regional building files [22], utilizing existing simulated data and Agent Based Models that enable simulation of different scenarios and tailored solutions for subareas of the network [23,24]. This work is in its infancy and its full potential has yet to be explored.
Visible and Measurable Benefits
Following on from the difficulty of quantifying demand reductions prior to implementing a cool roof DSM project is the challenge of measuring benefits. Arguably the only reliable measurement option from a network's perspective would be to ensure that residential air conditioners are separately metered from other household loads, allowing for the tracking of energy consumption and demand before and after intervention with cool roof coatings or other energy efficiency measures (such as insulation) that would reduce the heat load of the building. In Queensland air-conditioners are not required to be separately metered, resulting in a lack of robust data about current air conditioner loads at the household level. Some recently introduced DSM programs offer financial incentives to households to connect air conditioners to a separately controlled meter.
Conclusion #3: Networks could consider the mandatory metering of all air conditioning loads.
In addition to measurement and verification data, such metering could also provide a means of network control of the device, and provide information back to occupants to assist them in understanding and managing their cooling behaviour. This would require, however, a robust technical, economic and social analysis of the types of metering infrastructure and their respective costs and benefits [25,26], development of data management and utilisation strategies [27], and attention to data security and privacy concerns relating to advanced metering infrastructure [28,29].
Improvement in Asset Utilisation through Incorporation of DSM in Asset and Network Planning
The best return on investment for distribution networks is arguably increasing the utilisation rate of existing assets through smoothing the peaks, limiting the need to increase investment in more infrastructure. Without a means of measuring the benefit, however, there is no evidence to show that the application of cool roof coatings would improve the asset utilisation rate. Whilst it is understandable, from a corporate perspective, that distribution companies focus on their asset and network planning and return on investment, there is arguably a need for a broader systems perspective. Electricity generation, delivery and consumption are not just the business of energy companies: they are also the business of energy policy makers, building regulators, housing development and construction companies, and home owners and occupiers. Each of these parties has assets that they wish to optimise or has a 'return-on-investment' or 'cost-benefit' expectation. What if the 'asset utilisation' criterion was reframed to "How could energy efficiency and demand management strategies be better integrated to maximise value to all stakeholders?"
Conclusion #4: Networks should take a broader view of what defines the 'energy assets' and more closely align their business practices to building regulations and practices. Perhaps a starting point for the distribution companies would be to more closely align their marketing activities to building regulations, such as ensuring that the air conditioning operation guidelines for their customers (promoting a set point of 25 °C) are consistent with the regulatory simulation software (which utilises a cooling set point of 26.5 °C) for this region.
Encourage and Reward Customer Participation
This study has shown that there are energy and power benefits from the application of cool roof technologies to a range of housing typologies in this region, but that these benefits are difficult to measure and quantify from a network perspective. Whilst electricity metering and billing continues to consider only total accumulated kWh over a period of time (typically 3 monthly for residential accounts), there is little incentive for households to invest in any demand reduction strategies. Energy efficient houses that require little or no air conditioning do not get any benefit from the extra investment they have made in good design and construction to achieve occupant comfort. Air conditioners are not regulated appliances: they can be installed at the sole discretion of building owners regardless of the thermal efficiency of the building envelope and regardless of their impact on the network, yet the cost of the network asset investment required to support these appliances is spread across all customers. Queensland is moving towards 'cost reflective' pricing for residential customers, with recent (2013-2014) and near future (2015) changes to how the costs of delivering energy can be equitably recovered. This approach raises other equity issues.
Conclusion #5: Networks should consider implementing network conditions for the installation and operation of air conditioners, in a similar manner to the connection controls and conditions (in Australia) on the installation of solar power systems at residential premises. New connection and metering practices, reflecting the environmental, technical and economic benefits and limitations of all embedded technologies, need to be developed, considering the needs of all stakeholders.
Conclusion #6:
The impacts of different combinations of pricing strategies and direct load control options on different customer types in different parts of a network need to be explored. This work would require examination of a broad suite of pricing strategies (e.g., service charges, kWh charges, kW charges, critical peak pricing, real time pricing, time of use pricing, inclining block tariffs) and curtailment strategies (e.g., direct load control and aggregated curtailable loads), as well as the profiling of different residential customer types (e.g., retirees, pensioners, single person households, large families, etc.).
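To make one of the listed pricing strategies concrete, the sketch below computes a bill under a hypothetical inclining block tariff; the block sizes and rates are illustrative assumptions, not Queensland tariffs.

```python
def inclining_block_bill(kwh, blocks=((100, 0.20), (200, 0.28),
                                      (float("inf"), 0.35))):
    """Bill under a hypothetical inclining block tariff. Each block is
    (kWh available at this rate, $/kWh): the first 100 kWh are cheapest
    and marginal consumption gets progressively more expensive, which is
    the price signal such tariffs send."""
    bill, remaining = 0.0, kwh
    for size, rate in blocks:
        used = min(remaining, size)
        bill += used * rate
        remaining -= used
        if remaining <= 0:
            break
    return round(bill, 2)

low_user = inclining_block_bill(250)   # 100*0.20 + 150*0.28 = $62.00
high_user = inclining_block_bill(400)  # 100*0.20 + 200*0.28 + 100*0.35 = $111.00
```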
Educate and Inform Customers and Stakeholders
Do networks still have "customers"? The birth of the "prosumer", the rapid uptake of household solar power in Queensland, the arguably imminent commoditisation of battery storage systems and electric vehicles, the advanced functionality of metering and energy information systems, and the growing acknowledgement of the value and usefulness of big data collectively create a very different market place than that which existed even ten years ago. While some research has examined the reclassification of customers as co-producers, peers and partners [30], much more research is needed.
Conclusion #7: Electricity networks need to reconceptualise the relationships between traditional electricity generators, distributors and retailers, and 'end-users', and develop business models and relationship structures that are built on mutual trust and respect.
Conclusions
It is well known that demand side management is often the most appropriate (i.e., cost effective) response on constrained electricity networks, especially when combined with energy efficiency. This study sought to understand whether building simulation tools used for regulatory purposes could assist in determining the feasibility of cool roof coatings as a residential DSM strategy in a constrained network area where peak electrical demand is heavily influenced by residential cooling loads. This study has confirmed previous research that cool roof coatings can reduce electricity consumption and demand in residential buildings in tropical Australia: this was supported by both measured and simulated data. However, in analysing these results according to the DSM principles of distribution companies, it is clear that the simulation tools used for regulatory purposes in Australia could not provide networks with the required visible and measurable reductions, at an individual building level, that would guarantee peak demand reduction and an increase in network utilisation rates. The numerical and experimental data do, however, raise important issues for network consideration.
The paper has presented seven conclusions relevant to networks. These recommendations incorporate the need for distribution networks to think beyond their infrastructure boundaries, recognising that the broader energy system also includes buildings, appliances and people. With a view to win-win solutions, these companies could arguably play a more proactive role in promoting and encouraging cool roof technologies (and other energy efficiency measures) that impact on demand and consumption, such as more active engagement with energy and building regulators, the development and construction industries, and home owners/occupants.
Figure 1. Residential building representing the case study.
Figure 2. Case study floor plan with location of sensors.
Figure 4. Simulated peak demand (kW) for 36 variations of the whole house on a hot summer day.
Table 1. Summer and winter cooling and heating schedule (as per NatHERS).
Table 2. Climate conditions and location of the case study *.
Table 3. Field study house characteristics.
Table 4. Common construction materials used in simulations.
Table 6. Simulated and actual cooling loads.
Table 7. Effect of roof insulation and roof reflectance on roof cavity temperature.
16296914 | pes2o/s2orc | v3-fos-license | The Interaction between HIV and Intestinal Helminth Parasites Coinfection with Nutrition among Adults in KwaZulu-Natal, South Africa
In South Africa few studies have examined the effects of the overlap of HIV and helminth infections on nutritional status. This cross-sectional study investigated the interaction between HIV and intestinal helminths coinfection with nutritional status among KwaZulu-Natal adults. Participants were recruited from a comprehensive primary health care clinic and stratified based on their HIV, stool parasitology, IgE, and IgG4 results into four groups: the uninfected, HIV infected, helminth infected, and HIV-helminth coinfected groups. The nutritional status was assessed using body mass index, 24-hour food recall, micro-, and macronutrient biochemical markers. Univariate and multivariate multinomial probit regression models were used to assess nutritional factors associated with singly and dually infected groups using the uninfected group as a reference category. Biochemically, the HIV-helminth coinfected group was associated with a significantly higher total protein, higher percentage of transferrin saturation, and significantly lower ferritin. There was no significant association between single or dual infections with HIV and helminths with micro- and macronutrient deficiency; however general obesity and low micronutrient intake patterns, which may indicate a general predisposition to micronutrient and protein-energy deficiency, were observed and may need further investigations.
Background
Approximately 2 billion (24%) of the world's population is infected with intestinal helminth parasites, with high prevalence occurring in poor and deprived communities in tropical and subtropical regions, including sub-Saharan Africa [1]. Helminths may impair the nutritional status of infected individuals [2]. In sub-Saharan Africa the geographic overlap between the human immunodeficiency virus (HIV), intestinal helminth parasites, and malnutrition may have an additive impact on the competency of the immune system in affected hosts [3,4]. This triple burden may lead to accelerated HIV and helminth disease progression [5][6][7]. Potent immune responses and adequate nutrition are essential to resist infectious agents. Research suggests that individuals who are coinfected with HIV and helminths have lower biochemical levels of micronutrients [8], as well as of carbohydrate and protein macronutrients [4,9]. It has been reported that deficiencies of protein, energy, and micronutrients including iron, zinc, and vitamins impair competent cell-mediated and humoral immune responses, and the link to increased susceptibility to HIV and helminth coinfection in such cases has been demonstrated [10,11]. Thus, micronutrient and macronutrient deficiencies may predispose individuals to HIV and helminth coinfection as well as lead to exacerbated HIV progression, resulting in a vicious cycle of malnutrition, infection, and immune deficiency. A significant proportion (approximately 54%) of South Africans live under conditions of poverty [12]. Furthermore, KwaZulu-Natal (KZN), a province of South Africa, has a significant proportion of the population living in environments that lack adequate sanitation (22.7%) and safe water supplies (15.8%) [13]. In these areas, the standard of living is generally poor and intestinal helminth infections are highly prevalent [14].
Prevalence of intestinal helminths among adults was found to be 11.2% in the inland region, 30.3% in the north coast region, and 29.2% in the south coast region [15]. KwaZulu-Natal also has the highest HIV prevalence in South Africa, reported to be 37.4% in 2014 compared to the national estimate of 10.2% [16]. However, despite these data, studies of the possible deleterious effects of HIV and helminth coinfection on nutritional status among adults in KZN are lacking. This study investigated the interaction of HIV and intestinal helminth coinfection with nutritional status as measured by body mass index (BMI) and biochemical micro- and macronutrient markers, against food intake levels, in a periurban informal setting in KZN.
Study Setting.
The study was conducted in a periurban area, randomly selected from eThekwini enumeration areas under the eThekwini Health District in the KZN province of South Africa. It comprises approximately 39,000 households, of which approximately 30% are informal settlements [17]. Poverty is widespread in this area of low-income households, and approximately 34% of the population were not economically active [17]. There is generally poor access to facilities in the area [18], with about 60% of households not having piped water inside the dwelling [17].
The study site was a comprehensive primary health care clinic, providing all essential health care services, including HIV counselling and testing (HCT). Recruitment was therefore purposively conducted in this clinic. By default, the majority of clinic attendees were female.
Recruitment and Selection of Study Participants.
Ethical approval to conduct the study was obtained from the University of KwaZulu-Natal Biomedical Research Committee (BREC Ref: BE 230/14). Permission to conduct the study was granted by the Provincial and eThekwini Health District office and the KZN Provincial Department of Health. The local political authorities granted permission to conduct the study in their area, after a series of meetings where the study objectives were explained and discussed.
During the recruitment process, information sessions were held in the reception area, to inform all the clinic attendees about the study. Those willing to participate were individually given further information. After ensuring that the potential participants fully understood the study, they were asked to give informed consent. They then underwent HIV pretest counselling at the HCT clinic. Eligible participants were adults who were 18 years of age and older, not on antiretroviral therapy, and not pregnant, if female. The enrolment process is outlined in Figure 1.
Ethical Considerations.
The study commenced only after ethical approval and permissions from the relevant authorities were obtained. All eligible participants gave written consent before enrolment into the study. Participants were tested for HIV status for the purpose of allocating them to either a study or a reference group. Pre- and post-HIV test counselling was provided. HIV infected individuals who had CD4 counts below 350 cells/µl were referred to the HCT clinic and were excluded from participating in the study for ethical reasons: the country guidelines recommend the protection of vulnerable individuals such as very sick or severely immunocompromised persons. Likewise, for classifying helminth infection status, participants were screened for intestinal parasites. Those who were found to be infected were referred to the clinic for anthelminthic treatment.
Study Design and Sample Size.
A cross-sectional survey of HIV and intestinal helminth prevalence, including the investigation of nutritional status, was conducted between June 2014 and May 2015 in the eThekwini Health District in KZN. The objective was to describe the nutritional status of individuals infected singly or dually with HIV and intestinal helminths in comparison with noninfected counterparts. A sample size of 229 adults was calculated to detect an effect size of 0.4 with 80% power at the 95% confidence level between the study groups. The study sample was to include 160 adults not infected with parasites and 69 infected with parasites, assuming that 30% of adults in KZN are infected with parasites, based on the 20.4% prevalence reported among KZN adults [15]. Fifty percent of the study sample would be coinfected with HIV and 50% would not be infected with HIV, assuming that 50% of KZN adults are HIV infected, based on the 2011 HIV prevalence of 37.4% among antenatal women in KZN and the 2011 HIV prevalence of 38% in the eThekwini district [19].
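As a rough illustration of this kind of power calculation, the sketch below uses the standard normal-approximation formula for a two-group comparison of means, implemented with only the Python standard library. The formula and the resulting numbers are generic and illustrative; the study's figure of 229 reflects its specific unequal group sizes and prevalence assumptions, not this exact formula.

```python
import math
from statistics import NormalDist

def n_per_group(effect_size: float, power: float = 0.80, alpha: float = 0.05) -> int:
    """Sample size per group for a two-group comparison of means,
    normal approximation: n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_power = NormalDist().inv_cdf(power)
    return math.ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)
```

With the paper's parameters (d = 0.4, 80% power, alpha = 0.05), `n_per_group(0.4)` gives 99 participants per group under equal allocation.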
Measures.
Participants were tested for HIV and were screened for intestinal helminth parasites. Demographic and socioeconomic status data were collected using a structured questionnaire. Nutritional status was assessed using anthropometric measurements, micro- and macronutrient markers, and 24-hour food recall. Microscopic examination of stool can miss infections [20] when a sample does not contain many eggs, which may be caused by light infections or by day-to-day variation in egg excretion [21]. Egg excretion depends on immune responses to the parasite infection and genetic and environmental factors [22]. Adams et al. [22] recommended that analyses of the interaction between HIV and helminths should not be based only on the detection or nondetection of eggs in stool samples, since individuals who are infected with larval-stage parasites or male worms only, which cannot produce eggs, would be excluded. Hence, serological diagnosis of intestinal helminths, using Ascaris-specific IgE and Ascaris-specific IgG4 levels, was performed to supplement the conventional microscopic diagnosis of helminth infection [20,22,23]. Blood samples collected from each participant by a trained phlebotomist were assayed for Ascaris-specific IgE and Ascaris-specific IgG4 levels in a South African National Accreditation System (SANAS) accredited pathology laboratory, using the Phadia ImmunoCAP method.
Ascaris-specific IgE and IgG4 antibodies show cross-reactivity between the antigens of different helminth parasites, including Trichuris trichiura [24,25]. Cut-off values of Ascaris-specific IgE and Ascaris-specific IgG4 were 0.35 kU/l and 0.15 kU/l, respectively, and any levels above these cut-off values were considered high. Infection with intestinal helminths was defined by the presence of helminth eggs or ova in the stool samples and/or by high levels of Ascaris-specific IgE and/or IgG4 in serum.
The participants were stratified, based on the HIV, stool, IgE, and IgG4 results, into four groups: (1) coinfected with HIV and intestinal helminths, (2) infected with only HIV, (3) infected with intestinal helminths only, and (4) not infected.
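The stratification rule described above (stool microscopy combined with the IgE/IgG4 cut-offs of 0.35 and 0.15 kU/l) can be sketched as a small Python function. The function and parameter names are illustrative, not taken from the study's own analysis code.

```python
def classify(hiv_positive: bool, eggs_in_stool: bool,
             ige_kU_per_l: float, igg4_kU_per_l: float) -> str:
    """Assign a participant to one of the four study groups.

    Helminth infection is defined as eggs/ova in stool and/or
    Ascaris-specific IgE > 0.35 kU/l and/or IgG4 > 0.15 kU/l.
    """
    helminth = eggs_in_stool or ige_kU_per_l > 0.35 or igg4_kU_per_l > 0.15
    if hiv_positive and helminth:
        return "HIV-helminth coinfected"
    if hiv_positive:
        return "HIV only"
    if helminth:
        return "helminth only"
    return "uninfected"
```

For example, an HIV-positive participant with no eggs in stool but IgE of 0.5 kU/l is classified as coinfected, because the serological criterion alone establishes helminth infection.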
Nutritional Status
Anthropometric Measurements. Weight and height were measured using a calibrated Kern MPE scale (Kern & Sohn, Germany). Participants were weighed in light clothing, without shoes. The scale calculated and displayed the BMI after the weight and height were keyed in. BMI (kg/m²) was classified using the cut-off points established by the World Health Organization [26]: underweight (<18.5), normal weight (18.5-24.9), overweight (25-29.9), and obese (≥30), for both males and females.
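The WHO cut-offs listed above translate directly into code; this minimal sketch assumes weight in kilograms and height in metres.

```python
def bmi_category(weight_kg: float, height_m: float) -> str:
    """Classify adult BMI (kg/m^2) using the WHO cut-off points."""
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        return "underweight"
    if bmi < 25:
        return "normal weight"
    if bmi < 30:
        return "overweight"
    return "obese"
```

For example, 70 kg at 1.70 m gives a BMI of about 24.2 and is classified as normal weight.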
Nutrient Adequacy Ratios (NARS) Analysis for Micro- and Macronutrient Intake. Trained fieldworkers administered a structured questionnaire to collect 24-hour food recall data from the enrolled participants. Two food recall interviews were conducted to record the food items, and their quantities, consumed by each participant on the day before the interview: the first covered a weekday and the second a weekend day. Beverages, regular and special meals, and between-meal snacks consumed, and how they were prepared, were recorded. Three-dimensional food models and a food model booklet were used to estimate food quantities and meal portions. Demographic data indicate that most of the interviewees were the people in their households mainly responsible for preparing and cooking meals. Data from the two food recalls were then averaged and nutrient adequacy ratios (NARS) were calculated by a trained nutrition specialist. A nutrient adequacy ratio is a nutrient's intake divided by the recommended daily requirement for that nutrient [27].
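A minimal sketch of the NAR calculation described above, assuming two recall values per nutrient (one weekday, one weekend day) and expressing the ratio as a percentage of the recommended daily requirement, which matches how the results are reported (e.g. against "the 100% required daily intake"):

```python
def nutrient_adequacy_ratio(weekday_intake: float, weekend_intake: float,
                            recommended_daily: float) -> float:
    """NAR as a percentage: the mean of the two 24-hour recall intakes
    divided by the recommended daily requirement for that nutrient."""
    mean_intake = (weekday_intake + weekend_intake) / 2
    return 100 * mean_intake / recommended_daily
```

For example, recalls of 800 mg and 1000 mg of calcium against a 1000 mg requirement give a NAR of 90%, i.e. below the 100% required daily intake.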
Biochemical Analysis of Micro- and Macronutrients. Biochemical and haematologic analyses were conducted in a SANAS accredited pathology laboratory. The following biochemical markers of nutrition were analysed using a spectrophotometric autoanalyser: macronutrients (total protein, albumin, and prealbumin) and micronutrients (calcium, magnesium, phosphate, zinc, iron, and ferritin). Haemoglobin, haematocrit, white cell count, and differential count levels were assayed with a haematology autoanalyser using flow cytometry and sodium lauryl sulphate (SLS) haemoglobin methods.
Statistical Analysis.
Descriptive statistics were used to summarize the data. Differences between the infected and uninfected groups were assessed using the Kruskal-Wallis test for categorical variables and the Wilcoxon signed rank sum test for continuous variables (p < 0.001). The outcome variable has four levels (uninfected, HIV singly infected, helminth singly infected, and HIV-helminth coinfected), i.e., a multinomial outcome. Therefore, univariate and multivariate multinomial probit regression models were used to assess nutritional factors associated with each infected group (HIV singly infected, helminth singly infected, and HIV-helminth coinfected), with the uninfected group as the reference category. Final multivariate models of the effects of independent variables associated with each group are presented. Regression coefficients with 95% confidence intervals (CI) are reported to indicate the strength and direction of association, and a p value ≤ 0.05 was taken to indicate statistical significance. Data were analysed using STATA 12.0 (Stata Corporation, College Station, Texas, USA), SPSS version 23 (IBM Corporation, NY, USA), and GraphPad Prism version 5.01 (GraphPad Software, Inc., USA).
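The multinomial probit estimation itself was carried out in the statistical packages named above; as a small explanatory illustration of the underlying probit link, the stdlib sketch below shows how a linear predictor xβ maps to a probability via the standard normal CDF in the binary case. The multinomial probit generalises this through latent utilities with multivariate-normal errors; this sketch is not the study's estimator.

```python
from statistics import NormalDist

def probit_probability(linear_predictor: float) -> float:
    """Binary probit link: P(y = 1 | x) = Phi(x*beta), the standard
    normal CDF of the linear predictor. In a multinomial probit model
    each non-reference category has its own set of coefficients, and
    probabilities come from a multivariate-normal integral instead."""
    return NormalDist().cdf(linear_predictor)
```

A linear predictor of 0 corresponds to a probability of 0.5, and larger positive coefficients push the probability toward 1.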
Sociodemographic Profile of the Study Participants.
Out of a total of 263 enrolled participants (Figure 1), the majority (91.6%; n = 241) were female. The average age of the study participants was 36 years, ranging from 18 to 83 years. The majority were generally poor and 91.3% (n = 240) were unemployed. Some relied on government grants, either a pension (n = 39; 14.9%) or a child support grant (n = 93; 35.5%), as their main source of income, and 31.2% (n = 82) were dependent on their parents for their livelihood. The education level of this population was low: only 3.3% had tertiary education and 67.7% (n = 178) had secondary-level education, a few up to 12th Grade. About 33% were unable to access clean water; they reported having to share a public tap or use neighbours' taps or tanks and boreholes. Most of the population (54.8%) used pit latrines, 7.6% reported not having any toilet facilities, and some used public toilets. The micronutrient NARS analysis showed that generally the intake of the micronutrients analysed was similar across all the groups, with the exception of iodine, which was highest among the coinfected group, and vitamin B12, which was lowest among this group (Figure 2). Further analysis showed that the intake of various micronutrients was lower than the required daily intake (100%) for all the groups, including calcium, magnesium, selenium, iodine, vitamin A, riboflavin (vitamin B2), pantothenate (vitamin B5), folate, vitamin B12, biotin (vitamin H), vitamin C, vitamin D, vitamin E, and vitamin K. Phosphate, zinc, and thiamin (vitamin B1) were, however, close to the normal required intake. Intake levels for iron, niacin (vitamin B3), and vitamin B6 were higher than the 100% required daily intake for all the participant groups.
The macronutrient NARS analysis showed that the coinfected group did not differ from the other groups: all groups had low intake levels of all the macronutrients except carbohydrates. All the participant groups had a low mean intake (less than the 100% required daily intake) of energy, total protein, total fat, and total fibre (Figure 3). Notably, the intake of carbohydrates was higher than the daily required quantity in all the participant groups, well above 100%, and was highest in the HIV infected and the coinfected groups (Figure 3).
Analysis of macronutrient contributions to energy against the acceptable macronutrient distribution ranges (AMDR: fat 15-30%, protein 10-15%, and carbohydrate and fibre 55-75%) showed that all the participant groups had lower contributions of total fat and total protein, less than 30% and 15%, respectively. The contribution of carbohydrate and fibre to energy was within the acceptable range for all participant groups (Figure 4).
Nutritional Status
Anthropometry. The body mass index (BMI) measures of nutritional status among HIV singly infected, helminth singly infected, HIV-helminth coinfected, and uninfected participants are described in Table 2, showing differences between the participant groups, although these were not statistically significant (p = 0.089). In the uninfected group 39.0% of the participants were overweight and 51.3% were obese. In the HIV-helminth coinfected group 16.9% were overweight and 5.3% were obese. The helminth singly infected group had 26.3% of participants who were obese. The proportion of participants who were underweight was low (n = 6). Of these, 50% were HIV infected, 33.3% were helminth infected, and only 16.7% were coinfected; none of the uninfected group were underweight.
Biochemical and Haematologic Analysis.
The biochemical and haematologic measures of nutritional status among HIV singly infected, helminth singly infected, HIV-helminth coinfected, and uninfected participants are described in Table 2. Except for BMI and phosphate, there were statistically significant differences in biochemical and haematologic measures across the groups (p < 0.001). The median micronutrient levels varied among the groups, although all were within the reference ranges, with transferrin and ferritin levels being lower in the coinfected group compared to the reference group. Percentage transferrin saturation levels were higher in the HIV infected and the coinfected groups compared to the other groups. C-reactive protein, a marker of inflammation, was within range for all the participant groups.
The median biochemical levels of macronutrients (total protein, albumin, and prealbumin) varied among the groups although all were within the reference ranges. Total protein levels were lowest in the uninfected group and highest in both the HIV infected and the HIV-helminth coinfected groups. Albumin levels were lowest in the HIV infected group and highest in the uninfected group. Prealbumin levels were lowest in the HIV infected group and highest in the helminth infected group.
The haematology parameters revealed levels that were within the reference ranges. However, the HIV and the coinfected groups had lower haemoglobin levels compared to the other groups. The absolute eosinophil count levels were highest in the helminth infected group compared to the other groups.
Associations between HIV and Helminth Coinfection and Single Infection with Nutritional Status.
The estimated coefficients of the multivariate multinomial probit model are presented in Table 3. BMI was not statistically significant in any of the infection groups. Relative to the uninfected group, the HIV-helminth coinfected group was associated with a significant increase in total protein (β = 0.16).
Discussion
In many regions of developing countries, malnutrition is superimposed with endemic helminth and HIV infections. The findings of this study showed that the prevalence of HIV (36.1%) and helminths (36.1%) was high in this adult population (the majority of whom were females), with notable levels of HIV-helminth coinfection. This was against the backdrop of scant data on the prevalence of intestinal parasites in adults of KZN, where most of the prevalence studies have been conducted in schoolchildren. The only other study on prevalence of intestinal helminth parasites in KZN among adults found overall moderate levels of helminth prevalence (20.4%) in the eThekwini district [15]. The higher HIV prevalence in this study is to be expected given the fact that the study site was situated in eThekwini district which has one of the highest HIV prevalence rates in KZN, with a 38% HIV prevalence rate among antenatal women being reported in 2011 for this district [19].
The majority of participants who were obese or overweight (66.3%) were in the uninfected group. Nutrient adequacy ratio analysis revealed a significantly increased carbohydrate intake in all groups, much above the recommended dietary allowance [28]. Increased carbohydrate intake causes weight gain leading to obesity [29]. This may be expected, as a significant proportion of adults in the general South African population are reported to be overweight or obese. The South African National Health and Nutrition Examination Survey (SANHANES-1) established that 25% and 40.1% of women are overweight and obese, respectively, and 19.6% and 11.6% of men are overweight and obese, respectively [30]. This is probably due to the general consumption of diets rich in refined carbohydrates [29,31], which is in line with the excessive carbohydrate intake revealed in the current study.
Despite the substantially elevated carbohydrate intake in this study population, energy intake was low. This may be attributed to the fact that fat and protein intake were below the recommended daily intake. Protein and carbohydrates contribute less energy per gram (16.8 kilojoules each) than fat (37 kilojoules per gram) [32,33]; protein, carbohydrates, fat, and fibre together make up the required 100% of energy. The contribution of total protein to daily energy intake was lower than the recommended 15% [34] for all the participant groups. This low protein-energy intake may predispose all the participant groups to protein-energy malnutrition.
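The energy-contribution arithmetic described above can be sketched as follows, using the energy densities cited in the text (16.8 kJ/g for protein and carbohydrate, 37 kJ/g for fat). Fibre is omitted here for simplicity, so the numbers are illustrative rather than a reproduction of the study's calculation.

```python
# Energy densities (kJ per gram) cited in the text; fibre omitted for simplicity.
KJ_PER_G = {"protein": 16.8, "carbohydrate": 16.8, "fat": 37.0}

def energy_contribution_pct(grams: dict) -> dict:
    """Percentage contribution of each macronutrient to total energy intake."""
    energy = {nutrient: grams[nutrient] * KJ_PER_G[nutrient] for nutrient in grams}
    total = sum(energy.values())
    return {nutrient: round(100 * kj / total, 1) for nutrient, kj in energy.items()}
```

For example, a day's intake of 75 g protein, 300 g carbohydrate, and 70 g fat gives about 14.2% of energy from protein, below the recommended 15%, mirroring the pattern reported in Figure 4.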
The results of anthropometric measurements revealed that underweight was more common among the infected groups: of the underweight participants, 50% were HIV infected, 33% helminth infected, and 17% coinfected, while none of the uninfected group were underweight. This concurs with the fact that weight loss and wasting are associated with HIV infection and some helminth infections, through a variety of mechanisms including increased energy requirements and/or reduced dietary intake and absorption, reduced appetite, inflammatory cytokines, and diarrhoea [4,35].
Further analysis showed that the HIV singly infected group was associated with higher total protein and lower albumin biochemical levels compared to the levels of the uninfected group. Similar observations were made in the HIV-helminth coinfected group with regard to significantly higher total protein accompanied by lower albumin levels. Total protein and albumin are serum proteins synthesized by the liver that are not only affected by nutritional status, but by inflammation and infection [36]. Total protein comprises albumin and globulin fractions. Albumin in healthy individuals is highest in concentration in serum, usually 60% of the total protein [37]. HIV infection induces a nonspecific expansion of the globulin fraction due to the polyclonal stimulation of B cells in response to the acute or chronic stages of the infection and associated opportunistic infections [38,39]. Thus, the higher total protein seen in both the HIV singly infected group and the HIV-helminth infected group may have been as a result of prioritization in the formation of globulins and acute phase proteins in response to the HIV infection [37] and, proportionately, reduced albumin levels.
On the other hand, lower albumin levels may have been due to the increased rate of transcapillary leak of albumin into the interstitial fluid associated with infection [40]. Both HIV and helminth infections have acute and chronic stages, resulting in chronic activation of the immune system. However, in this study, it was not possible to determine the stages of both HIV and helminths as this was not within the scope of the study objective. The low albumin finding in the current study is corroborated by a similar finding in the Kannangai et al. [41] study of HIV infected individuals, where albumin levels were low as well.
The food recall NARS analysis also revealed a generally low micronutrient intake: the median intake of calcium, magnesium, selenium, iodine, vitamin A, vitamin B2, vitamin B5, vitamin B12, vitamin C, vitamin D, vitamin E, vitamin H, vitamin K, and folate was low for all the participant groups. The expected finding would be similarly low biochemical levels, since intake levels were low [42]. It had been hypothesized that all the micronutrient biochemical levels would be low in the study groups singly infected or coinfected with HIV and helminths. HIV infection has been reported to predispose to micronutrient deficiency [43] and, likewise, helminth infections have been associated with deficiency of most of the micronutrients [44]. However, in this study, discrepant results were found: biochemical analyses showed that these micronutrients were within the reference range for all the participant groups. Biochemical markers as an indication of nutritional status are more reliable than food intake questionnaire data, and food intake data should be used as evidence of food variety rather than to indicate nutritional status [45,46]. This discrepancy could have been due to underreported 24-hour food recall data, as actual food intake may have been omitted consciously or by accident, leading to the discrepancy between infection status and biochemical micronutrient levels [47,48], or the participants may have been taking supplements and failed to declare these during the collection of data [49]. Food recall data collected over 3 days, accompanied by a food frequency questionnaire, may have given a more holistic indication of dietary consumption [50,51].
Further analysis showed that the HIV-helminth coinfected group was associated with significantly lower ferritin levels, although percentage of transferrin saturation levels were higher with nonstatistically significant lower transferrin levels. Low ferritin levels are typical of iron deficiency anaemia [52]. However, iron intake levels were higher than the daily required quantity for all the participant groups. Intestinal parasitic helminths are associated with iron deficiency anaemia [9,53] and HIV on its own is also associated with iron deficiency anaemia [54]. Intestinal helminths source nutrients from the host for their own growth, while the infection itself, either caused by HIV or helminths, may increase the host's need for nutrients [55]. Thus, the lower ferritin levels in the coinfected group may indicate subclinical iron deficiency. Subclinical iron deficiency, even though it may be mild, impacts on the physiological functions that drive the development of cells and their metabolic function, which would have an effect on the immune system action against the HIV-helminth coinfection [56]. Moreover, anaemia in the HIV-helminth coinfection may lead to increased HIV progression, increased mortality, and poor quality of life [57]. Mupfasoni et al. [58], in Rwanda, however, found no association of intestinal helminth parasite infection with anaemia, although the authors attributed this to the fact that anaemia was uncommon in their study area.
Although the eosinophil counts were within range for all the groups, the helminth infected and the dually infected groups had significantly higher levels compared to the other groups. These results are in keeping with the classic feature of helminthiasis. These infections are associated with increased production of eosinophils [59,60], which are reported to decrease significantly after deworming [61,62].
There was no significant association observed between HIV-helminth coinfection or single infection and micro- and macronutrient deficiency. However, the results highlighted the various micro- and macronutrient intake patterns in the population. Low intake levels of calcium, magnesium, selenium, iodine, vitamin A, vitamin B2, vitamin B5, folate, vitamin B12, biotin (vitamin H), vitamin C, vitamin D, vitamin E, vitamin K, total protein, and energy were noted in all the participant groups. This may indicate a general predisposition to micronutrient and protein-energy deficiency in the study participants and may need further nutritional investigation.
Limitations of This Study
The cross-sectional design of this study is limited to determining associations only and cannot infer causality. A prospective cohort study design with randomised sampling would be recommended for such an investigation. The small sample size may have resulted in the inability to detect a significant association between macro- and micronutrient levels and the coinfection. Moreover, the use of self-reported food recall data collected over two days, which relies on memory and correct estimation of quantities, is a limitation, although the value of the data is recognised since it indicated the food intake patterns in the population. Energy intake of the study participants was only about 50% of the reference nutrient intake, yet the prevalence of overweight and obesity was almost two-thirds; therefore energy intake may have been underestimated. Fat intake may also have been underestimated, as the 24-hour dietary recall may not capture cooking oil intake. This could result in inaccurate estimates of macronutrient contribution to energy. In addition, the study used biased sampling, since recruitment was from individuals who attended the HCT clinic, and thus the findings cannot be generalised to the population of the area where the study was conducted. Furthermore, the fact that the stool samples were screened microscopically for intestinal helminth parasite eggs and ova only on the following day is a limitation, although they were prepared and preserved on the day of collection. This could have significantly affected the ability to detect hookworm eggs, since these rapidly disintegrate upon storage of stools. Nevertheless, this study adds value to the less studied but growing research area of the impact of HIV-helminth infection on nutritional status in sub-Saharan Africa.
Concluding Remarks
Helminth infection is a neglected disease globally, with more attention and priority given to HIV/AIDS, TB, and malaria. The high prevalence of helminth infection observed in this adult population warrants attention, especially since HIV is endemic in the area. However, there was no significant association between single or dual HIV and helminth infection and micro- and macronutrient deficiency in this population. The frequent occurrence of obesity and overweight, an additional health burden in South Africa possibly due to excessive carbohydrate intake, and the generally low intake of micronutrients and protein-energy macronutrients observed in this study require further nutritional investigation, and the current South African Department of Health campaign on healthy lifestyles needs strengthening [63]. Future studies should investigate the nutritional, parasitic, and infectious conditions that may act as cofactors for rapid progression of HIV infection [64].
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Bioorthogonal Labeling Reveals Different Expression of Glycans in Mouse Hippocampal Neuron Cultures during Their Development
The expression of different glycans at the cell surface dictates cell interactions with their environment and other cells, being crucial for the cell fate. The development of the central nervous system is associated with tremendous changes in the cell glycome that is tightly regulated. Herein, we have employed bioorthogonal Cu-free click chemistry to image temporal distribution of different glycans in live mouse hippocampal neurons during their maturation in vitro. We show development-dependent glycan patterns with increased fucose and decreased mannose expression at the end of the maturation process. We also demonstrate that this approach is biocompatible and does not affect glycan transport although it relies on an administration of modified glycans. The applicability of this strategy to tissue sections unlocks new opportunities to study the glycan dynamics under more complex physiological conditions.
Introduction
Glycans displayed at the cell surface determine the cell interactome and fate [1,2]. In the nervous system, glycoconjugates play central roles in development, regeneration and synaptic plasticity [3,4]. They participate in the formation of a complex molecular network (both at the cell surface and in the extracellular matrix) that mediates recognition processes and triggers specific pathways. Their fundamental role in the central nervous system (CNS) is evidenced by the neuropathological and psychomotor incapacities of patients with congenital glycosylation diseases [5][6][7].
The structural diversity of glycans provides a myriad of possible combinations that allow fine regulation of the cell interactome: glycosylation patterns are cell-specific and in the brain, they are tightly regulated during different development stages [4,8]. Therefore, approaches that allow to
Metabolic Labelling of Mouse Hippocampal Neuronal Cultures
Unlike the other biomacromolecules (polynucleotides and proteins), carbohydrates are not a genetic product. Thus, unnatural metabolic precursors can be interspersed in the carbohydrate biosynthetic pathways [31,32]. The incorporation of unnatural monosaccharides bearing reactive functional groups into cell-surface glycoconjugates provides a scenario in which the glycan can be further elaborated with an exogenously delivered imaging reagent [31]. The selectivity and rate of the reaction between the imaging agent and the incorporated carbohydrate determine the success of this strategy. Thus, the choice of the functional group/labelling reaction is crucial [20]. Among different possibilities, the copper-free azide-alkyne cycloaddition and the Staudinger ligation are better options for studies involving living cells and organisms. Herein, we have selected the copper-free azide-alkyne cycloaddition, as the product stability and reaction rate of the Staudinger ligation can be compromised under in vivo conditions [33]. We prepared N-azidoacetylmannosamine (ManNAz), N-azidoacetylglucosamine (GlcNAz) and 6-azidofucose following previously described procedures (Scheme S3). The membrane penetration of unnatural metabolites is a key point of this approach and thus, the compounds were peracetylated to improve their uptake by the cells.
The obtained unnatural metabolic precursors were fed into the culture medium of mouse hippocampal cells. To select the time points at which the supplementation would be performed, we first studied the differentiation of neuroprogenitor cells into neurons by immunofluorescence with key neuron markers (β actin, βIII Tubulin) and glial fibrillary acidic protein (GFAP) as an astrocyte marker. We observed a differentiation process up to 14 days (Figure 1). At this time point mainly neurons (Figure 1c, βIII Tubulin staining in green) and very few astrocytes are visible (Figure 1c, GFAP labeling in red) and thus, we selected it as an end point for our experiments. Because at day 3 very few cells were positive for βIII Tubulin (Figure 1b), we selected day 7 as an intermediate time point for metabolic labeling. Furthermore, the choice of days 7 and 14 was made in order to avoid overcrowded cultures because: (i) it would be harder to image individual cells as well as cell-to-cell contacts; (ii) the supplemented sugars would be split between more cells, lowering the signal.
At day 7 and 14, cultures were supplemented with the unnatural metabolic precursors containing azido groups. Cells were allowed to metabolize the supplemented carbohydrates for 24 h and then the labelled cyclooctyne was introduced to initiate the click reaction. We tested different reaction conditions and found that for the studied cell cultures cyclooctyne concentrations of 50 µM and a reaction time of 1 h gave the optimal output.
The supplementation of the azidocarbohydrates resulted in different fluorescence intensities among the tested carbohydrates and culture times. At day 7, the highest intensity is visible for the cells supplemented with Ac4ManNAz (Figure 2a), lower for the Ac4GlcNAz-supplemented cultures (Figure 2b), and a faint signal for Ac4FucAz reveals less fucose at this stage (Figure 2c). Fluorescence is visible along the cell body but also throughout the dendrites for cells supplemented with the mannose and glucose analogs (Figure 2).
At day 14, when the cell culture is mostly composed of differentiated cells (Figure 3a2-d2), the glycosylation pattern is different: incorporation of fucose increases and matches mannose and glucose derivative levels in neuronal glycoproteins (Figure 3a1-d1). These results indicate a decrease in mannose and glucose derivative incorporation during the differentiation process and an increase in glycoprotein fucosylation.
Sialylation (and more specifically polysialylation) and fucosylation are major post-translational modifications occurring in carbohydrate-carrying molecules, e.g., proteins, in the nervous system. These post-translational functionalizations are related to proliferation, migration and differentiation of neural progenitors [4]. Higher expression of mannose/glucose-containing glycoproteins at day 7 might indicate abundant sialylation, as either mannose or glucose can be metabolized by cells to sialic acid [36]. Polysialylated Neural Cell Adhesion Molecule (NCAM) is associated with neuritogenesis and neurite outgrowth of hippocampal neurons in culture. The fact that these processes are very intensive within the first days of culture [29] can explain the results obtained with Ac4ManNAz and Ac4GlcNAz incorporation (Figure 2). While polysialylation gradually decreases [37], fucosylation increases with neuronal maturation [38]. Fucosylated glycoproteins are involved in neuronal communication. Their expression changes extensively during the course of neuronal development in mouse hippocampal tissue and during maturation of neurons in culture [38]. These previous results agree with our finding that fucose becomes more abundant with differentiation of neuroprogenitor cells (Figure 3).
Gene Expression Levels of Carbohydrate Transporters
Glycosylation is incomparably crucial and, therefore, tightly controlled in neurons [4]. The formation of glycan linkage is catalyzed by highly selective glycosyltransferases (GT) with specificity both for substrate and donor nucleotide carbohydrate. The results described above showed that neuronal GT tolerate the use of unnatural azido adducts. We have used low concentration to avoid changes in the machinery used by the cell to transport and modify these molecules. To confirm that the carbohydrate transporters and transferases are not affected by these unnatural molecules, we performed RT-PCR analysis for the expression level of the respective genes (Figures 4-6).
Apart from a statistically significant (* p < 0.05) increase in Glut1 expression after the addition of azido-modified glucose at day 7 in culture (Figure 5), no significant differences were observed in the gene expression of mannose (Figure 4) and fucose (Figure 6) transporters and transferases upon addition of azido-modified carbohydrates to neuronal cell cultures. These results indicate that the conditions used (24 h of exposure to the carbohydrate analogs at a low concentration of 50 µM) do not significantly affect the de novo expression of carbohydrate transporters and transferases.
The significant change in Glut1 expression indicates that brain cells are highly responsive and sensitive to glucose fluctuations. Mammalian brain cells use glucose as a main source of energy; therefore, they depend on the tight regulation of glucose metabolism for proper physiological brain function. Disruption of normal glucose homeostasis is the pathophysiological cause of many brain disorders [39]. In euglycemic conditions, the glucose concentration in plasma is around 5-8 mM, which corresponds to brain levels of approximately 1-2.5 mM [40]. The concentration used in this study (50 µM) is far below this range and therefore is not expected to deleteriously affect neurons. However, the cell culture was performed under conditions of hyperglycemia (the glucose concentration in Neurobasal A medium, Invitrogen, is 25 mM), i.e., we used a high glucose concentration. An additional increase in the glucose concentration (by addition of azido-modified glucose, Figure 6) can cause stress and be the reason for the increased expression of Glut1, as reported for adult neural stem cells [41].
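As a quick sanity check on the concentration argument above, the supplemented dose can be compared with the quoted physiological and culture-medium glucose levels. This is a simple illustrative calculation of ours, not from the paper:

```python
# Compare the azido-glucose supplement with the glucose levels quoted in the text.
supplement_uM = 50.0            # azido-modified glucose added to cultures (µM)
brain_glucose_mM = (1.0, 2.5)   # approximate brain glucose range (mM)
medium_glucose_mM = 25.0        # Neurobasal A medium glucose (mM)

supplement_mM = supplement_uM / 1000.0            # convert µM to mM
fold_below_brain = brain_glucose_mM[0] / supplement_mM
fold_below_medium = medium_glucose_mM / supplement_mM

print(supplement_mM)      # ≈ 0.05 mM
print(fold_below_brain)   # ≈ 20: at least 20-fold below brain glucose
print(fold_below_medium)  # ≈ 500: 500-fold below the culture-medium level
```

The numbers make the paper's point concrete: the labeling dose is small even relative to the lower bound of brain glucose, and negligible against the hyperglycemic culture medium.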
Synthesis and Characterization of Bioorthogonal Reporters
N-azidoacetylmannosamine (ManNAz) and N-azidoacetylglucosamine (GlcNAz) were prepared according to a method described by Bertozzi et al. (SI, Scheme S1) [10,36]. Briefly, hydrochloride of D-aminocarbohydrate (1.0 mmol) was added to azidoacetic acid (1.37 mmol) in methanol (10 mL). After dissolution, triethylamine (0.34 mL, 2.43 mmol) was added and the reaction mixture was stirred for 5 min at room temperature (RT). The solution was cooled to 0 °C and hydroxybenzotriazole (HOBt, 0.135 g, 1.0 mmol) was added first, followed by 1-ethyl-3-(3-dimethylaminopropyl)carbodiimide (EDAC, 0.383 g, 2.0 mmol). The mixture was warmed to RT and the reaction proceeded overnight. Next, the solution was concentrated, and the residue was eluted with water over AG 50WX8 resin and AG 1-X2 resin. After concentration, the residue was further purified by silica gel chromatography, eluting with CHCl3-MeOH. Of note, the purification of azido derivatives from the ammonium salt is a critical step. We have carried out the reaction of D-aminocarbohydrate with chloroacetic anhydride and used NaOH as a base (SI, Scheme S2). This approach was successfully applied for GlcNAz. In this case, a hydrochloride of D-glucosamine (1.0 mmol) was added to a suspension of NaOH (1.0 mmol) in MeOH (3 mL). The mixture was stirred at RT for 5 min and filtered. Triethylamine (0.93 mmol) and chloroacetic anhydride (4.6 mmol) were added to the filtrate. The reaction mixture was stirred for 24 h at RT. The solvent was removed and column chromatography was applied for partial purification of the compound, eluting with a gradient of CHCl3:MeOH (20:1 to 7:1). The resulting oil was dissolved in DMF (3 mL). NaN3 (3.0 mmol) was added to the solution and the reaction mixture was heated at 80 °C for 2 h. The solvent was removed and a second column chromatography purification was applied, eluting with a gradient of CHCl3:MeOH.
The obtained azides were further peracetylated to obtain Ac4ManNAz and Ac4GlcNAz. Acetic anhydride (2.0 mL) was added to a solution of the corresponding N-azidocarbohydrate in pyridine (2 mL) and the reaction mixture was stirred overnight at RT. The solution was concentrated, resuspended in CH2Cl2, and washed consecutively with 1 M HCl, saturated NaHCO3, and saturated NaCl. The organic phase was dried over Na2SO4, filtered, and concentrated. The crude material was purified by silica gel chromatography, eluting with hexanes-ethyl acetate (2:1, v/v). Further purification by reversed-phase HPLC (KNAUER, Berlin, Germany) was also performed using an Atlantis T3 5 µm column (Waters, Manchester, UK), 30 × 150 mm, eluting with a gradient of CH3CN and H2O.
Hippocampal Neuron Isolation, Characterization and In Vitro Culture
Brains from postnatal day 1 (PND1) mice were used to obtain neurons as previously described [41]. Briefly, hippocampi were dissected, under a conventional light microscope (SZX7, Olympus, Hamburg, Germany), into smaller fragments, trypsinized for 30 min at 37 °C and mechanically dissociated through a 2 mL pipette and a Pasteur pipette. After that, the hippocampal cells were washed 5 times with Hanks' balanced salt solution (HBSS) supplemented with 0.5% penicillin-streptomycin (Sigma-Aldrich, St. Louis, MO, USA), 10 mM HEPES solution and 1% sodium pyruvate (Invitrogen, Carlsbad, CA, USA) and re-suspended in minimum essential medium (MEM, Invitrogen) supplemented with 10% FBS, 0.5% glucose (Sigma-Aldrich), 0.5% penicillin-streptomycin, 2 mM L-glutamine and 1% MEM vitamins (Invitrogen). Hippocampal cells were plated on culture wells coated with poly-L-ornithine (Sigma-Aldrich), at a density of 50,000 cells/cm2 and left at 37 °C in a humid atmosphere (5% CO2) for 5 h. After this, the medium was changed to Neurobasal A (Invitrogen) supplemented with 0.5 mM L-glutamine and 2% B27 (Invitrogen). The culture medium was changed 24 h later to Neurobasal A with 2% B27, 1% newborn calf serum (Invitrogen), 0.5 mM L-glutamine, 0.03 µM uridine (Sigma-Aldrich), 0.07 µM FDU (Sigma-Aldrich) and 1 µM kynurenic acid (Sigma-Aldrich), to prevent the proliferation of cells undergoing mitotic division and to reduce enhanced synaptic transmission. The hippocampal neurons were maintained in culture for at least 14 days, in a humid atmosphere (5% CO2) at 37 °C. After 1, 3 and 14 days in culture, cells were probed for β actin, βIII Tubulin (neuron marker) and GFAP (astrocyte marker) and nuclei were counterstained with DAPI. For that, cells were fixed in 4% paraformaldehyde at RT for 20 min. Cells were incubated for 1 h at RT with the primary antibodies for GFAP (1:500, Dako, Golstrup, Denmark) and βIII-TUB (1:500, Millipore Iberica, Madrid, Spain) diluted in PBS.
Cells were washed and incubated with specific Alexa 488-conjugated or Alexa 594-conjugated secondary antibodies (Invitrogen) diluted in PBS (1:500) for 1 h at RT, according to the source and isotype of the primary antibodies. Cells were washed with PBS and incubated with 4′,6-diamidino-2-phenylindole (DAPI, 1:1000, Invitrogen) in PBS for 5 min at RT. Finally, cells were washed with PBS, and the glass coverslips were mounted in PermaFluor mounting medium (Thermo Fisher Scientific, Fremont, CA, USA). Fluorescence analysis and image capture were performed using a conventional (BX61; Olympus) or a confocal (FV1000; Olympus) microscope. Nuclei were counterstained with DAPI for 15 min. Imaging of the cells with labelled glycans was performed by confocal microscopy (Olympus).
Gene Expression of Carbohydrate Transporters by qRT-PCR
Mouse hippocampal neuronal cultures (P1) were supplemented with azido-modified carbohydrates at day 7 and 14. Cells were harvested 24 h after supplementation and mRNA was isolated for quantification of gene expression of glucose transporters (Glut1 and Glut3), UDP-glucose glycoprotein glucosyltransferases (Uggt1 and Uggt2), fucose transporter (solute carrier family 35, member c1, Slc35c1), protein O-fucosyltransferases (Pofut1 and Pofut2) and protein O-mannosyltransferases (Pomt1 and Pomt2). Total RNA was extracted from cells using the RNeasy® Plus Micro Kit (Qiagen, Hamburg, Germany), following the manufacturer's instructions. RNA quality and quantification were assessed in the NanoDrop® ND-1000 (Thermo Scientific, Massachusetts, USA) and 500 ng of RNA from each sample was reverse transcribed into cDNA using the iScript™ cDNA Synthesis Kit (Bio-Rad Laboratories, Hercules, CA, USA) following the manufacturer's instructions. Primers used to measure the expression levels of selected mRNA transcripts of Mus musculus by qRT-PCR were designed using the Primer3 software, on the basis of the respective GenBank sequences (Table S1). The reference gene hypoxanthine guanine phosphoribosyl transferase (Hprt) was used as the internal standard for normalization of the selected transcripts' expression. qRT-PCR was performed on a CFX 96™ real-time system instrument (Bio-Rad), with the QuantiTect SYBR Green RT-PCR reagent kit (Qiagen) according to the manufacturer's instructions, using equal amounts of cDNA from each sample. The cycling parameters were 1 cycle at 95 °C for 15 min, followed by 40 cycles at 94 °C for 15 s, annealing temperature (primer specific) for 30 s and 72 °C for 30 s, finishing with 1 cycle at 65 °C to 95 °C for 5 s (melting curve). Product fluorescence was detected at the end of the elongation cycle. All melting curves exhibited a single sharp peak at the expected temperature.
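The paper normalizes each transcript to the Hprt reference gene but does not spell out the calculation. A common choice for this kind of qRT-PCR data is the 2^-ΔΔCt (Livak) method; the sketch below illustrates that standard approach with invented Ct values, so the function and numbers are ours, not the authors':

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Standard 2^-ΔΔCt relative quantification (Livak method).

    ct_target / ct_ref:           Ct values in the treated sample
    ct_target_ctrl / ct_ref_ctrl: Ct values in the control sample
    """
    d_ct_treated = ct_target - ct_ref            # normalize to reference gene
    d_ct_control = ct_target_ctrl - ct_ref_ctrl  # same for the control
    dd_ct = d_ct_treated - d_ct_control          # normalize to control condition
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: Glut1 vs. Hprt, azido-glucose-treated vs. control.
fold_change = relative_expression(ct_target=22.0, ct_ref=20.0,
                                  ct_target_ctrl=23.0, ct_ref_ctrl=20.0)
print(fold_change)  # 2.0: Glut1 appears 2-fold up-regulated in this made-up example
```

A lower Ct means earlier amplification, i.e. more template; the exponent converts Ct differences back to fold changes assuming perfect doubling per cycle.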
Statistical Analysis
Values are reported as the mean ± standard error. Each condition was tested at least in triplicate in each independent experiment and the experiments were repeated twice. Statistically significant differences between groups were determined using one-way ANOVA, followed by Tukey's multiple comparison test. Values were considered statistically significant for p < 0.05 (*) and p < 0.01 (**).
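The one-way ANOVA used above reduces to comparing between-group and within-group variance. A minimal from-scratch sketch, using made-up triplicate values rather than the study's data:

```python
def one_way_anova_F(groups):
    """One-way ANOVA F statistic: between-group vs. within-group variance."""
    k = len(groups)                               # number of groups
    n = sum(len(g) for g in groups)               # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    ms_between = ss_between / (k - 1)             # mean square between groups
    ms_within = ss_within / (n - k)               # mean square within groups
    return ms_between / ms_within

# Hypothetical triplicate fluorescence intensities for three conditions.
ctrl, man, fuc = [1.0, 1.1, 0.9], [2.0, 2.2, 1.8], [1.0, 1.2, 0.8]
F = one_way_anova_F([ctrl, man, fuc])
print(F)  # ≈ 33.3 here; a large F suggests at least one group mean differs
```

In practice one would compare F against the F(k-1, n-k) distribution for a p-value and then run Tukey's HSD on the pairwise differences, e.g. via `scipy.stats` or `statsmodels`.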
Conclusions
The heterogeneity of glycans and their multiple regulatory roles, together with their importance in brain development, neuroregeneration and synaptic plasticity, strongly suggest that glycans are invaluable tools to characterize neuronal functions. Therefore, the development of methods to analyze the dynamics of glycan activity in neurons would be advantageous for decoding their neural functions. We have proposed a neurocompatible strategy to image the temporal distribution of glycoproteins during neuronal development using Cu-free click chemistry. This methodology unlocks new opportunities to study the dynamics of glycan activity in the nervous system, allowing their effects on neuronal function to be decoded. Its applicability to hippocampal tissue sections will allow in vivo tracking of glycan changes and understanding of the spatial distribution of glycans within nervous tissues.
Conflicts of Interest:
The authors declare no conflict of interest. | 2020-02-16T14:04:19.842Z | 2020-02-01T00:00:00.000 | {
"year": 2020,
"sha1": "ec083a3fbc32fdcd1578b877c24fbd1089291b74",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1420-3049/25/4/795/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fbf50d5e874d615633b5c4580abf1b300eab44cb",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
236344042 | pes2o/s2orc | v3-fos-license | Study on Risk Management Practices on Public Building Construction Project: In Case of Eastern Wollega Zone
This research work tries to study the risk management practice on public building construction projects and aims to identify the extent to which risk management is practiced, especially in public buildings. The data collection method was a combination of interview and questionnaire. Samples were purposively selected from clients', consultants' and contractors' representatives who are now actively participating in public building construction projects. For this study, the data were collected using both primary and secondary sources. Based on the data gathered from the respondents, the level of awareness, the risks that affect the performance of public building construction projects and the major risk management practices on public building construction projects were assessed, and the RII was used to rank the factors. The data were analyzed using SPSS version 22 to perform descriptive statistics. A total of 75 questionnaires were distributed, of which 50 (66.67%) were successfully returned. The findings from this study revealed that about 52% of the projects' progress is lagging behind schedule. Regarding awareness of risk management, 94% of the respondents confirmed that they are aware of risk management ideologies and are confident enough to implement their knowledge, while 6% of them have no concept of risk management. The top five risks that affect the performance of construction projects have been identified and ranked. Accordingly, market condition, unexpected inflation, local taxes, inadequate production of raw materials, and the economic condition of the country are the top five identified project risks.
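The RII ranking mentioned above is conventionally computed as RII = ΣW / (A × N), where W are the Likert weights assigned by respondents, A is the highest weight (here 5) and N is the number of respondents. The paper does not reproduce its raw scores, so the sketch below uses invented response counts purely for illustration:

```python
def rii(response_counts, max_weight=5):
    """Relative Importance Index for one risk factor.

    response_counts[i] = number of respondents choosing weight i + 1.
    RII = sum(W) / (A * N), with A = max_weight and N = total respondents.
    """
    n = sum(response_counts)
    weighted = sum((i + 1) * c for i, c in enumerate(response_counts))
    return weighted / (max_weight * n)

# Hypothetical 5-point Likert counts for three risk factors (50 respondents each).
factors = {
    "market condition":     [0, 2, 8, 15, 25],
    "unexpected inflation": [1, 4, 10, 20, 15],
    "local taxes":          [2, 6, 14, 18, 10],
}
ranked = sorted(factors, key=lambda f: rii(factors[f]), reverse=True)
print(ranked)  # factors ordered from highest to lowest RII
```

RII values fall between 1/A and 1, so factors from different surveys with the same scale can be compared directly; ranking then just sorts by RII.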
Introduction
A construction site is a very important place, as a considerable number of works are involved in construction project activities. Employment on a construction site can be categorized into three groups: skilled, semi-skilled and unskilled workforce.
Safety on a construction site needs to be highly considered in order to reduce the risk of workers being injured.
Also, safety is identified as one of the major factors affecting the image of the project manager and the organization (Grandjean). Risk factors cause consequences of different severity. If one does not consider these risk factors at all, or ignores the main ones, they will cause damage through decision-making errors. Quality targets, time targets and cost targets are the three major objectives of construction project management. Taking into account the above problems, the construction industry suffers from a misunderstanding of risk management, including risk identification, analysis and assessment, and that is why this research is important: it will uncover the risk factors in public building construction projects.
According to the parties involved in Ethiopian building construction projects, most projects are not completed in conformity with the original plan, i.e., they face various problems and changes that lead to delay, cost overrun or lower quality. This thesis tries to study risk management practice on public building construction projects in the Eastern Wollega zone and aims to assess the extent to which risk management is practiced there.
Methodology
Articles that had "construction safety, health hazards" as keywords were identified for this research. These papers have been published in journals, conference proceedings and technical reports on the respective official websites.
Risk Assessment Process
A risk assessment is a significant step in protecting your workers and your business, while complying with the law. It helps you to concentrate on the risks that arise on the work site, the ones with the potential to cause real harm. In many instances, direct measures can readily control risks, for example ensuring spillages are cleaned up promptly so people do not slip, or that storeroom drawers are kept closed so people do not trip. Mostly, this involves simple, inexpensive and effective actions to ensure that the most valued assets and workers are protected. We cannot eliminate all risk, but we are obliged to protect people as far as possible [6].
Identifying any hazards
First, we should recognize in which ways individuals might be affected. Many hazards are easy to control during daily work. In our project we found many hazards during our inspection of the construction site, such as: no personal protective equipment (PPE) (no hard hats, no gloves), bad fixing of mesh protection, extra rebar, availability of materials that contain silica (silicosis disease), etc. [1].
Also, the construction site was full of unnecessary items, including pieces of machines and grouped scaffolding materials, which make the workplace harmful for the employees and workers. In addition, none of the machines were guarded, and there were nip points in parts of the machines which may catch clothes and cut off fingers. The following are some steps to identify the hazards: employers must ask the workers or their representatives for their ideas; they might see things not directly obvious to employers.
Visit the HSE website, which provides guidance on where hazards occur and ways of controlling them. There is ample information there on the hazards which may impact the work. Otherwise, call the HSE Infoline, which identifies publications that may guide you, or contact Workplace Health Connect, a free advice service for managers and staff of small and medium-sized enterprises offering health and safety advice at the work site [1]. Members of a trade organization should get in touch with it; many produce really supportive guidance.
Check manufacturers' instructions or data sheets for equipment and chemical materials; these are very helpful in discovering the hazards and putting them in true perspective.
Look at accident and ill-health records; they are helpful in recognizing the hazards.
Never forget to think about long-term hazards to health (for example, excessive noise or exposure to harmful materials) together with safety hazards.
Identifying which hazards are likely to cause harm
After the hazards have been specified, a decision on what should be done about them is a must. The law obliges us to do everything 'reasonably practicable' to protect people from harm.
The easiest way is to compare our achievements with good practice. Some good practices have been recognized. In our inspection of the Uzun construction site, many hazards were discovered, and precautions against these hazards are essential. For instance, scaffolding materials should be maintained and barricaded. Also, all unnecessary pieces of machines and parts should be scrapped [8].
Identifying what control measures are already in place
For each hazard, who might be harmed should be taken into consideration; it helps you identify the best approach to dealing with the risk. Preparing checklists to make employees aware is essential. In our project we found that all staff responsible for carrying out the construction site work in the Uzun Company may be harmed and badly affected, such as skilled and manual labor: carpenters, electricians, heavy equipment operators, ironworkers, laborers, masons, plasterers, plumbers, pipefitters, sheet metal workers, steel fixers (also known as rod busters) and welders. Each group may suffer different dangerous injuries and illnesses; for example, a heavy equipment operator might suffer backbone damage from repeatedly operating and lifting heavy equipment.
According to the Health and Safety Executive, we should remember that many workers have particular requirements; for example, young workers, expectant mothers, and persons with disabilities may be at particular risk, and some hazards require extra attention [2].
Literature Review
Health hazards and risk factors associated with construction activities, identified from previous studies, are presented in this section. In addition, causes of poor safety practices and possible methods to improve safety practices are identified.
Zelalem Mebrate: Study on Risk Management Practices on Public Building Construction Projects: The Case of the Eastern Wollega Zone
Health Hazards and Risks
A hazard is a potential source of harm or an adverse health effect on a person. Risk is the likelihood that a person may be harmed or suffer adverse health effects if exposed to a hazard; risk can therefore be minimized even when a hazard is present. Two major hazards that are common on construction sites have been identified by [8].
Cause of Poor Safety Practice
Possible causes of poor safety practices were identified in relation to safety equipment, safety management, workers' safety attitudes, safety training, and other factors. Chemical hazards found in construction work include asbestos, welding fumes, spray paints, cutting oil mists, solvents, and others [7].
Overview of Project and Its Lifecycle
In defining a project, Larson and Gray (2011) described it as a temporary venture undertaken to create a specific product, service, or result, characterized by the following: a set objective, a time constraint, a budget constraint, desired performance criteria, and the engagement of distinct sectors and professionals. [3], on the other hand, describes a project as a chain of activities with a set start and end date and a specific objective to be realized within the constraints of time, cost, and resources.
Risks in the Project Life Cycle Cost
To make sure that everybody connected with a project understands what a risk is, one common definition should be drawn up for the purposes of the particular project. To quantify identified risks, [3] uses a tool in which the likelihood of each risk occurring is rated.
Risks are associated with every project and should be identified in order to avoid negative impacts on overall performance. Many problems faced in later phases of the project life cycle (PLC) result from risks left unmanaged in earlier stages, which shows how important it is to carry out accurate analysis, especially in the initial phase of a project. [8] perceives RM as a process that starts at project definition and continues through the planning, execution, control, and closure phases. However, a study by Lyons and Skitmore (2002) shows that planning and execution are the two phases in which RM is most widely used.
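The likelihood-rating tool attributed to [3] is, in essence, a standard probability-times-impact matrix. The sketch below illustrates that technique; the 1-5 scales, the category thresholds, and the sample risk register entries are illustrative assumptions, not values taken from the cited tool.

```python
# A minimal likelihood x impact risk-rating sketch. The 5-point scales
# and traffic-light thresholds are assumptions for illustration only.

def risk_score(likelihood: int, impact: int) -> int:
    """Both inputs are rated 1 (very low) to 5 (very high)."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("ratings must be between 1 and 5")
    return likelihood * impact

def risk_category(score: int) -> str:
    """Map a 1-25 score onto an assumed traffic-light category."""
    if score >= 15:
        return "high"      # needs an immediate response plan
    if score >= 8:
        return "medium"    # monitor and plan mitigation
    return "low"           # accept or keep watching

# Invented register entries: name -> (likelihood, impact)
register = {
    "unexpected inflation": (4, 5),
    "design changes": (3, 3),
    "minor site theft": (2, 2),
}
for name, (l, i) in sorted(register.items(),
                           key=lambda kv: -risk_score(*kv[1])):
    s = risk_score(l, i)
    print(f"{name}: score={s} ({risk_category(s)})")
```

Ranking risks by such a score is what allows a project team to focus mitigation effort on the few items that dominate the register.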
Discussions
Comparing health hazards across studies shows that, depending on the type of construction site, workers may be exposed to many health problems.
This part of the research deals with the analysis and discussion of the data collected from the questionnaire survey and interviews. The questionnaire was developed from the literature review of this study as well as from the research questions.
The studies reviewed show that developing countries pay little attention to both chronic and acute hazards, and that different contractors perceive health hazards differently.
Unexpected inflation was ranked 2nd among the overall group of factors and 1st within its own group according to the respondents' expectations. All respondents and interviewees believed that unexpected inflation is a risk affecting performance in public building construction projects, and 100% of respondents answered yes to the question of whether unexpected inflation affects risk management performance, with a relative importance index of 0.81. The increase of material prices (unexpected inflation) ranked 2nd not only within its group but also among all 59 factors, and its impact was rated very high by the respondents. As the discussions indicated, the increase (fluctuation) of material prices is a major factor affecting the performance of public building construction projects. In recent years in Ethiopia, as a result of successive economic growth, material prices on the market have been very unstable; the price of everything has greatly increased and continues to increase. The construction sector is one of the victims of this rise in input prices, and the main actors sustaining the damage at the front line are contractors. In other words, unexpected inflation is unpredictable, so contractors cannot easily determine how material prices will move, although the price fluctuation of some inputs can be fairly well predicted from other factors; this agrees with and is supported by the paper by [4]. To mitigate the problem of unexpected inflation, the government should review material prices from time to time, and during the contract agreement the contractor and client should take the duration of the project into consideration; details are given in the following figure.
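The relative importance index (RII) cited above is conventionally computed as RII = ΣW / (A × N), where W are the respondents' ratings, A is the highest possible rating, and N is the number of respondents. The sketch below illustrates the calculation with invented ratings; the study itself reports RII = 0.81 for unexpected inflation.

```python
# Relative Importance Index: RII = sum(W) / (A * N).
# The ratings below are invented for illustration.

def rii(ratings, highest=5):
    """RII on a 0-1 scale for a list of 1..highest ratings."""
    if not ratings:
        raise ValueError("need at least one rating")
    return sum(ratings) / (highest * len(ratings))

# 10 hypothetical respondents rating "unexpected inflation" on a 1-5 scale
ratings = [5, 4, 4, 5, 3, 4, 5, 4, 4, 5]
print(round(rii(ratings), 2))   # → 0.86
```

Factors are then ranked by their RII values, which is how a 2nd-place ranking among 59 factors is obtained.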
This study also examined the percentages of respondents who do and do not practice risk management in their organizations at different stages of the project cycle. Among the group that does not use risk management techniques, about 42.9% cited lack of awareness as their main reason.
Conclusions
The objective of this thesis was clearly stated in the introduction; to achieve it, factors were identified and a questionnaire was prepared and completed by contractors, clients, and consultants. The major findings already discussed were summarized, and the following conclusions were drawn. 1) From the survey data analysis, about 52% of projects lag behind schedule and are not completed within the given budget, since cost increases in proportion to time; on this basis it is possible to say that more than 50% of projects are behind schedule, and that various risk factors affect project progress accordingly. 2) The ways respondents became aware of risk management are: 52% first became aware of risk management practices through experience, 24% through study, 22% through workshops and training, and 2% through reading. 3) Most of the parties involved in Ethiopian building construction projects are aware of the concept of risk management, but only a relatively small number of them believe they have the knowledge needed to apply risk management techniques to make their projects successful. 4) The top five risks, rated by level of effect and frequency of occurrence, that affect the performance of construction projects are: market conditions, unexpected inflation, local taxes, inadequate domestic production of raw materials, and the economic condition of the country.
5) In total, 70% of respondents reported practicing risk management techniques at some stage of the construction project: 38% during the construction stage, 4% during the planning and design stage, 8% during tendering, and 22% at all stages of public building construction projects in the Eastern Wollega zone. 6) The remaining 30% stated that they do not use any risk management practice, for various reasons: 42.9% cited lack of awareness, 21.4% negligence, 14.3% expensiveness and time consumption, and 6% a scarcity of trained manpower.
Comparison of the Effect of Lidocaine Adding Dexketoprofen and Paracetamol in Intravenous Regional Anesthesia
Objective. Comparison of dexketoprofen and paracetamol added to the lidocaine in Regional Intravenous Anesthesia in terms of hemodynamic effects, motor and sensorial block onset times, intraoperative VAS values, and analgesia requirements. Method. The files of 73 patients between 18 and 65 years old in the ASA I-II risk group who underwent hand and forearm surgery were analyzed and 60 patients were included in the study. Patients were divided into 3 groups: Group D (n = 20), 3 mg/kg 2% lidocaine and 50 mg/2 mL dexketoprofen trometamol; Group P (n = 20), 3 mg/kg 2% lidocaine and 3 mg/kg paracetamol; Group K (n = 20), 3 mg/kg 2% lidocaine. Demographic data, motor and sensorial block times, heart rate, mean blood pressure, VAS values, and intraoperative and postoperative analgesia requirements were recorded. Results. Sensorial and motor block onset durations of Group K were significantly longer than other groups. Motor block termination duration was found to be significantly longer in Group D than in Group K. VAS values of Group K were found higher than other groups. There was no significant difference in VAS values between Group D and Group P. Analgesia requirement was found to be significantly more in Group K than in Group P. There was no significant difference between the groups in terms of heart rates and mean arterial pressures. Conclusion. We concluded that the addition of 3 mg/kg paracetamol and 50 mg dexketoprofen to lidocaine as adjuvant in Regional Intravenous Anesthesia applied for hand and/or forearm surgery created a significant difference clinically.
Introduction
Regional Intravenous Anesthesia (RIVA) was first applied by the German surgeon August K. G. Bier in 1908, and the technique was named the Bier block [1]. RIVA is generally preferred for patients undergoing upper extremity surgery because of advantages such as a bloodless surgical field, rapid onset and offset of the anesthetic effect, no need for deep sedation or general anesthesia, and ease of application [2,3]. Ketorolac, tenoxicam, paracetamol, clonidine, muscle relaxants, and opioids have been added to local anesthetic agents as adjuvants in RIVA to increase block quality, reduce tourniquet pain, provide postoperative analgesia, and reduce the dose of local anesthetic administered [4-7].
Although its molecular mechanism is not well understood, intravenous paracetamol (Perfalgan) is used for mild and moderate postoperative pain. It is a nonopioid analgesic that reduces the amount of opioid needed for severe pain [8-10].
Dexketoprofen trometamol is a nonselective NSAID with analgesic, antipyretic, and anti-inflammatory properties, whose parenteral form was developed in 2003 [11].
In the present study, we aimed to retrospectively compare the lidocaine-paracetamol and lidocaine-dexketoprofen combinations in terms of sensorial and motor block onset and recovery times, block quality, preoperative and postoperative vital signs, and intraoperative and postoperative analgesia requirements, based on the files of patients who underwent hand and/or forearm surgery under RIVA at our university.
BioMed Research International
Once the study protocol was approved by the ethics committee of Karadeniz Technical University in accordance with the Helsinki Declaration (date: 26.11.2012, meeting no.: 2012/125, resolution no.: 02), the anesthesia records of the patients were selected and the patients were enrolled in the study. Adult patients who had undergone routine examination (anamnesis and physical examination) and were classified as ASA I or II according to the preoperative physical status classification of the American Society of Anesthesiologists were included. The anesthesia records and hospital archive records of 73 patients between the ages of 18 and 60 who received regional intravenous anesthesia (RIVA) were examined; 13 patients were excluded because they did not meet the study criteria, and the data of the remaining 60 patients were analyzed.
Exclusion criteria were (i) analgesic drug treatment in the previous 24 h, (ii) history of allergy to study medications, (iii) any neurological deficit in the upper extremities, and (iv) the presence of any contraindications to IVRA.
Age, gender, ASA, operation duration, and tourniquet periods were recorded from hospital archive files and anesthesia records.
It was observed from the files that premedication with 0.15 mg/kg midazolam (IM) was given before surgery, and that RIVA was applied while mean arterial pressure, heart rate, and peripheral oxygen saturation were monitored.
The patients were divided into the following groups according to the medications used for RIVA procedure.
Records of the patients in the three groups were examined. The tourniquet pressure was kept 100 to 150 mmHg above systolic arterial pressure, or at 250 to 300 mmHg; the study medications were administered over 90 seconds; sensorial block was assessed by a pinprick test every 30 seconds, with sensorial examination of the antebrachial, radial, ulnar, and median nerve dermatomes; and motor block was assessed with the Modified Bromage Scale (MBS) by asking the patients whether they could voluntarily move their wrist and fingers. Sensorial and motor block onset and recovery times were recorded, together with mean arterial pressure (MAP), heart rate, and pulse oximeter oxygen saturation (SpO2), and these records were evaluated.
It was noted that the VAS (Visual Analog Scale) and the Ramsay sedation scale were used for pain and sedation measurements before tourniquet inflation; at the 5th, 10th, 20th, and 30th minutes after it; and at the 5th, 10th, 15th, and 30th minutes and the 1st and 2nd hours after the tourniquet was released. Furthermore, intraoperative and postoperative analgesic requirements were examined; 1 µg/kg fentanyl was administered when the intraoperative VAS exceeded 3. Patients whose pain persisted postoperatively were given a 500 mg oral paracetamol (Parol) tablet, and those whose pain was still persistent a 50 mg tramadol (Contramal) tablet. Interviews were conducted with the patients after discharge, with questions on operative comfort, quality, and incision pain. Side effects such as nausea, vomiting, dyspeptic complaints, skin rash, and tinnitus were examined from the hospital archive files and anesthesia records.
Statistical analysis was carried out using the Statistical Package for the Social Sciences (SPSS) for Windows, Release 13.0. The chi-square test was used for comparison of qualitative data; conformity of measured data to the normal distribution was assessed with the Kolmogorov-Smirnov test, and Student's t-test was used for normally distributed data and the Mann-Whitney U test otherwise. Repeated-measures analysis of variance or the Friedman test was used for comparison of measurements repeated over time. Measured data are expressed as mean ± standard deviation and count data as percentages. The significance level was accepted as p < 0.05.
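The nonparametric branch of the analysis described above, the Mann-Whitney U test, is based on rank sums. The sketch below computes the U statistic in pure Python with invented VAS-like data; the original analysis was performed in SPSS, and in practice p-values would come from a statistics package rather than this hand-rolled code.

```python
# Hedged sketch of the Mann-Whitney U statistic (rank-sum form, with
# average ranks for ties). Sample data are invented for illustration.

def ranks(values):
    """Average ranks (1-based); tied values share the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # mean of ranks i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def mann_whitney_u(a, b):
    r = ranks(list(a) + list(b))
    r1 = sum(r[: len(a)])              # rank sum of the first sample
    u1 = r1 - len(a) * (len(a) + 1) / 2
    u2 = len(a) * len(b) - u1
    return min(u1, u2)

# e.g. hypothetical VAS scores of two groups
print(mann_whitney_u([2, 3, 3, 5], [6, 7, 7, 8]))   # → 0.0
```

A small U indicates that one group's values are systematically lower than the other's; the test statistic is then compared against tabulated or approximated critical values.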
Results
No difference was detected between the groups in terms of age, gender, ASA, operation durations, and tourniquet periods (Table 1).
No significant difference was found between the groups in terms of intraoperative and postoperative time values, heart rates, and mean arterial pressure values.
There was no statistically significant between-group difference in type of surgery (Table 2).
Sensorial block onset durations in Group K were significantly longer than in the other groups (p < 0.05). There was no significant difference between Groups D and P in sensorial block onset times, and no significant difference existed between the groups in sensorial block recovery times (Table 3).
Motor block onset durations in Group K were significantly longer than in the other groups (p < 0.05). There was no significant difference between Groups D and P in motor block onset times. Motor block recovery time was significantly longer in Group D than in Group K (p < 0.05) (Table 3). VAS values in Group K were higher than in the other groups (p < 0.05). There was no significant difference in VAS values between Group D and Group P (Figure 1).
Intraoperative analgesia requirements were significantly greater in Group K than in Groups P and D; intraoperative analgesia was required by 8 patients in Group K and 4 in Group D. Postoperative analgesia requirements were likewise significantly greater in Group K than in Groups P and D; postoperative analgesia was required by 9 patients in Group K and 5 in Group D (Figure 2).
It was also found during follow-up that one patient had a skin rash and two patients had bradycardia. There was no significant difference between the groups (p > 0.05).
Discussion
Regional intravenous anesthesia is a common regional anesthesia method for upper extremity surgery. The addition of 3 mg/kg paracetamol or 50 mg dexketoprofen to the local anesthetic agent as an adjuvant in RIVA improved block characteristics when compared with patients to whom no adjuvant agent was added.
In RIVA, adverse events may occur as a result of passage of the local anesthetic agent into the systemic circulation during the intraoperative period, and these complications may rarely be fatal. We attempted to reduce the quantity and concentration of local anesthetic and to find the lowest dose that can still produce effective anesthesia, in order to reduce systemic toxicity. Different adjuvant medications have been added to local anesthetic agents to maintain sufficient anesthesia at low concentrations and doses.
In the literature, the local anesthetic agents most commonly preferred for RIVA are prilocaine and lidocaine [12-14].
In the study conducted by Fahim et al., sensorial and motor block onset time was found to be shorter in the group where sufentanil was added to lidocaine; however, dizziness was detected following tourniquet opening [15].
There are studies indicating that the addition of dexamethasone may prolong sensorial and motor block in RIVA. Bigat and Boztug, in a RIVA study with the steroid dexamethasone, chosen in view of the inflammatory steps in pain pathophysiology, found that 8 mg dexamethasone added to 3 mg/kg lidocaine increased anesthesia quality and provided significant analgesia on the first postoperative day [19].
Sen et al. concluded in their RIVA study adding lornoxicam to 3 mg/kg lidocaine that sensorial and motor block onset times were shorter, sensorial and motor block recovery times were longer, the time to first analgesic requirement for tourniquet pain was longer, and total analgesic consumption was reduced in the group where lornoxicam was added to lidocaine (L-IVRA) compared with the other groups (control and L-IV) [20].
When the literature was examined, studies in which paracetamol or dexketoprofen was added to local anesthetic agents in RIVA were found to be rare [7, 21-23], and there is no study in the literature in which these two adjuvants were compared.
There is only one study in the literature in which dexketoprofen was used as an adjuvant in RIVA. Yurtlu et al. [23] found, in their study of dexketoprofen added to lidocaine in RIVA, that sensorial and motor block onset times were shorter, recovery times were longer, intraoperative analgesia requirements were lower, intraoperative and postoperative VAS values were lower, and there was no difference in hemodynamic values. Similarly, in our study, motor and sensorial block onset times were shorter in patients receiving dexketoprofen than in the group without an adjuvant, and the need for intraoperative analgesia and the VAS values were likewise lower [23].
There are three studies in the literature where paracetamol was used as adjuvant in RIVA.
Ko et al. [7] reported in their RIVA study, in which 300 mg of intravenous paracetamol was added to 0.5% lidocaine, that sensorial block onset time was shorter in the paracetamol group than in the control group, that there was no difference in postoperative sensorial block recovery times, that the intraoperative analgesia requirement was lower, and that intraoperative and postoperative VAS values were lower. Similarly, in our study, motor and sensorial block onset times were shorter in patients receiving paracetamol than in the group without an adjuvant, and the need for intraoperative analgesia and the VAS values were likewise lower.
In another study conducted by Celik et al. [22] through the addition of 200 mg of intravenous paracetamol to lidocaine (3 mg/kg), it was reported that there was no difference between sensorial and motor block onset and return times; furthermore, requirement of intraoperative analgesia was less. Similarly, we also found that the intraoperative analgesia requirement was less.
Sen et al. [21] did not find any difference between sensorial block onset time, motor block onset, and return time between the group where paracetamol was added and the control group in their RIVA study where they added 300 mg of intravenous paracetamol into lidocaine (3 mg/kg), and they reported that postoperative sensorial block return time was longer in the paracetamol group and intraoperative analgesia requirement was less.
When the studies are assessed for adverse events and complications, no adverse events were detected in the study by Ko et al. [7], while Sen et al. [21] reported nausea in three patients. During follow-up in our study, a rash was observed in one patient and bradycardia in two; we could not find any significant difference in side effects between the groups.
There is no study in the literature comparing the addition of paracetamol and dexketoprofen to lidocaine in regional intravenous anesthesia with respect to motor and sensorial block times, intraoperative analgesia requirements, hemodynamic monitoring, and side effects. We analyzed the anesthesia records and hospital records of patients for whom 50 mg of dexketoprofen or 3 mg/kg paracetamol was added to 3 mg/kg lidocaine in regional intravenous anesthesia.
According to our results, and in line with previous studies, the addition of 50 mg dexketoprofen or 3 mg/kg paracetamol to 3 mg/kg lidocaine shortened sensorial and motor block onset times and prolonged motor and sensorial block recovery times compared with patients to whom no adjuvant was added. It also reduced the intraoperative analgesia requirement and lowered intraoperative and postoperative VAS values, with no significant difference in hemodynamic parameters. No significant difference was found when the two adjuvant groups were compared with each other.
Consequently, it was found that the addition of paracetamol or dexketoprofen to lidocaine in regional intravenous anesthesia for hand and/or forearm surgery did not create a significant difference between the two adjuvants; clinically, however, both were more successful than lidocaine without an adjuvant.
State-specific monoclonal antibodies identify an intermediate state in epsilon protein kinase C activation.
Evaluation of the activation state of protein kinase C (PKC) isozymes relies on analysis of subcellular translocation. A monoclonal antibody, 14E6, specific for the activated conformation of epsilonPKC, was raised using the first variable (V1) domain of epsilonPKC as the immunogen. 14E6 binding is specific for epsilonPKC and is greatly increased in the presence of PKC activators. Immunofluorescence staining by 14E6 of neonatal rat primary cardiac myocytes and the NG108-15 neuroblastoma glioma cell line, NG108-15/D2, increases rapidly following cell activation and is localized to new subcellular sites. However, staining of translocated epsilonPKC with 14E6 is transient, and the epitope disappears 30 min after activation of NG-108/15 cells by a D2 receptor agonist. In contrast, subcellular localization associated with activation, as determined by commercially available polyclonal antibodies, persists for at least 30 min. In vitro, epsilonRACK, the receptor for activated epsilonPKC, inhibits 14E6 binding to epsilonPKC, suggesting that the 14E6 epitope is lost or hidden when active epsilonPKC binds to its RACK. Therefore, the 14E6 antibody appears to identify a transient state of activated but non-anchored epsilonPKC. Moreover, binding of 14E6 to epsilonPKC only after activation suggests that lipid-dependent conformational changes associated with epsilonPKC activation precede binding of the activated isozyme to its specific RACK, epsilonRACK. Further, monoclonal antibody 14E6 should be a powerful tool to study the pathways that control rapid translocation of epsilonPKC from cytosolic to membrane localization on activation.
Several isozymes of protein kinase C (PKC), lipid-dependent protein kinases, are present within a single cell, each mediating unique intracellular functions. Studies using conventional or confocal microscopy reveal a complex and specific localization of PKC isozymes in their inactive as well as their active state (1-4). Most isozymes are localized to unique sites prior to cell stimulation, and translocate upon activation to new distinct intracellular sites. PKC isozyme localization is determined by binding to specific anchoring molecules termed RACKs (receptors for activated C-kinase) (5). Two RACKs have been identified and characterized to date: the βIIPKC-specific RACK (RACK1) (6) and the εPKC-specific RACK (εRACK), also known as β′COP (7).
Many aspects of PKC activation are not fully understood. Are conformational changes associated with PKC activation? Does lipid binding precede RACK binding? In order to answer these and other questions, it is necessary to develop new "statespecific" reagents to distinguish between active and inactive individual PKC isozymes.
εPKC is an isozyme that regulates many cellular functions (8-14). In cardiac myocytes, εPKC mediates cardioprotection from an ischemic episode (9, 11, 14-16), cardiac hypertrophy (17, 18), regulation of the L-type calcium channel (19, 20), and regulation of contraction rate (10, 21). The V1 domain of εPKC is involved in the binding of activated εPKC to its RACK in these cells (7, 10, 22). Because binding to RACKs occurs only after activation of εPKC (7, 23, 24), we expected that a conformational change in this domain must occur upon activation. We therefore predicted that some antigenic determinants on the V1 domain should be exposed only following εPKC activation and that some of the antibodies raised against V1 might recognize the active state of εPKC and be isozyme-selective.
We report here the production and characterization of an isozyme-selective monoclonal antibody (mAb) to εPKC that is specific for the activated form of this isozyme. Activation is required to expose the epitope for this antibody, but 14E6 no longer binds to the activated enzyme when the latter is bound to εRACK. Together, these studies suggest the existence of a previously unidentified state in PKC activation: a transient, activated, but non-anchored state. Immunostaining with 14E6 should help in identifying cells in tissues where εPKC has been activated by a physiological trigger. In addition, 14E6 will be a useful marker to follow the pathway of εPKC translocation and help elucidate the mechanism involved in this pathway.
EXPERIMENTAL PROCEDURES
Materials-Recombinant δPKC, εPKC, and γPKC produced in Sf9 cells were obtained from PanVera (Madison, WI). Phorbol myristate acetate (PMA) was from Alexis (San Diego, CA), and phosphatidylserine (PS) and dioleoylglycerol (DG) were purchased from Avanti (Alabaster, AL). PKC was partially purified from rat brain by DEAE-cellulose chromatography.
* This work was supported by Grants AA11147 (to D. M.-R.) and AA010030 (to I. D.) from the National Institutes of Health. This research was also supported by funds provided by the State of California for medical research on alcohol and substance abuse through the University of California, San Francisco (to I. D., A. S. G., and D. M.-R.). The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.
Preparation of Recombinant Proteins-The V1 regions of rat δ and εPKC (amino acids 2-144 and 2-145, respectively) were expressed as fusion proteins with MBP (maltose-binding protein; pMAL-c2 vector, New England BioLabs) in Escherichia coli. Vectors were also constructed expressing the C-terminal half of εRACK as an MBP fusion protein (MBP-εRACK-C, amino acids 425-905), MBP-LacZ, and MBP-βIIPKC-V5 (amino acids 622-673). MBP fusion proteins were purified by affinity chromatography on amylose resin according to the New England BioLabs protocol. Where indicated, cleavage of the fusion protein and removal of MBP were carried out with 5 µg/ml Factor Xa (New England BioLabs) for 24 h at 4°C.
Monoclonal Antibody Production-Purified εPKC-V1 was incubated with 120 µg/ml PS, 4 µg/ml DG, and εRACK-C for 15 min at room temperature (PS and DG in ethanol were dried under nitrogen; 20 mM Tris, pH 7.5, was added to bring PS to 1.2 mg/ml and DG to 40 µg/ml; and the mixture was sonicated on ice with a Branson Sonifier at 20% output at setting 2.5, using 3 cycles of 1 min each, with 30-s cooling periods on ice between cycles). Eight-week-old BALB/c mice (Harlan) were injected subcutaneously three times at 3-4-week intervals with 10 µg of εPKC-V1 and 10 µg of εRACK-C with PS and DG, emulsified with Freund's adjuvant. Four days after the final boost, splenocytes (10^8) were fused with 2 × 10^7 murine myeloma Fox-NY cells (from Thomas A. Stamey) according to Kohler and Milstein (26). Cells were grown in HAT-containing medium as described (25), and positive cultures were subcloned three times. Antibodies were obtained either as ascites in BALB/c mice or as supernatant in serum-free medium in roller bottles, and purified by precipitation with 50% ammonium sulfate followed by gel filtration chromatography on Sephacryl S-300. SDS-PAGE revealed the expected 75- and 25-kDa heavy and light chains at approximately 95% purity. Immunoglobulin isotypes were determined with a kit from BD Pharmingen.
ELISA Binding Assays-Recombinant MBP fusion proteins and PKCs were incubated at 3-5 μg/ml in PBS in 96-well ELISA plates for 2 h at room temperature or overnight at 4°C. Plates were washed three times with PBS plus 0.05% Tween-20 (PBS/Tween), blocked with 1% BSA in PBS for 2 h (this and subsequent incubations were at room temperature), and washed as above. 100 μl of antibody was added, and the plate was further incubated for 1 h, followed by three washes with PBS/Tween. 100 μl of HRP-goat anti-mouse IgG (Sigma Chemicals and Jackson Laboratories, West Grove, PA; diluted 1:5000 in PBS containing 1% BSA) was added and incubated for 1 h, followed by three washes with PBS/Tween. 100 μl of O-phenylenediamine and 0.03% H2O2 in 100 mM citrate buffer, pH 6, was added to assess HRP activity; after 15 min, 50 μl of 2 N H2SO4 was added to stop the reaction and absorbance at 490 nm determined. To determine whether binding of εRACK to εPKC affects the subsequent binding of antibodies, 5 μg/ml MBP-εRACK-C was added to immobilized MBP-εPKC-V1 for 1 h at room temperature, followed by three washes with PBS/Tween, prior to antibody addition.
ELISA for Analysis of εPKC Activation-Activation of εPKC was carried out by incubation of 1-5 μg of PanVera Sf9 εPKC in a final volume of 100 μl containing 20 mM Tris, pH 7.5, 60 μg/ml PS, and 2 μg/ml DG, at room temperature for 5 min. The mixture was diluted to 10 ml with 20 mM Tris, pH 7.5, and bound to ELISA plates (Costar 3590) at 100 μl per well for 1 h. Plates were washed (this and all subsequent washes were carried out three times with PBS/Tween), blocked with 5% nonfat dry milk powder in PBS/Tween for 45 min, washed again, incubated with antibodies in PBS/Tween for 1 h at room temperature, and washed again. Plates were incubated with anti-mouse IgM-horseradish peroxidase conjugate (Pierce Endogen 31440) for 1 h, washed, and developed with 3,3′,5,5′-tetramethylbenzidine (Sigma T3405; 1 mg per 10 ml of 50 mM sodium phosphate/25 mM citric acid, pH 5.5, with 2 μl of 30% hydrogen peroxide). The reaction was stopped with 50 μl of 2 M H2SO4 and absorbance determined at 450 nm.
Cardiac Myocyte Preparation-Primary cardiac myocytes were prepared from the hearts of 1-day-old Sprague-Dawley rats by gentle trypsinization as described (27). Non-myocytes were removed by preplating for 30 min. Myocytes were plated at 800 cells/mm^2 in M-199 medium (Invitrogen, Life Technologies, Inc.) with 0.1 mM bromodeoxyuridine, 2 μg/ml vitamin B12, 100 units/ml penicillin, 100 μg/ml streptomycin sulfate, and 10% fetal bovine serum (Hyclone, Logan, UT) in 1% CO2. After 24 h, the culture medium was changed to the above medium containing 80 μM ascorbic acid, and cells were cultured an additional 72 h. Myocytes were then incubated in a defined medium (M-199 with penicillin and streptomycin, 2 μg/ml vitamin B12, and 10 μg/ml each of insulin and transferrin) for 24 h, after which experiments were initiated.
NG108-15 Cell Culture-NG108-15 cells were cultured as described (28), and maintained in a serum-free defined medium for 6 days prior to immunofluorescence or immunoprecipitation experiments. Experiments with the dopamine receptor agonist NPA were carried out with NG108-15 cells stably transfected with the dopamine D2 receptor, NG108-15/D2 (29).
Immunofluorescence-Cardiac myocytes were plated on glass cover slips or in chamber slides coated with 1.2 μg/ml laminin from mouse sarcoma cells (Invitrogen, Life Technologies, Inc.). Following the indicated treatments, cells were fixed for 3 min on ice in methanol/acetone (1:1, −20°C). Slides were washed three times with cold PBS, blocked with PBS plus 0.1% Triton X-100 (PBS/Triton) containing 1% normal goat serum, and incubated with primary antibodies overnight at 4°C in PBS/Triton containing 1% normal goat serum. After three washes with PBS/Triton, cells were incubated with fluorescein-labeled goat anti-mouse IgM, anti-mouse IgG, or anti-rabbit IgG (Organon Teknika Corp., Durham, NC) in PBS/Triton containing 1% normal goat serum for 2 h at room temperature, washed three times with PBS/Triton, and mounted in Vecta Shield (Vector Labs, Burlingame, CA). Fluorescence microscopy was carried out with a ×40 water immersion objective or by confocal microscopy. NG108-15/D2 cells were fixed with methanol and confocal microscopy carried out as described (28). These fixation conditions were found optimal to conserve antibody epitopes and cell structure.
Immunoprecipitation of NG108-15 Cell Lysates-For immunoprecipitation with 6E4 antibody, 2 μg of 6E4 was incubated with 30 μl of packed volume of protein G beads (Invitrogen) overnight at 4°C. For immunoprecipitation with 14E6 antibody, 2 μg of 14E6 in PBS was added to 30 μl of packed volume of anti-mouse IgM agarose (Sigma) overnight at 4°C. Antibody-bound beads were then washed twice with PBS and blocked with 3% BSA for 2 h at 4°C. Triton-soluble material was diluted 5-fold in homogenization buffer to dilute the Triton X-100 to 0.2%, then precleared with protein G or anti-IgM beads for 30 min at 4°C, incubated with antibody-bound beads overnight at 4°C, and subsequently washed four times with PBS containing 0.1% Triton X-100. Bound material was eluted with SDS sample buffer, resolved by 8% SDS-PAGE, transferred, and probed for εPKC (anti-V5, Santa Cruz) and εRACK (monoclonal antibody 4B12).
Immunoprecipitation of Rat Brain PKC-For immunoprecipitation, 5 μg of each monoclonal antibody was added to 50 μl of goat anti-mouse IgM (Jackson Laboratories) conjugated to Sepharose beads using cyanogen bromide-activated Sepharose 4B (Amersham Biosciences), or to 50 μl of protein G-Agarose beads (Invitrogen, Life Technologies, Inc.), and rocked for 2 h at 4°C. The beads were blocked with 3% BSA in PBS for 1 h at 4°C, and washed three times with PBS/0.05% Tween. Approximately 30 μg of rat brain PKC (partially purified by DEAE-cellulose and gel filtration chromatography, Ref. 17) was diluted in 20 mM Tris, pH 7.4, with 20 mg/ml soybean trypsin inhibitor, 20 mg/ml aprotinin, and 10 mg/ml phenylmethylsulfonyl fluoride (protease inhibitors) without EDTA or EGTA, and incubated for the indicated times at room temperature with 120 μg/ml PS and 4 μg/ml DG. Brain PKC was then added to the beads and the incubation continued for 2 h at 4°C, followed by washing the beads three times with PBS/0.1% Triton X-100.
RESULTS
Our purpose was to generate mAbs that distinguish between active and inactive εPKC. The V1 domain of this isozyme was chosen as the immunogen because it contains a RACK binding site (7, 10) that is exposed only after εPKC activation. We therefore expected that certain epitopes in this domain would be exposed only following activation of εPKC and thus would be recognized by antibodies specific for the activated enzyme. Mice were injected with recombinant protein comprising the V1 region of rat εPKC (amino acids 2-145). Hybridoma culture supernatants were screened by ELISA for binding to MBP-εPKC-V1.
14E6 Competes with εRACK for Binding to εPKC-V1-Because the 14E6 binding site on εPKC is within the V1 domain (Fig. 1), the same domain that contributes significantly to the binding of εPKC to full-length εRACK (7), we next determined whether the 14E6 epitope overlaps with the εRACK binding site. We used recombinant MBP fusion proteins containing either the εV1 domain, MBP-εPKC-V1 (εPKC 2-145), or the C-terminal half of εRACK, MBP-εRACK-C (β′-COP 425-905), which contains the εPKC binding site. Both 14E6 and the control anti-εPKC-V1 mAb, 6E10, bound to MBP-εPKC-V1 (Fig. 2, A and B, solid bar and Fig. 1), and did not bind to MBP-εRACK-C or to MBP (Fig. 2, A and B, striped bars). However, when εPKC-V1 and εRACK-C were preincubated before addition of 14E6, the binding of 14E6 to this complex (Fig. 2A, open bar) was 70% lower than to εPKC-V1 alone (Fig. 2A, filled bar), suggesting that binding of εRACK to εPKC prevents the binding of 14E6 to the enzyme. In contrast, binding of 6E10 to εPKC-V1 was not significantly decreased by concurrent εRACK-C binding (Fig. 2B, open bar), making it unlikely that the inhibition of 14E6 binding to εPKC by εRACK was due to a nonspecific steric effect. Our data indicate that the epitope for 14E6 overlaps with or is close to an εRACK binding site on εPKC-V1 and predict that 14E6 will not bind to complexes of active εPKC with εRACK in vivo.
In Vitro Activation of εPKC with PS and DG Increases Binding by 14E6-PKC is activated in vivo through binding of lipid activators such as PS and DG. To determine whether 14E6 or 6E10 could distinguish between the active and inactive forms of εPKC, recombinant εPKC was activated by addition of PS and DG, and 14E6 or 6E10 binding was assessed by ELISA. Antibody binding to εPKC is expressed as the ratio of binding in the presence of lipid activators to binding in the absence of lipid activators (Fig. 3). Binding of 14E6 was increased almost 3-fold by inclusion of lipid activators with 10 ng of εPKC (Fig. 3). The concentration of enzyme appeared to be critical for optimal sensitivity in this assay. When 25 or 50 ng of εPKC was used, binding of 14E6 was already high (data not shown) and was minimally affected by lipid activation of the enzyme (Fig. 3). This explains the binding of 14E6 to εPKC in the absence of activators shown in Fig. 1, since the standard ELISA protocol used over 300 ng of protein per well. The increased binding to εPKC in the presence of lipid activators when limiting amounts of εPKC are used suggests that the 14E6 epitope on the enzyme becomes exposed upon activation of εPKC. We also found that the binding of mAb 6E10 to activated εPKC is slightly but significantly lower than binding to inactive εPKC (Fig. 3). Taken together, our data suggest that mAb 14E6, but not 6E10, specifically recognizes the active form of PKC in an ELISA assay.
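The ratio plotted in Fig. 3 is simple arithmetic; the sketch below (Python, with invented absorbance readings rather than the paper's raw data) shows how the fold-change is formed for a 14E6-like and a 6E10-like response.

```python
# Hypothetical A490 readings; illustrative only, not the paper's raw data.
def fold_change(a490_with_lipids, a490_without_lipids):
    """Ratio of antibody binding with vs. without PS/DG lipid activators."""
    return a490_with_lipids / a490_without_lipids

# A 14E6-like response at limiting enzyme: ~3-fold increase on activation.
print(fold_change(0.90, 0.30))
# A 6E10-like response: slightly reduced binding to the activated enzyme.
print(fold_change(0.76, 0.80))
```

A ratio well above 1 at limiting enzyme is the signature of an activation-exposed epitope; a ratio near or below 1 indicates an epitope that is unchanged or partially masked by activation.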
We next determined whether 14E6 recognizes the active form of εPKC in solution using an immunoprecipitation assay. When a partially purified preparation of PKC from rat brain was incubated with 14E6, little εPKC was immunoprecipitated (Fig. 4A). However, incubation of brain PKC with the activating lipids PS and DG for 3 or 6 min at room temperature resulted in significant immunoprecipitation of εPKC by 14E6 (Fig. 4A). These activators increase the catalytic activity of calcium-independent PKC in the brain PKC preparation by over 10-fold as determined by phosphorylation of myelin basic protein (data not shown). In contrast, 6E10 immunoprecipitated more εPKC in the absence of PKC activators (Fig. 4B), and commercial antiserum to the C terminus of εPKC-V5 immunoprecipitated equal amounts of enzyme regardless of the presence of PKC activators (data not shown). The level of δPKC in the 14E6 immunoprecipitate was very low, indicating the specificity of 14E6 for εPKC (Fig. 4A). In addition, SDS electrophoresis of the supernatants of the immunoprecipitates demonstrated that degradation of εPKC did not occur during the experiment (Fig. 4A, upper blot), indicating that the differences in immunoprecipitation observed in Fig. 4 reflect differences in epitope exposure in the various samples. Therefore, 14E6 is a unique antibody; it specifically recognizes an antigenic determinant on εPKC that becomes exposed following activation by PS and DG.
The 14E6 Epitope Is Exposed Following Stimulation of NG108-15/D2 Cells-To determine whether 14E6 identifies activated εPKC in cells, we used a rat/mouse neuroblastoma × glioma cell line stably expressing the dopamine D2 receptor, NG108-15/D2. These cells respond to the dopaminergic agonist NPA with a robust translocation of εPKC (29). We compared staining with 14E6 and with commercial εPKC monoclonal antibodies (raised against the last 15 amino acids in the V5 domain of εPKC) as a function of time following cellular activation. Immunofluorescence confocal microscopy with the commercial anti-εPKC mAbs revealed an NPA-induced translocation from the nucleus and a narrow perinuclear region to a broader perinuclear/Golgi distribution as well as to cytosolic sites (Fig. 5A). In contrast, very little staining with 14E6 was observed in resting cells (Fig. 5B, left panel), but staining was induced by NPA treatment, yielding signal in broad perinuclear and Golgi regions (Fig. 5B); there was little cytosolic staining, however.
With the commercial anti-εPKC mAb, translocation was not obvious until 5 min after exposure to NPA, and persisted at 30 min. In contrast, 14E6 staining of NG108-15/D2 cells increased within 1 min after addition of NPA, was maximal by 10 min, and was low again by 30 min. The level of exposure of the 14E6 epitope in these cells is very low, as the signal from 14E6 (Fig. 5B) was amplified 10 times compared with the signal from the commercial antibody (Fig. 5A). Notably, 14E6 staining is not observed in the nucleus either before or after NPA is added, in contrast to staining with the commercial anti-εPKC antibody. These results suggest that 14E6 does not recognize inactive εPKC in the nucleus of NG108-15/D2 cells. Translocation of εPKC, as determined by the commercial antibodies, persisted at 30 min (Fig. 5A), when activated enzyme was no longer detected by 14E6 (Fig. 5B), suggesting that exposure of the epitope for 14E6 is more transient than is localization to sites usually associated with activated enzyme. Further, 30 min after treatment of cells, the commercial anti-εPKC antibody (Fig. 5A), but not 14E6 (Fig. 5B), stained the cell cytosol peripherally.
We also carried out immunoprecipitation experiments on the solubilized particulate fraction of lysates from NG108-15/D2 cells that were incubated with NPA over the same time course as the immunofluorescence studies. Western blots of immunoprecipitation studies with monoclonal anti-εPKC indicate that the total amount of εPKC in the particulate fraction increases with time of incubation with NPA, as expected (Fig. 5C). In contrast, 14E6 only immunoprecipitates a protein recognized by antibodies to εPKC at the 1 min time point (Fig. 5D), supporting the transient nature of the 14E6 epitope indicated in the immunofluorescence studies (Fig. 5B). The exact time of appearance of the 14E6 epitope in the immunofluorescence and immunoprecipitation studies cannot be compared since the cells are fixed immediately for the former and must be lysed and fractionated after the indicated time of incubation with NPA before immunoprecipitation can be carried out. Nevertheless, the immunoprecipitation studies support the conclusion of the immunofluorescence studies that the 14E6 epitope only occurs transiently after activation of the D2 receptor.

FIG. 4. 14E6 binding to εPKC in vitro increases following activation of the enzyme with PS and DG. A, partially purified PKC from rat brain (~100 units/mg) was incubated at room temperature for 3 or 6 min with or without PS and DG (upper blot) and immunoprecipitation by 14E6 was carried out as described under "Experimental Procedures." The immunoprecipitated material was analyzed for εPKC and δPKC by Western blot (lower blots). The molecular weight of δPKC was confirmed using the brain extract (data not shown). B, partially purified PKC was incubated at room temperature for 5 min with or without PS and DG, immunoprecipitated by either 14E6 or 6E10, and analyzed for εPKC by Western blot.
Both the immunofluorescence and immunoprecipitation data indicate that the anti-εPKC antibodies recognize activated εPKC at 30 min. However, there is no 14E6 staining at this time. One possible interpretation of these data is that εPKC is active but bound to εRACK and therefore cannot bind 14E6 (Fig. 2). We therefore incubated the blots from the immunoprecipitation experiments with antibodies to εRACK. We found that εRACK is co-immunoprecipitated by the anti-εPKC antibody (Fig. 5C), but not by 14E6 (Fig. 5D). Taken together, our data suggest that the 14E6 epitope becomes inaccessible when εPKC is bound to εRACK in cells. To further investigate this possibility, we used neonatal cardiac myocytes, where the localization of activated εPKC is very characteristic and more easily discernible (3,4).
Activation of Cardiac Myocytes with Phorbol Ester Results in Induction of the Epitope for 14E6-Our previous studies in resting cardiac myocytes using commercially available anti-εPKC antibodies raised against the V5 region of εPKC localized εPKC to the nucleus, with only some cells showing εPKC at the perinucleus and in cross-striated structures in the cell body (3,4). After activation, εPKC is localized to cross-striated structures and the perinucleus in most cells. This suggests that inactive εPKC is found in the nucleus and activated εPKC in the perinucleus and cross-striated structures (4). The localization of εPKC in resting and activated cardiac myocytes was compared using commercial polyclonal anti-εPKC antiserum and the εPKC-V1 antibody, 14E6. In agreement with our published observations, when cells were stained with the commercial anti-εPKC antibodies, the ratio of perinuclear and cell body staining to nuclear staining increased upon activation of the cells with PMA (Fig. 6A, top panels). In contrast, staining with 14E6 was always non-nuclear, consisting of perinuclear, punctate, and cross-striated patterns in the cell body (Fig. 6A, middle panels). In addition, there was an increase in the intensity of 14E6 immunofluorescence staining in PMA-treated cells compared with that in unstimulated cells (left versus right middle panels, Fig. 6A). 14E6 staining was not seen when the antibody was pre-incubated with 1 mg/ml MBP-εPKC-V1 (Fig. 6A, lower panels), indicating again that 14E6 is specific for the V1 domain of εPKC. Both the extranuclear localization of the 14E6 epitope and the increase in staining following activation of PKC suggest that 14E6 is specific for activated εPKC. Moreover, as was observed in NG108-15/D2 cells stimulated with a dopaminergic agonist, translocation of εPKC observed with 14E6 could be detected in myocytes before that seen with the polyclonal antibody and it was more transient (not shown). Taken together, these data suggest that 14E6 is selective for activated εPKC, recognizing a transient form of the activated enzyme.

FIG. 5. The epitope for 14E6 is induced by NPA treatment of NG108-15/D2 cells. NG108-15/D2 cells were treated with 50 nM NPA for the indicated times. Cells were fixed and stained with a commercial monoclonal anti-εPKC-V5 antibody (A) or 14E6 (B) and analyzed by confocal microscopy. Laser power was 10-fold higher for B. The false color images represent staining intensity (black, green, yellow, orange, in order of increasing intensity). C, Triton-soluble lysates were prepared from NG108-15/D2 cells treated with 50 nM NPA for the indicated times and immunoprecipitated using a monoclonal antibody for εPKC. D, Triton-soluble lysates were prepared as in C and the 14E6 monoclonal antibody was used for immunoprecipitation. Both C and D were probed for εPKC using a commercial rabbit polyclonal antibody against εPKC-V5 and a monoclonal antibody against εRACK.

FIG. 6. 14E6 stains cardiac myocytes in an activation-specific pattern. A, cardiac myocytes were untreated (control) or treated with 100 nM PMA for 10 min, fixed, and immunofluorescence localization of εPKC determined with either a polyclonal anti-εPKC-V5 antibody or with 14E6. In the lower panel, 1 mg/ml MBP-εPKC-V1 was added with 14E6 to block specific staining. B, cardiac myocytes were treated with 1 nM PMA or 1 nM PMA plus the εPKC agonist ψεRACK for 1 min.
We recently identified a peptide activator of εPKC. This peptide, ψεRACK, is thought to disrupt an intramolecular interaction within εPKC and thus expose both the RACK binding site and the catalytic site, rendering the enzyme active (9,30). Addition of this peptide to cardiac myocytes causes selective εPKC translocation and function (9). In addition, it increases the function of εPKC in the presence of suboptimal levels of PMA. We predicted that if 14E6 recognizes activated εPKC, we should see an increase in staining when cells are treated with ψεRACK. As seen in Fig. 6B, cardiac myocytes treated for 1 min with 1 nM PMA had immunostaining levels with 14E6 that were not different from those in control-treated cells (compare with Fig. 6A, middle left panel). However, in the presence of ψεRACK and 1 nM PMA, immunostaining with 14E6 was similar to that seen with fully activated εPKC (compare Fig. 6A, middle right panel, and Fig. 6B, right panel). These data support our hypothesis that 14E6 specifically recognizes the active state of εPKC.
14E6 Recognizes Active but Not RACK-associated εPKC in Cells-The ELISA data shown in Fig. 2 and the co-immunoprecipitation experiments shown in Fig. 5D support our hypothesis that following cellular activation, binding of εPKC to εRACK in cells prevents binding of 14E6 to εPKC, thus leading to the transient appearance of the 14E6 epitope observed in NG108-15/D2 cells (Fig. 5). To determine whether εRACK binding to εPKC precludes 14E6 binding to εPKC in cardiac myocytes, we used confocal microscopy to assess co-localization of εPKC and εRACK in cardiac myocytes. Because both 14E6 and the only anti-εRACK antibodies available are mouse IgM mAbs, we first examined simultaneous staining of εPKC with a rabbit polyclonal antibody (Fig. 7A) and anti-εRACK mAbs (Fig. 7B). In resting cells, there is very little overlap between εRACK and εPKC as stained with the polyclonal εPKC antibodies (data not shown; see also Refs. 5, 7, 10). However, after activation most of the εPKC co-localizes with εRACK (Fig. 7C, yellow). After even brief and mild activation, only small areas of unique staining for polyclonal anti-εPKC (red staining) remain. This implies that most of the εPKC stained by the polyclonal antibody is also bound to εRACK. If binding of activated εPKC to εRACK precludes 14E6 binding, then the 14E6 epitope should not co-localize with polyclonal anti-εPKC staining in activated cells. Therefore, we next examined simultaneous staining of εPKC using the polyclonal anti-εPKC (Fig. 8, left, red) and 14E6 (Fig. 8, right, green). After activation, although both antibodies indicated translocation of εPKC to cross-striated structures (see also Fig. 5), we observed very few areas where the cross-striated staining by the two antibodies merged (Fig. 8, merged). In most areas, there was alternating green-red staining. These data are consistent with the existence of at least two populations of activated εPKC in the cross striations of cardiac myocytes.
One is εPKC that is stained by the polyclonal anti-εPKC antibody, which is co-localized with εRACK and makes up the overwhelming majority of εPKC. The second is a transient and small population of εPKC that is stained by 14E6. We propose, therefore, that the commercial polyclonal antibodies stain the cross-striated structures by binding to activated εPKC bound to its RACK; 14E6 recognizes a transient, activated, but non-anchored state of εPKC that has not yet reached the site of εRACK.
DISCUSSION
Monoclonal antibody 14E6, raised against the V1 domain of εPKC, appears to recognize an epitope exposed only after activation of the enzyme. First, binding of 14E6 to εPKC in ELISA or immunoprecipitation experiments is increased in the presence of lipid PKC activators when using limiting amounts of the enzyme (Figs. 3 and 4). Second, immunofluorescence localization of εPKC indicates that the 14E6 epitope is induced upon activation of cardiac myocytes by PMA, by the selective εPKC activator peptide, ψεRACK (Fig. 6), or by stimulation with norepinephrine (data not shown); in NG108-15/D2 cells, staining with 14E6 is only observed after activation of the D2 receptor. Third, in cardiac myocytes (Figs. 6 and 8) and in NG108-15 cells (Fig. 5), 14E6 does not stain subcellular compartments where inactive εPKC is localized. In cardiac myocytes, for example, activation with PMA leads to an increased ratio of extranuclear to nuclear εPKC (Fig. 6), supporting our earlier observation (4) that nuclei contain a pool of inactive εPKC that translocates to extranuclear structures upon activation. 14E6 staining of PMA-treated cardiac myocytes is exclusively non-nuclear, localized to sites near those where activated εPKC is found (4), including a perinuclear ring, a punctate pattern in the cell body, and cross-striated structures.

FIG. 8. Cardiac myocytes were treated with 3 nM PMA for 1 min. Immunostaining was carried out with anti-εPKC-V5 polyclonal antibodies (red, left) and with 14E6 (green, right). Merge of the images (middle panel) suggests lack of co-localization of staining with the two antibodies as indicated by the distinct alternate red and green staining (arrows) and the almost complete absence of yellow in these panels. (Note that the intensity of staining and the ability to merge two images to give a yellow image if they are co-localized depends on the intensity of each staining; it is therefore limited to detect overlap in localization of the majority of each staining.)
Importantly, 14E6 only stains the activated but non-anchored enzyme, since staining by 14E6 of activated εPKC does not overlap with staining of εRACK, where activated εPKC is anchored (Fig. 8).
The V1 domain of εPKC was used as the immunogen for raising activation-specific antibodies because it contains an activation-specific binding site for εRACK (7,22). The activation-specific epitope for 14E6 on εPKC appears to be within or near the binding site for εRACK (Fig. 2), suggesting that 14E6 only recognizes activated but non-anchored εPKC. Indeed, the data in Figs. 7 and 8 show that 14E6 stains cross-striations distinct from those stained by both an antibody to εRACK and a polyclonal anti-εPKC antibody, supporting this conclusion. Activation by DG, ψεRACK, or PMA causes εPKC to unfold into an active conformation (30) that is recognized by 14E6. We propose that this conformational change unmasks the εRACK binding site in the V1 domain, leading to subsequent binding of the enzyme to εRACK, which in turn prevents the binding of 14E6.
Our data suggest a model of activation of εPKC where the enzyme has at least three states of activation: State I, inactive εPKC; State II, a lipid-activated transient state of εPKC to which 14E6 can bind; and State III, active, RACK-associated εPKC to which 14E6 cannot bind (Fig. 9). This model agrees with the time course of immunofluorescence and immunoprecipitation studies in NG108-15/D2 cells (Fig. 5). In the immunofluorescence experiments, the 14E6 epitope appeared within 1 min following treatment with NPA, before translocation of the majority of the εPKC from a distinct perinuclear localization to a broader perinuclear/Golgi and cytosolic localization (Fig. 5, A and B; step 1, Fig. 9). Thirty minutes after treatment with NPA, the 14E6 epitope disappeared (Fig. 5B; step 2, Fig. 9), whereas commercial anti-εPKC antibodies showed staining at sites where εPKC is bound to εRACK. The absence of 14E6 staining appears to be due to masking of the 14E6 epitope by interaction with εRACK (Fig. 2). The transient nature of the 14E6 epitope was confirmed by immunoprecipitation studies of the solubilized particulate fraction of NG108-15 cells; 14E6 staining was only observed 1 min after activation of the D2 receptor. Moreover, at no time point was εRACK co-immunoprecipitated with 14E6 antibodies (Fig. 5D). In cardiac myocytes, where the localization of εPKC is more distinct, 14E6 and εRACK do not appear to be co-localized, since 14E6 yields cross-striated staining similar to but distinct from the localization of εRACK or the εPKC-εRACK complex (Figs. 7 and 8). An elegant study by Walker and co-workers (22) using kinetic analysis of εPKC and εPKC-V1 translocation in cardiac myocytes demonstrated that εPKC translocation is controlled by the rate of intramolecular conformational changes within the V1 domain.
They also suggest that in vivo anchoring of PKC may involve sequential activation events: binding via the C1 domain, the phorbol ester- and DG-binding domain, and binding via the V1 domain, the RACK binding domain (22). A recent study from our laboratory using real-time imaging of GFP-tagged εPKC supports this conclusion (31).
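The three-state scheme can be illustrated with a minimal kinetic sketch. All rate constants below are invented for illustration and are not fitted to any data in this work; the point is only that when exit from State II (binding to εRACK) is faster than entry into it (lipid activation), State II is populated only transiently, matching the short-lived 14E6 signal.

```python
# Minimal Euler-integration sketch of the three-state scheme in Fig. 9.
# All rate constants are invented for illustration and are NOT fitted to data:
#   State I (inactive) --k1--> State II (lipid-bound, 14E6-positive)
#   State II --k2--> State III (RACK-anchored, 14E6-negative)
#   State III --k3--> State I (return to the inactive cytosolic pool)
def simulate(k1=1.0, k2=4.0, k3=0.1, dt=0.001, t_end=30.0):
    s1, s2, s3 = 1.0, 0.0, 0.0   # all enzyme starts in State I
    peak_s2, t = 0.0, 0.0
    while t < t_end:
        d1 = -k1 * s1 + k3 * s3
        d2 = k1 * s1 - k2 * s2
        d3 = k2 * s2 - k3 * s3
        s1, s2, s3 = s1 + d1 * dt, s2 + d2 * dt, s3 + d3 * dt
        peak_s2 = max(peak_s2, s2)
        t += dt
    return s2, s3, peak_s2

s2_end, s3_end, peak = simulate()
print(f"State II peak = {peak:.2f}; at end: State II = {s2_end:.2f}, State III = {s3_end:.2f}")
```

With k2 > k1, the State II population rises briefly and then drains into State III, which accumulates as the dominant species, qualitatively reproducing the early, transient 14E6 staining followed by persistent RACK-associated staining.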
In conclusion, 14E6, raised against the V1 domain of εPKC, appears to recognize a transient epitope formed upon activation of purified εPKC by lipid cofactors; this site is masked once the enzyme binds to its RACK. Therefore, this highly specific antibody has allowed us to identify an intermediate stage of εPKC activation, which results from lipid activation, and which is translocated from its site in non-activated cells, but has not yet reached its final localization site bound to εRACK. These data suggest that lipid binding precedes binding of the activated enzyme to εRACK, since 14E6 recognizes lipid-activated εPKC but not activated enzyme anchored to εRACK. Because there is little staining of 14E6 in non-stimulated cells, this antibody should be a useful diagnostic antibody to study acute activation of εPKC in tissues and in vivo.

FIG. 9. A model of εPKC activation. Shown are three stages of PKC activation: cytosolic inactive PKC (State I, red) anchors to membranes on elevation of DG (State II, yellow). This transient state (II) is selectively detected by 14E6 and is induced by PKC binding to DG in the presence of lipids (bold in lipid scheme). The activated enzyme then binds to εRACK, but is not detected by 14E6 (green, State III). This last state represents the active stable form of PKC. On activation (step 1), PKC translocates from the cytosol to the membrane. This activation state (detected by 14E6) is transient, and this lipid-bound PKC then binds to its RACK (step 2). Finally, by an as yet unknown mechanism, PKC detaches from its RACK and returns to the inactive state in the cytosol (step 3).
Gas Electron Multiplier (GEM) application for Time Projection Chamber (TPC) gating
ABSTRACT: A voltage-controlled Gas Electron Multiplier (GEM) can be used to block the re-injection of positive ions in large volume Time Projection Chambers (TPCs). Through an accurate choice of geometry, gas filling and external fields it is possible to obtain a sufficient level of electron transmission at very low GEM voltages (Gating GEM), despite the degradation of energy resolution due to the loss of primary electrons. The addition of a pre-amplification GEM in front of the Gating GEM improves the energy resolution while keeping the ion feedback at the level of primary ionization. The measurements show that a small pulse of about 40 V completely closes the gate, stopping the ions produced in the amplification stage.
Introduction
The Gas Electron Multiplier (GEM) [2] consists of a thin, copper-clad Kapton insulating foil perforated by a high-density, regular matrix of holes (50 to 100 per square millimetre). The distance between holes (pitch) is typically 140 µm and the hole diameter is about 70 µm. Upon application of a potential difference between the GEM electrodes, a high dipole field develops in the holes, focusing the field lines between the drift electrode and the readout element. Electrons drift along the channel and the charge is amplified by a factor that depends on the field intensity and on the length of the channel.
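The dependence of the gain on the applied voltage is often parameterized as roughly exponential, G = A·exp(B·ΔV). The sketch below uses invented placeholder constants (not values measured in this work) to illustrate why a GEM at a few tens of volts transmits or absorbs charge without multiplying it, while the same foil at a few hundred volts acts as an amplifier.

```python
import math

# G = A * exp(B * dV) is a common empirical parameterization of GEM gain;
# A and B below are invented placeholders, not values measured in this work.
A, B = 1.0e-2, 0.025  # dimensionless prefactor, per-volt slope

def effective_gain(dv_gem):
    return A * math.exp(B * dv_gem)

for dv in (10, 40, 300, 400):
    print(f"dV_GEM = {dv:3d} V -> effective gain ~ {effective_gain(dv):.3g}")
```

At gating-level voltages (tens of volts) this expression stays below unity, i.e. no multiplication, while at the several hundred volts typical of an amplification GEM it gives gains well above unity.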
Large volume gaseous Time Projection Chambers (TPCs) suffer from the build-up of ion space charge. The amplification structures (nowadays still MWPCs) amplify the primary electrons created by the interacting particle and consequently generate ions that slowly drift back into the drift volume, where they modify the electric field and thus change the time-space properties of the next track. Next-generation TPC endcaps may be equipped with Micro Pattern Gas Detectors (MPGDs) such as GEM and Micromegas ([4][5][6]). Gating electrodes are needed to prevent the re-injection of positive ions into the large gas volume; during the LEP (and LHC) period the wire-gating technique was extensively used. The next generation of gating structures can be made with MPGDs. When a GEM foil is powered at a very low potential difference (from 10 V up to 40 V) [1], it does not act as an electron amplifier. Its electron transparency (the ratio between the number of electrons that are able to pass through the GEM holes and the number of electrons present in front of the top GEM electrode) is reduced to a few tens of percent, depending on the applied potential difference, on the external fields, on the GEM geometry and on the chosen filling gas. A voltage-controlled GEM powered at a low potential difference can therefore be used to block the re-injection of positive ions in large volume Time Projection Chambers. A gated pulse that inverts the GEM potential difference stops all the ions produced in the amplification stages below the gating GEM. In this paper we investigated the basic physics processes underlying this application; real pulsed gating will be the subject of future work. Figure 1 shows the scheme of the small TPC. Two different configurations were studied: in the first one the first GEM foil was used as Gating GEM; in the second one another GEM foil (Preamplification GEM) was added in front of the Gating GEM in order to obtain a preamplification factor.
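The ion bookkeeping behind the gating argument can be sketched as follows. Every number here (primary count, stage gains, ion-blocking factor) is invented purely to illustrate the scaling: with the gate open, ions from the amplification stages vastly outnumber the primaries, while a gate passing only a small fraction of them brings the feedback down to roughly the primary-ionization level, as the abstract claims.

```python
# Back-of-envelope ion bookkeeping; every number here is invented to
# illustrate the scaling, none is a measured value from this paper.
def ions_into_drift(n_primary, stage_gains, ion_pass_fraction):
    """Ions reaching the drift volume per event for a gated multi-GEM stack."""
    total_gain = 1.0
    for g in stage_gains:
        total_gain *= g
    ions_produced = n_primary * total_gain   # ~one ion per amplified electron
    return ions_produced * ion_pass_fraction

n_prim = 100.0            # primary electrons from one track segment
gains = (20.0, 20.0)      # illustrative double-GEM amplification
gate_open = ions_into_drift(n_prim, gains, ion_pass_fraction=1.0)
gate_closed = ions_into_drift(n_prim, gains, ion_pass_fraction=0.0025)
print(gate_open, gate_closed)
```

With these placeholder values the closed gate returns the ion feedback to ~n_prim, i.e. the level of the primary ionization itself.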
The final amplification stage was a double GEM structure (Bottom GEMs). All the GEM foils used for the measurements are standard GEM foils (50 µm hole diameter, 140 µm pitch); the drift cathode was a metallized mylar plate and the anode was a full copper plane. The pulse-height spectra were acquired using an ORTEC 142-IH preamplifier and an ORTEC 450 research amplifier; the current was read out by means of a Keithley 6517A picoamperometer (∼ 1 pA resolution). The detector performance was studied by means of a collimated 8.9 keV X-ray beam impinging either orthogonally or parallel to the drift cathode. When the direction was orthogonal to the drift cathode, the X-rays could convert in three different detector gaps: in the drift gap (A region), in the transfer 1 gap (B region) or in the transfer 2 gap (C region); when the direction was parallel to the drift cathode, the conversions happened only in the drift gap (A region, see figure 8). The gas mixture used in all the characterization was Ar/CO2 70%/30%.
Gating GEM transparency measurements
When a low potential difference (from 5 V to 50 V) is applied to a GEM foil [1], it does not act as an amplification device; rather, it absorbs part of the charge that approaches it, and its electron transparency (ε GatingGEM, the ratio between outgoing and incoming electrons) is not 100% but depends on ∆V GEM and on the external electric fields [3]. Figure 2 shows that the measured electron transparency is around 30% for 10 V ≤ ∆V ≤ 40 V. The Gating GEM electron transparency was obtained as the ratio between the B pulse-height peak position and the C pulse-height peak position (see figure 3). The energy resolution is worsened by the presence of the Gating GEM (from 20% FWHM up to 60% FWHM); this effect is clearly visible in figure 3, which shows the pulse-height spectra acquired for different ∆V GatingGEM. To improve the energy resolution, another GEM foil (Preamplification GEM) was added before the Gating GEM. This is a useful way to preamplify the primary electrons before losing part of them in the gate electrode. A typical pulse-height spectrum obtained with the Preamplification GEM operational is shown in figure 4.
As shown in figure 5, the ∆V applied to the Preamplification GEM varies between 350 V and 430 V, giving a gain (measured as the ratio between the A-region and B-region pulse-height peak positions) between 2 and 5.5. This gain was also obtained through the optimization of the external electric fields (E T 1 , E T 2 ).
The aim of this exercise was to obtain a unity first-stage gain (G FirstStage = G PreampGEM × ε GatingGEM ) by adjusting ∆V PreampGEM. As shown in figure 6, this result was obtained with ∆V PreampGEM = 390 V. It was also proven that the addition of the Preamplification GEM (figure 5) improved the energy resolution by a factor of 2, from 60% FWHM down to 30% FWHM.
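The peak-ratio arithmetic behind these quantities (transparency from the B/C peak ratio, preamplification gain from the A/B peak ratio, and their product, the first-stage gain) can be sketched in a few lines of Python. The function names and the sample peak positions below are invented for illustration, not taken from the measurement:

```python
def transparency(ph_b, ph_c):
    """Gating GEM electron transparency: ratio of the B-region to the
    C-region pulse-height peak positions."""
    return ph_b / ph_c

def preamp_gain(ph_a, ph_b):
    """Preamplification GEM gain: ratio of the A-region to the
    B-region pulse-height peak positions."""
    return ph_a / ph_b

def first_stage_gain(ph_a, ph_b, ph_c):
    """First-stage gain G_FirstStage = G_PreampGEM * eps_GatingGEM;
    the B-region peak cancels, leaving the A/C peak ratio."""
    return preamp_gain(ph_a, ph_b) * transparency(ph_b, ph_c)

# Illustrative peak positions (arbitrary ADC units, invented numbers):
# a ~30% transparency combined with a preamplification gain of ~3.3
# gives a first-stage gain close to unity, as tuned in figure 6.
ph_a, ph_b, ph_c = 1000.0, 300.0, 1000.0
print(transparency(ph_b, ph_c))            # 0.3
print(preamp_gain(ph_a, ph_b))             # ~3.33
print(first_stage_gain(ph_a, ph_b, ph_c))  # ~= 1.0
```

Note that the first-stage gain reduces to the A/C peak ratio, so it can be extracted without the B-region spectrum at all; the intermediate ratios are kept here only because the text discusses them separately.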
Gating GEM voltage scan with preamplifier GEM
In order to prove the gating properties of the Gating GEM, a counting-mode ∆V GatingGEM scan was performed and the corresponding photon interaction rate was recorded. As shown in figure 7, the potential difference applied to the Gating GEM was varied from -20 V up to +20 V (positive values corresponding to the open-gate polarity): this scan demonstrated that the gate is completely closed with a very small potential difference, between -10 V and -5 V. This is a very promising feature for a high-rate pulsed gate in TPCs. The closed-gate plateau is not at the zero level because of X-ray conversions in the C region.
Preamplification GEM Ion back flow measurement
In a real TPC operation all the ions produced in the amplification stages below the gate will be stopped by closing the gate. Since the Preamplification GEM is placed above the gate, it is crucial to understand how many of the ions produced in this stage drift back into the large gas volume. The setup used to measure the Preamplification GEM ion feedback is shown in figure 8. Figure 9 shows that, in the scanned Preamplification GEM voltage range with a fixed drift field (E d ) value of 0.1 kV/cm, a Normalized Ion Feedback (NIF, the back-drifting ion current normalized to the primary ionization current) of around 2-3 was measured, of the same order of magnitude as the primary ionization current. The conclusion is that the ion feedback contribution introduced by the preamplification stage is not much higher than the primary ionization itself.
Amplification stage voltage scan
The experimental setup shown in figure 8 was used to check whether the Gating GEM works independently of the amplification stage. The NIF was measured while changing the potential difference on one of the two amplification GEMs (∆V BottomGEM2 ), keeping ∆V PreampGEM = 390 V fixed and the gate closed (∆V GatingGEM = −20 V). Figure 10 shows that the measured NIF stays at a value of 2-3: this confirms that the NIF is due only to the Preamplification GEM. If the gate is opened (∆V GatingGEM = +20 V), the point at ∆V BottomGEM2 = 400 V corresponds to a full-detector gain of about 3000. Figure 11 shows a ∆V GatingGEM scan that summarizes the prototype behaviour.
Full detector behaviour
When the gate is completely closed (∆V GatingGEM = −20 V) the overall gain (the ratio of the readout current to the ionization current, I readout /I ionization ) is zero (no electron can reach the amplification stage) and the NIF is 2-3, corresponding to the Preamplification GEM NIF. When the gate is open (∆V GatingGEM = +20 V) the overall gain is around 3000, and it can be increased further by raising one of the two amplification GEM voltages.
Conclusions
A standard GEM foil can be used as a gating electrode for a TPC when a low potential difference is applied to its electrodes. A small pulse (20 V-40 V) is sufficient to open and close the gate, giving the possibility of very high-rate pulsed gating. The measurements also prove that the energy resolution is improved by the addition of a properly operated Preamplification GEM in front of the Gating GEM, and that the Preamplification GEM does not contribute strongly to the Normalized Ion Feedback, NIF (Preamplification GEM NIF = 2-3). In addition, since all the ions produced in the last stage are stopped by the gate, the amplification-stage gain can be as high as required by the experiment.
Dependence of X-ray Burst Models on Nuclear Masses
X-ray burst model predictions of light curves and final composition of the nuclear ashes are affected by uncertain nuclear masses. However, not all of these masses are determined experimentally with sufficient accuracy. Here we identify remaining nuclear mass uncertainties in X-ray burst models using a one zone model that takes into account the changes in temperature and density evolution caused by changes in the nuclear physics. Two types of bursts are investigated - a typical mixed H/He burst with a limited rp-process and an extreme mixed H/He burst with an extended rp-process. When allowing for a 3$\sigma$ variation only three remaining nuclear mass uncertainties affect the light curve predictions of a typical H/He burst ($^{27}$P, $^{61}$Ga, and $^{65}$As), and only three additional masses affect the composition strongly ($^{80}$Zr, $^{81}$Zr, and $^{82}$Nb). A larger number of mass uncertainties remains to be addressed for the extreme H/He burst with the most important being $^{58}$Zn, $^{61}$Ga, $^{62}$Ge, $^{65}$As, $^{66}$Se, $^{78}$Y, $^{79}$Y, $^{79}$Zr, $^{80}$Zr, $^{81}$Zr, $^{82}$Zr, $^{82}$Nb, $^{83}$Nb, $^{86}$Tc, $^{91}$Rh, $^{95}$Ag, $^{98}$Cd, $^{99}$In, $^{100}$In, and $^{101}$In. The smallest mass uncertainty that still impacts composition significantly when varied by 3$\sigma$ is $^{85}$Mo with 16 keV uncertainty. For one of the identified masses, $^{27}$P, we use the isobaric mass multiplet equation (IMME) to improve the mass uncertainty, obtaining an atomic mass excess of -716(7) keV. The results provide a roadmap for future experiments at advanced rare isotope beam facilities, where all the identified nuclides are expected to be within reach for precision mass measurements.
INTRODUCTION
Type I X-ray bursts are frequently observed thermonuclear explosions on the surface of neutron stars that accrete matter from a nearby companion star (Schatz & Rehm 2006;Strohmayer & Bildsten 2006;Lewin et al. 1993;Parikh et al. 2013). The bursts are powered by nuclear reaction sequences that transform accreted hydrogen and helium into heavier elements via the 3α-reaction, which burns helium into carbon, the αp-process, a sequence of proton captures and (α,p) reactions, and the rapid proton capture process (rp-process), a sequence of proton captures and β + -decays (Wallace & Woosley 1981;van Wormer et al. 1994;Schatz et al. 1998;Schatz et al. 2001;Fisker et al. 2008;Woosley et al. 2004;José et al. 2010). Nuclear data on neutron deficient rare isotopes are needed to predict burst light curves that can then be compared with observations to constrain system parameters and neutron star properties (Heger et al. 2007;Galloway et al. 2004;Zamfir et al. 2012). Nuclear data are also needed to calculate the composition of the burst ashes to predict possible composition specific spectral signatures (Weinberg et al. 2006;Barrière et al. 2015;Kajava et al. 2016), and to predict the composition of the neutron star crust, which in turn influences heat transport properties as well as strength and distribution of various deep nuclear heating and cooling processes (Haensel & Zdunik 2008;Gupta et al. 2007;Schatz et al. 2014). This relates to observations of crustal cooling during quiescence in transiently accreting systems, which can provide unique insights into the stellar interior of neutron stars (Brown & Cumming 2009;Horowitz et al. 2015).
Nuclear masses play an important role in X-ray burst models as they define the location of the proton drip line and therefore the path of the rp-process (Schatz et al. 1998;Schatz 2006). Motivated by the data needs of X-ray burst models, a large number of mass measurements on very neutron deficient isotopes have been carried out by taking advantage of new radioactive beam production capabilities and advances in experimental techniques such as Penning traps (Clark et al. 2004;Rodríguez et al. 2004;Schury et al. 2007;Clark et al. 2007;Weber et al. 2008;Savory et al. 2009;Elomaa et al. 2009;Haettner et al. 2011;Fallis et al. 2011;Kankainen et al. 2012) and storage rings (Stadlmann et al. 2004;Yan et al. 2013). These techniques provide mass data with the required accuracy of better than 10-100 keV or about 1:10^6. Measurements have also reached beyond the proton drip line using β-delayed proton spectroscopy (Del Santo et al. 2014) and proton breakup (Rogers et al. 2011). In addition, improvements in nuclear theory enable the calculation of the masses of the most exotic rp-process nuclei for which the proton number exceeds the neutron number with an accuracy of about 100 keV (Brown et al. 2002). This approach uses predictions of Coulomb shifts to calculate masses from the measured mass of the less exotic mirror nucleus, where proton and neutron numbers are exchanged. Because of these developments, the need for global mass models has largely been eliminated, and as a consequence the uncertainties of nuclear masses along the rp-process are now rather well characterized, and correlations among uncertainties are much reduced. This paper takes advantage of the improved knowledge of nuclear masses to quantify the impact of the remaining mass uncertainties on X-ray burst models, and to provide guidance for future measurements that address them.
Nuclear masses m enter X-ray burst models directly in the form of proton capture Q-values, Q (p,γ) = m(Z, A) + m p − m(Z + 1, A + 1) (with nuclei of mass number A and charge number Z and proton mass m p ). Q (p,γ) values are used to calculate the (γ,p) photodisintegration rates λ (γ,p) from the (p,γ) proton capture rates < σv > (p,γ) via detailed balance,

λ (γ,p) = (2 G i /G f ) (µ k B T /2πℏ²)^{3/2} exp(−Q (p,γ) /k B T ) < σv > (p,γ) ,    (1)

with partition functions of initial and final nuclei G i and G f , reduced mass µ, and temperature T (Schatz et al. 1998). The ratio of the (γ,p) reaction rate to the reverse (p,γ) rate depends exponentially on Q (p,γ) and is therefore very sensitive to nuclear masses. The ratio strongly increases as proton captures reach more and more proton-rich nuclei with lower Q-values, and eventually becomes large enough to impede the proton capture process, which then has to wait for a slow β + decay to occur before proton captures can resume. Around these so-called waiting points (p,γ) and (γ,p) reactions compete with each other. In this situation the net reaction flow depends directly on the ratio of the competing rates and therefore on nuclear masses.
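The exponential Q-value dependence of the detailed-balance relation (Eq. 1) can be made concrete with a short sketch. Only the exp(−Q/kT) factor is evaluated, since the temperature-dependent prefactor cancels in the ratio of two rates with shifted Q-values; the numerical values in the usage line are illustrative, not tied to any particular nucleus:

```python
import math

K_B_KEV_PER_GK = 86.173  # Boltzmann constant in keV per 10^9 K

def reverse_rate_factor(q_kev, t9):
    """exp(-Q/kT) factor of the (gamma,p) rate in Eq. 1,
    for a Q-value in keV and a temperature in GK."""
    return math.exp(-q_kev / (K_B_KEV_PER_GK * t9))

def rate_ratio_shift(delta_q_kev, t9):
    """Factor by which the (gamma,p)/(p,gamma) rate ratio changes when
    the Q-value shifts by delta_q_kev; the prefactor cancels, leaving
    exp(+dQ/kT)."""
    return reverse_rate_factor(delta_q_kev, t9) ** -1

# At T = 1 GK, a 100 keV shift in a Q-value changes the
# photodisintegration-to-capture rate ratio by a factor of ~3.2,
# which is why even ~100 keV mass uncertainties matter near
# (p,gamma)-(gamma,p) equilibrium.
print(rate_ratio_shift(100.0, 1.0))
```

This is why the sensitivity studies below vary the masses by 3σ: near a waiting point the equilibrium abundance ratio, and hence the effective lifetime, inherits this exponential dependence.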
In principle, nuclear mass uncertainties can also enter X-ray burst models as part of reaction rate uncertainties. Reaction rates are for the most part not measured directly but calculated, and in many cases this requires the input of nuclear masses. These cases include reaction rates predicted using the Hauser-Feshbach statistical approach, which depends on reaction Q-values, and reaction rates calculated from resonance properties, if resonance energies are not determined directly but deduced from excitation energies and the reaction Q-value (Schatz 2006;Wrede et al. 2010). However, this uncertainty is included in previous sensitivity studies, which took into account reaction rate uncertainties due to all nuclear structure ingredients, including masses (Cyburt et al. 2016). This work complements that study and focuses exclusively on the additional direct dependence of X-ray burst models on nuclear masses via the Q (p,γ) factor of Eq. 1 in situations of competing forward and reverse rates. This effect has not been included in reaction rate sensitivity studies such as Cyburt et al. (2016), which vary forward and reverse reactions together according to detailed balance (Eq. 1) and keep Q-values fixed. Our approach neglects correlations between these additional mass uncertainties, and the reaction rate uncertainties. This is justified as during local (p,γ)-(γ,p) equilibrium, when the mass uncertainty contribution studied in this work is most important, reaction rates and their uncertainties become unimportant and vice versa.
The importance of nuclear masses for rp-process calculations was shown in Schatz et al. (1998), who performed constant temperature and density calculations with different mass models. A single-zone X-ray burst model similar to the one of the present study has been used to demonstrate the strong impact of nuclear mass uncertainties in specific cases, usually in the context of new mass measurements, including 68,70 Se (Savory et al. 2009), 105 Sn and 106 Sb (Elomaa et al. 2009), 65 As, 69 Br (Del Santo et al. 2014), and 45 Cr (Yan et al. 2013). Only one pioneering large-scale systematic study of mass uncertainties has been carried out before (Parikh et al. 2009), based on the 2003 Atomic Mass Evaluation (AME2003) (Audi et al. 2003). However, a large number of masses have been measured since that time. In addition, the study had some limitations that are overcome in this work: it used the post-processing approach, in which the impact of mass uncertainties on the temperature and density evolution is neglected and which therefore cannot determine the impact of mass uncertainties on burst light curves. The study was also limited to varying Q-values of less than 1 MeV.
METHOD
The sensitivity of X-ray burst models to nuclear masses is analyzed using a one-zone model (Schatz et al. 2001;Cyburt et al. 2016) that has been shown to predict nuclear processes, light curves, and final composition of the burst ashes with sufficient similarity to full 1D models to be useful to identify important nuclear uncertainties. X-ray bursts in nature show a broad range of characteristics depending on accretion rate, accreted composition, and neutron star properties. As a consequence, nuclear processes can differ significantly. In particular, the amount of hydrogen at ignition can vary, which strongly affects the extent of the rp-process. We use two different ignition conditions to span this range. Model A is characterized by a large initial hydrogen abundance of 0.66. Such ignition conditions would occur in a system that accretes low-metallicity material at a relatively high accretion rate and have been used in previous work to map out the possible extent of an rp-process in X-ray bursts (Schatz et al. 2001). The rp-process in model A reaches all the way to the Sn-Sb-Te cycle. Model B is identical to the model ONEZONE in Cyburt et al. (2016) and has been tailored through comparison with a full 1D model to represent the mixed hydrogen and helium bursts observed in GS 1826-24. In model B the main rp-process ends in the A = 60 − 64 range, with a weaker reaction flow reaching into the A = 80 region. Tab. 1 summarizes the pressure at ignition depth P, initial hydrogen (X) and helium (Y) mass fractions, and peak temperature T peak for both models. Nuclear reaction rates were taken from JINA reaclib V2.0 (Cyburt et al. 2010). Nuclear Q-values were calculated from atomic masses. Experimental atomic masses were taken from the Atomic Mass Evaluation AME2012 (Wang et al. 2012). Unknown atomic masses beyond the N = Z line were calculated from experimental masses of mirror nuclei using the Coulomb displacement energies from Brown et al.
(2002) and adding in quadrature an additional uncertainty of 100 keV. For the remaining nuclei with unmeasured masses, the mass extrapolations provided by AME2012 were used. Using these Q-values, reverse rates were calculated for a particular mass table using Eq. 1. For the mass variations, only (γ,p) reactions were recalculated, as these are the only cases where reactions in the forward and reverse directions compete, and the uncertainty on the ratio of forward and reverse reaction rates of interest here matters. We do not recalculate theoretically predicted forward rates < σv > (p,γ) with the modified masses.
For each X-ray burst model, the set of nuclei for which masses were varied was identified by requiring a net reaction flow integrated over the burst duration, either leading to or from the nuclide, of at least 10 −5 mole/g. The net reaction flow is defined as

F i→f = ∫ [ (dY i /dt) i→f − (dY f /dt) f→i ] dt,

where (dY i /dt) i→f is the abundance change of the initial nuclide i induced by the particular reaction under consideration that converts the initial nuclide i into the final nuclide f. For comparison, the integrated net reaction flow through the 3α reaction, which is the major bottleneck for creating seed nuclei and can therefore serve as a useful normalization, is of the order of 10 −2 mole/g. The number of selected nuclides N nuc is listed in Tab. 1. Because of the explicit calculation of reaction flows, (p,γ)-(γ,p) equilibria tend to result in erroneously large net reaction flows. This is a well-known problem (see for example Fig. 12 in Cyburt et al. (2016) and the unreasonably large flow from 59 Zn to 60 Ga), but it helps here as it ensures that nuclei involved in such equilibrium clusters are not missed even if true net flows are weak. This is important, as the influence of mass uncertainties is greatest for nuclei participating in such equilibria. For each of the important nuclei, two burst calculations were carried out, one with the mass increased by 3σ, and one with the mass decreased by 3σ. Light curve and final composition were then compared to determine the impact of the respective mass uncertainty. Differences in light curves are quantified by the maximum light curve ratio r LC among all time steps, either

r LC = max t [L up (t)/L down (t)] or r LC = max t [L down (t)/L up (t)],

depending on which one is larger than 1, with time steps t and luminosities L up and L down for the burst light curves obtained with a mass increase or decrease, respectively.
Prior to the comparison a time offset is applied to align the light curve peaks in time. This prevents small changes in ignition time, which simply shift the burst light curve in time and would be observationally irrelevant, from appearing as large discrepancies. However, small changes in light curve shape around the peak that lead to a small offset correction can still lead to a spuriously large r LC , especially during the steep burst rise. An example is the 36 Ca mass, which has a small effect on the late rise and the shape of the burst peak, resulting in a small r LC =1.2. However, the burst shape change results in a small 0.14 s time shift of the burst peak and, once one corrects for the shift, an artificially large r LC =3.5 results from the steep early burst rise. These issues are identified by visual inspection. We divide light curve impacts into two qualitative categories based on the judgement of the authors, following the approach of Clayton & Woosley (1974) and Cyburt et al. (2016). Category 1 impacts are those that are likely observationally relevant. We use the general criterion r LC >2.5, but make exceptions for smaller r LC values if the shape of the light curve is significantly changed beyond simple changes in burst duration. Category 2 impacts are small, but may become significant if uncertainties were larger. We use a threshold of r LC >1.7, below which we judge changes to be unlikely to be observationally relevant.
The final composition is summed by mass number to focus on composition changes during the burst, and not on changes along isobaric chains that may occur towards the end of the burst model calculation due to long lived decays and continuum electron captures. As a measure of impact r Comp = y up /y down is used, with y up and y down being the final abundances, summed by mass number, obtained with a mass increase and decrease, respectively. Only r Comp values where at least one of the final abundances is above 10 −5 are considered.
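A minimal sketch of the two impact metrics, r LC and r Comp, is given below. The interpolation-based peak alignment is an illustrative reading of the procedure described above, not the authors' actual implementation, and all variable names are invented:

```python
import numpy as np

def r_lc(t, l_up, l_down):
    """Maximum light curve ratio after aligning the two peaks in time.
    Returns the maximum over time of L_up/L_down or L_down/L_up,
    whichever exceeds 1."""
    # shift l_down so that its peak coincides with that of l_up
    shift = t[np.argmax(l_up)] - t[np.argmax(l_down)]
    l_down_aligned = np.interp(t, t + shift, l_down)
    ratio = l_up / l_down_aligned
    return max(ratio.max(), (1.0 / ratio).max())

def r_comp(y_up, y_down, floor=1e-5):
    """Final-abundance ratio per mass number A; only chains where at
    least one of the two abundances exceeds the floor are kept.
    Both inputs are dicts {A: abundance} over the same mass numbers."""
    return {a: y_up[a] / y_down[a] for a in y_up
            if y_up[a] > floor or y_down[a] > floor}
```

A sanity check of the alignment step: feeding r_lc two identical light curves that differ only by a time shift should return a ratio of essentially 1, since the shift is observationally irrelevant and is removed before comparison.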
RESULTS
The mass uncertainties σ impacting the light curve (when masses are varied by 3σ) for Model A, which has the most extended rp-process and therefore the largest number of relevant mass uncertainties, are listed in Tab. 2. Only four mass uncertainties have a major (category 1) effect on the light curve: 65 As, 66 Se, 80 Zr, and 91 Rh (Fig. 1). Two mass uncertainties, 62 Ge and 58 Zn, have an interesting effect on the shape of the early light curve cooling, and are therefore also classified as category 1 (Fig. 1). Two additional mass uncertainties have smaller (category 2) impacts: 82 Nb and 95 Ag (Fig. 2). There are three mass uncertainties not listed in Tab. 2 that produce negligible effects, barely noticeable in the graphs, and that are mentioned for completeness: 28 S (σ = 160 keV, r LC =1.5), 26 P (σ = 196 keV, r LC =1.3), and 86 Tc (σ = 298 keV, r LC =1.5) (Fig. 2).
Mass uncertainties that lead to more than a 20% abundance change in Model A are listed in Tab. 3. Fig. 3 shows an example. Mostly, the masses that affect the light curve strongly also affect the composition of the burst ashes significantly. However, there are many additional mass uncertainties listed in Tab. 3 that affect composition, but have only a negligible effect on the light curve (r LC < 1.3). This is simply a consequence of the fact that delay times at rp-process waiting points, where β + decay occurs, vary widely. Overall energy generation, and therefore the light curve, is controlled by the slowest waiting points. These waiting points control the overall reaction flow and therefore also tend to affect composition broadly. However, on top of this global effect, the abundance produced in an individual mass chain is strongly controlled by the local waiting point at that mass number.
Mass uncertainties affecting Model B, which is characterized by a moderate rp-process, are listed in Tab. 2 and Tab. 4. Only the mass uncertainties of 61 Ga and 27 P have a strong (category 1) effect on the light curve (Fig. 4). 65 As has a smaller impact (Fig. 4). The AME2012 uncertainty of the 31 Cl mass of 50 keV also has a small effect on the light curve (r LC = 2.0); however, the 31 Cl mass has recently been measured with 3.4 keV precision, eliminating this uncertainty (Kankainen et al. 2016). Ten mass uncertainties affect the composition of the burst ashes by more than 20% (Tab. 4); an example is shown in Fig. 5.
DISCUSSION OF LIGHT CURVES
Owing to the large number of precision mass measurements of very neutron deficient isotopes in the past decade, the number of remaining nuclear mass uncertainties that affect X-ray burst light curve predictions is now relatively small (see Tab. 2). For the typical mixed H/He burst of model B, the 38 keV uncertainty of 61 Ga affects the light curve most strongly (when varied by 3σ). This uncertainty may be underestimated. The 61 Ga mass was measured using the storage ring technique with an error of 55 keV. However, 61 Ga is at the upper end of the mass range covered, and systematic errors may be significant. The 38 keV uncertainty in AME2012 is derived from combining this result with the measurement of the 61 Ga electron capture Q-value by Weissman et al. (2002), who quote an uncertainty of 50 keV. However, errors quoted for β-endpoint measurements have often been demonstrated to be unreliable (for example Hager et al. (2006)). Given the importance of 61 Ga, the insufficient accuracy of the currently recommended mass value, and the potential of underestimated systematic errors, a new measurement of the 61 Ga mass with keV accuracy would be particularly important.
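The 38 keV adopted uncertainty quoted above follows from an inverse-variance combination of the two independent results. The standard formula, sketched below, roughly reproduces it (the function name is ours, and no correlations or scale factors of the evaluation are modeled):

```python
def combine(sigmas):
    """Uncertainty of an inverse-variance weighted average of
    independent measurements with uncertainties `sigmas`."""
    return sum(1.0 / s ** 2 for s in sigmas) ** -0.5

# Storage-ring result (55 keV) combined with the beta-endpoint Q-value
# measurement (50 keV) gives ~37 keV, close to the 38 keV adopted in
# AME2012; a single keV-level measurement would dominate both.
print(round(combine([55.0, 50.0])))  # 37
```

The point of the sketch is that the combined value is only marginally better than either input, so a new keV-accuracy measurement would reduce the uncertainty by more than an order of magnitude.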
Only two additional mass uncertainties, 27 P and 65 As, contribute to the light curve uncertainty in model B. 27 P governs the ratio of proton capture on 26 Si to the inverse (γ,p) photodisintegration on the thus-produced 27 P, and therefore the effective proton capture branch on 26 Si, one of several αp-process branch points where proton capture competes with (α,p) reactions (Schatz et al. 1999;Fisker et al. 2008;Cyburt et al. 2016). 65 As plays the same role for 64 Ge, one of the major waiting points in the rp-process where proton capture competes with β-decay.
For model A, which represents an extremely hydrogen rich burst, the ensuing rp-process extends beyond 64 Ge up to the Te region where masses are less well known. Consequently it is affected by more mass uncertainties. The most important mass uncertainties affecting the light curve are 65 As and 66 Se. 65 As affects the proton capture branch on the 64 Ge waiting point, similar to model B. However, at the higher temperatures reached in model A, (p,γ)-(γ,p) equilibrium is not only established between 64 Ge and 65 As, but also with 66 Se. Therefore, the 66 Se mass also affects the effective 64 Ge lifetime in the rp-process (Schatz et al. 1998).
Another mass uncertainty that strongly affects model A is 80 Zr. This is in part due to the large uncertainty of 1.49 MeV adopted for this isotope in AME2012, which is an artifact of including a low-accuracy experiment (Lalleman et al. 2001) in the compilation. At this level of uncertainty, theoretical models that have not been considered in the recommended mass should in principle become competitive. However, theoretical errors are difficult to quantify as 80 Zr lies in a region of strong deformation, and mass predictions for N = Z nuclei have the added complication of the need for a Wigner term that results in enhanced binding (see for example Goriely et al. (2010)). For all these reasons, a measurement of the 80 Zr mass with much improved accuracy would be important.
Two other important uncertainties for the light curve in model A are 58 Zn and 62 Ge. These nuclei play the same role for the 56 Ni and 60 Zn waiting points as 66 Se for the 64 Ge waiting point (see discussion above). While r LC is not particularly large, these mass uncertainties significantly affect the shape of the early cooling part of the burst light curve. Addressing these uncertainties would therefore be important for attempts to use the shape of this part of the light curve to constrain neutron star properties (Zamfir et al. 2012).
DISCUSSION OF COMPOSITION
A larger number of mass uncertainties affect the calculation of the composition of the burst ashes ( Tabs. 3 and 4). The synthesis of A = 79 nuclei is particularly strongly affected. In model A, the A = 79 abundance varies by three orders of magnitude from very small (0.01% mass fraction), to being one of the most important components in the composition (8% mass fraction, compare to the largest mass fraction of 23% for A = 105). In model B, the A = 79 variation is smaller but still among the largest sensitivities, almost a factor of 10 (0.06% to 0.6% mass fraction). The strong sensitivity of the A = 79 abundance to nuclear masses is primarily due to the large uncertainty of the 80 Zr mass discussed above, but also due to 79 Y (at least in model A). This is an important problem as the amount of odd mass nuclei in the burst ashes directly determines the amount of nuclear Urca-cooling in the outer neutron star crust (Schatz et al. 2014). There are a number of other important odd mass chains that are affected by mass uncertainties. The most abundant ones in model B are A = 65, 67, 69. These suffer from 30-40% uncertainties due to the mass uncertainty of 27 P and, to a lesser extent, 61 Ga. The most abundant odd mass chains in model A are A = 103 and 105. While A = 105 is not affected by mass uncertainties, A = 103 suffers a significant factor of 2 uncertainty due to the mass of 65 As. The overall most strongly affected mass numbers, besides A = 79, are A = 82 and A = 90 in model A, which are uncertain by an order of magnitude due to mass uncertainties in 83 Nb and 91 Rh, respectively.
At first sight, the list of important mass uncertainties in this paper differs significantly from previous work (Parikh et al. 2009): only a subset of the masses listed in Table IV of Parikh et al. (2009) are affected by mass uncertainties identified in this work. On the other hand, this work identifies 22 additional mass uncertainties that affect composition by more than a factor of 2 - the same criterion used in Parikh et al. (2009). There are two chief reasons for these differences. First, many new mass measurements have been carried out since AME2003, the mass table used in Parikh et al. (2009), and therefore many mass uncertainties have been eliminated. Second, Parikh et al. (2009) limited their study to Q-values with Q < 1 MeV. However, we find that there are many additional cases where photodisintegration rates for Q-values in the 1-2 MeV range are significant. Examples where mass sensitivities identified in this work affect Q (p,γ) values above 1 MeV include the 65 As(p,γ) 66 Se (2.18 MeV), 79 Y(p,γ) 80 Zr (1.56 MeV), 81 Zr(p,γ) 82 Nb (1.09 MeV), 82 Zr(p,γ) 83 Nb (1.76 MeV), 90 Ru(p,γ) 91 Rh (1.14 MeV), 99 Cd(p,γ) 100 In (1.67 MeV), and 100 Cd(p,γ) 101 In (1.71 MeV) reactions (Q (p,γ) values are given in parentheses).
There are only a few cases where differences remain unexplained by these arguments. We find a strong sensitivity to the 27 P mass uncertainty related to the αp-process waiting point 26 Si that is not found in Parikh et al. (2009). The most likely explanation is that the impact of αp-process waiting points tends to depend strongly on the detailed temperature profile. The timing of the temperature rise defines the narrow time window where it is hot enough for the reaction flow to have reached the waiting point, and for (γ,p) reactions to impede further proton capture, but where it is still cold enough for the (α,p) reaction to be slow. In particular, for a rapid temperature rise this time window may disappear altogether. This may also explain another difference with respect to Parikh et al. (2009): we do not find a sensitivity to the 50 Fe(p,γ) Q-value. However, Parikh et al. (2009) find this sensitivity only in one of their trajectories that was artificially scaled to simulate a shorter burst. Lastly, unlike Parikh et al. (2009), it is found here that 97 Cd and 98 In are important even though the Q-value for 97 Cd(p,γ) is only 0.58 MeV and should therefore have been included in their study. It is not clear what the reason for this discrepancy is.
NUCLEAR PHYSICS UNCERTAINTIES
In this work we use X-ray burst models that employ a mass table in which calculated Coulomb shifts are used to predict masses beyond the N = Z line with an accuracy of the order of 100 keV, in addition to the mass uncertainty of the neutron-rich mirror nucleus. It is in principle possible to use the isobaric mass multiplet equation (IMME) to further reduce these uncertainties. The IMME relates the energy states of nuclei within an isospin multiplet, the isobaric analogue states, and has been shown, for mass numbers up to 44, to predict masses to within several tens of keV (MacCormick & Audi 2014), better than the 100 keV typical of Coulomb shift calculations. In cases where not all members of an isospin multiplet are experimentally determined, unknown IMME coefficients can be calculated with a semi-global fit function utilizing the homogeneous charged sphere approximation. The unknown mass can then be determined from the known masses and excitation energies of the other members of the multiplet. For most of the masses of interest in this paper, insufficient experimental information on the isobaric analogue states prevents the IMME from being a useful tool.
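For reference, the IMME in its standard form expresses the mass excesses of the isobaric analogue states of a multiplet as a quadratic function of the isospin projection T_z, with coefficients that depend on A and T:

```latex
M(A, T, T_z) = a(A,T) + b(A,T)\,T_z + c(A,T)\,T_z^{2}
```

With three known members of a multiplet, the three coefficients are fully determined, and the mass of a remaining member follows by evaluating the quadratic at its T_z.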
The exceptions are the two lightest nuclei, 27 P and 56 Cu. The 27 P ground state is part of the A = 27, T = 3/2 isospin quadruplet, where the masses of the other three members, the ground state of 27 Mg and the analogue states in 27 Al and 27 Si, have been precisely measured with uncertainties of less than a few keV. We fitted the IMME to these three masses and calculated the mass excess of 27 P to be -716(7) keV. The uncertainty was obtained by varying the three masses within their uncertainties using a Monte Carlo approach and represents a significant improvement over the AME2012 uncertainty of 26 keV. We therefore recommend using this value in X-ray burst calculations. Test calculations using the smaller uncertainty show, as expected, a much reduced sensitivity to the 27 P mass uncertainty (Fig. 6). Nevertheless, even the much smaller uncertainty still leads to an up to 30% uncertainty in some of the final abundances. There is also the possibility of isospin mixing affecting the validity of the IMME (Wrede et al. 2009). A precision measurement of the 27 P mass with an accuracy of the order of 1 keV would therefore still be helpful. The other case where the IMME can be applied to reduce mass uncertainties is 56 Cu. The 56 Cu ground state mass, which is part of the A = 56, T = 1 isospin triplet, has recently been calculated using the IMME to be -38.685(82) MeV by Ong et al. (2016). Independently, using β-delayed proton spectroscopy from the T = 2, J π = 0 + state, Tu et al. (2016) have calculated a mass of -38.697(88) MeV. These error bars are somewhat smaller than the 100 keV error of the Coulomb shift calculations, slightly reducing the A = 55 abundance uncertainty induced by the 56 Cu mass (see Tab. 4).
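The Monte Carlo uncertainty propagation used for the 27 P case can be sketched as below. The input mass excesses and uncertainties here are hypothetical placeholders, not the measured A = 27 values; the quadratic IMME is fit exactly through the three known members and extrapolated to the proton-rich member at T_z = -3/2.

```python
import numpy as np

rng = np.random.default_rng(0)

tz_known = np.array([1.5, 0.5, -0.5])              # Tz of the three known members
m_known = np.array([-14580.0, -9070.0, -4130.0])   # hypothetical mass excesses (keV)
sigma = np.array([1.0, 2.0, 3.0])                  # hypothetical 1-sigma errors (keV)

predictions = []
for _ in range(10000):
    m = rng.normal(m_known, sigma)                 # vary inputs within uncertainties
    coeffs = np.polyfit(tz_known, m, 2)            # exact quadratic through 3 points
    predictions.append(np.polyval(coeffs, -1.5))   # extrapolate to Tz = -3/2

mean, std = np.mean(predictions), np.std(predictions)
print(f"predicted mass excess: {mean:.0f} +/- {std:.0f} keV")
```

The spread of the extrapolated values gives the propagated uncertainty of the predicted mass.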
Another significant source of nuclear uncertainty in X-ray bursts are reaction rates (Parikh et al. 2008; Cyburt et al. 2016). A particularly extreme case are (α,p) reactions, where recent experimental data on the level structure of the respective compound nuclei indicate that these reaction rates may be systematically overestimated by as much as 1-2 orders of magnitude (Long & Wiescher 2016). One may then ask how robust our results are against future changes of reaction rates due to improved experimental or theoretical data. To explore this question we use this extreme case and repeat the mass sensitivity study for model A with all (α,p) reaction rates above Ne, and their inverses, reduced by a factor of 100. Even though the shape of the light curve changes significantly, the resulting set of masses that affect the light curve remains essentially the same. The main difference is the 160 keV uncertainty of 28 S, which previously had a negligible effect and now has a significant (Category 1) impact on the light curve (Fig. 7) and composition (Fig. 8). However, there are some differences in the masses affecting the composition of the burst ashes. As expected from the light curve analysis, 28 S now also affects the composition. In addition, there are a number of additional mass uncertainties that would affect the composition in the case of a systematic reduction of (α,p) reaction rates, but that were not identified as important using the nominal reaction rates. These are the mass uncertainties of 71 Kr, 75 Sr, 84 Mo, 88 Tc, 87 Ru, 93 Pd, 94 Ag, and 96 Ag. Therefore, while the mass uncertainties affecting the burst light curve appear to be rather robust, in the future some iterative procedure will be needed to identify remaining mass uncertainties that affect the composition of the burst ashes as new reaction rate information becomes available, especially if the changes are large and systematic.
Similarly, reaction rate sensitivity studies may have to be repeated as new mass data become available.
CONCLUSIONS
This work provides a systematic investigation of the impact of nuclear mass uncertainties on X-ray burst models. Unlike previous studies it uses a self-consistent burst model instead of a simplified post-processing approach. This enables the investigation of the impact of mass uncertainties on burst light curve predictions, in addition to predictions of the composition of the burst ashes. The number of remaining mass uncertainties that need to be addressed is rather small. In a typical mixed H/He burst (Model B) only three mass uncertainties have a significant (Category 1) effect on the burst light curve ( 27 P, 61 Ga, 65 As). Only three additional masses ( 80 Zr, 81 Zr, 82 Nb) affect the composition by more than a factor of 2, and this impact is limited to the A = 79 − 81 mass range in the tail end of the composition distribution. In an extreme burst with a maximally extended rp-process (Model A), only 8 masses affect the light curve (Category 1 or 2), with 65 As, 66 Se, 80 Zr, 91 Rh, 62 Ge, and 58 Zn being the most relevant (Category 1). 11 additional masses along the rp-process in the A = 78 − 101 mass range affect the composition by more than a factor of 2.
The models used in this study span the range from a moderate rp-process reaching into the A = 60−64 mass range in model B to the most extreme rp-process in model A. This work therefore likely covers the critical rp-process mass uncertainties for a broad range of models of typical mixed H/He X-ray bursts. This is also supported by the reasonable agreement with the results from Parikh et al. (2009), who used a broad range of thermodynamic burst model trajectories. Confirming our results with a full 1D X-ray burst model would be useful. In addition, one could expand the investigation to bursts that have less initial hydrogen, but still enough hydrogen to not be dominated by helium burning, in which the additional mass sensitivity from the competition of forward and reverse reaction rates investigated here is not expected to occur.
Additional mass measurements not listed here may be needed as input for some theoretical reaction rate calculations until direct rate measurements become possible. These need to be identified based on the sensitivities of burst models to nuclear reaction rates (see Cyburt et al. 2016) and the mass sensitivity of the particular theoretical approach chosen to predict the critical reaction rates.
The mass uncertainties identified in this work as significant (at the 3σ level) range from 16 keV (for 85 Mo) to 1.49 MeV (Tab. 2, 3, 4). This indicates that a mass accuracy of much better than 10 keV should in most cases be sufficient to ensure mass uncertainties do not contribute significantly (at the 3σ level) to X-ray burst model errors. In the near future we can expect a significant enhancement in experimental capabilities to measure masses of extremely neutron-deficient nuclei. The new MR-TOF (Schury et al. 2014) and RI-Ring (Yamaguchi et al. 2013) devices will enable precision mass measurements at the RIKEN/RIBF rare isotope beam facility, and Penning traps for precision mass measurements at the next generation rare isotope beam facilities FRIB (Redshaw et al. 2013) and FAIR (Rodríguez et al. 2010) should come online in the next 5-10 years. With these capabilities, all nuclear masses identified in this study should easily be within reach for a sufficiently accurate measurement. This work provides a roadmap for eliminating mass uncertainties in X-ray burst models in light of these developments.
Genetic variants associated with Alzheimer’s disease confer different cerebral cortex cell-type population structure
Background: Alzheimer’s disease (AD) is characterized by neuronal loss and astrocytosis in the cerebral cortex. However, the specific effects that pathological mutations and coding variants associated with AD have on the cellular composition of the brain are often ignored.

Methods: We developed and optimized a cell-type-specific expression reference panel and employed digital deconvolution methods to determine brain cellular distribution in three independent transcriptomic studies.

Results: We found that neuronal and astrocyte relative proportions differ between healthy and diseased brains and also among AD cases that carry specific genetic risk variants. Brains of carriers of pathogenic mutations in APP, PSEN1, or PSEN2 presented lower neuron and higher astrocyte relative proportions compared to sporadic AD. Similarly, carriers of the APOE ε4 allele also showed decreased neuronal and increased astrocyte relative proportions compared to AD non-carriers. In contrast, carriers of TREM2 risk variants showed a lower degree of neuronal loss than matched AD cases in multiple independent studies.

Conclusions: These findings suggest that genetic risk factors associated with AD etiology have a specific imprinting in the cellular composition of AD brains. Our digital deconvolution reference panel provides an enhanced understanding of the fundamental molecular mechanisms underlying neurodegeneration, enables the analysis of large bulk RNA-sequencing studies for cell composition, and suggests that correcting for the cellular structure when performing transcriptomic analysis will lead to novel insights into AD.

Electronic supplementary material: The online version of this article (10.1186/s13073-018-0551-4) contains supplementary material, which is available to authorized users.
Background
Alzheimer's disease (AD) is a neurodegenerative disorder characterized clinically by gradual and progressive memory loss and pathologically by the presence of senile plaques (Aβ deposits) and neurofibrillary tangles (NFTs, Tau deposits) in the brain [1]. AD has a substantial but heterogeneous genetic component. Mutations in the amyloid-beta precursor protein (APP) and Presenilin genes (PSEN1 and PSEN2) [2,3] cause autosomal dominant AD (ADAD), which is typically associated with early onset (< 65 years). In contrast, the most common manifestation of AD presents late onset (LOAD) and accounts for the majority of cases (90-95%). Despite appearing sporadic in nature, a complex genetic architecture underlies LOAD risk. APOE ε4 is the most common genetic risk factor, increasing risk three- to eightfold [4]. In addition, recent whole genome and whole exome analyses have identified rare coding variants in TREM2 [5,6], PLD3 [7], ABCA7 [8,9], and SORL1 [10,11] that are associated with AD and confer risk comparable to that of carrying one APOE ε4 allele. Aside from age at onset, the clinical presentations of LOAD and ADAD are remarkably similar, with an amnestic and cognitive impairment phenotype [12,13]. A minor fraction of ADAD cases have additional neurological findings, sometimes also seen in LOAD [12,13].
Altered cellular composition is associated with AD progression and decline in cognition. Neuronal loss in the hippocampus is characteristic of the initial stages of AD, which could explain early memory disturbances [14,15]. As the disease progresses, neuronal death is observed throughout the cerebral cortex. Furthermore, ~25% of cognitively normal individuals who die by the age of ~75 years also presented substantial cerebral lesions that resemble AD pathology, including amyloid plaques, NFTs, and neuronal loss [16]. Thus, the identification of the brain cellular population structure is essential for understanding neurodegenerative disease progression [17]. However, stereology protocols for counting neurons can be tedious, require extensive training, and are susceptible to technical artifacts, which may lead to biased quantification of cell-type distributions [17].
Recently there has been a growing interest in understanding the transcriptomic changes attributed to AD [18][19][20][21][22][23][24][25], as these may point to underlying molecular mechanisms of disease. These studies are typically designed to analyze the expression profiles of large cohorts ascertained from homogenized regions of the brain (e.g. bulk RNA-sequencing [RNA-seq]) of affected and control donors. However, because bulk RNA-seq captures the gene expression of all the constituent cells in the sampled tissue, the altered cellular composition associated with AD has been reported to confound downstream analyses [20].
Digital deconvolution approaches enhance the interrogation of expression profiles to identify the cellular population structure of individual samples, alleviating the need for additional neurostereology procedures. These approaches have been developed, tested, and applied to ascertain altered cellular composition in many traits [26][27][28][29]. However, digital deconvolution has not been applied to identify the cellular population structure from RNA-seq of human brains of AD cases and controls. Technical constraints restrict the dissociation of cells from brain tissue to very specific conditions [30][31][32]. Nevertheless, a limited number of RNA-seq datasets from isolated cell populations from the brain have been generated [30][31][32]. Using these resources, we are now able to generate a reference panel for digital deconvolution of human brain bulk RNA-seq data.
We sought to investigate the cellular population structure in AD by analyzing RNA-seq from multiple brain regions of LOAD participants. To do so, we assembled a novel brain reference panel and evaluated the accuracy of digital deconvolution methods by analyzing additional cell-type-specific RNA-seq samples and by creating synthetic admixtures with defined cellular distributions. We then analyzed large cohorts of pathologically confirmed AD cases and controls (n = 613) and verified that our model predicts cellular distribution patterns consistent with neurodegeneration. Finally, we generated RNA-seq from the parietal lobe of participants from the Charles F. and Joanne Knight Alzheimer's Disease Research Center (Knight-ADRC) [33], including non-demented controls and LOAD cases, with enriched proportions of carriers of high-risk coding variants associated with AD, and also ADAD cases from The Dominantly Inherited Alzheimer Network [34] (DIAN). We compared the cell composition in ADAD and LOAD, and also evaluated differences among carriers of coding high-risk variants in PLD3, TREM2, and the APOE ε4 allele. Our findings indicate that cell-type composition differs among carriers of specific genetic risk factors, which might reveal distinct pathogenic mechanisms contributing to disease etiology.
Subjects and samples
DIAN and Knight-ADRC
Parietal lobe tissue of post-mortem brain was obtained with informed consent for research use and was approved by the review board of Washington University in St. Louis. RNA was extracted from frozen brain using Tissue Lyser LT and RNeasy Mini Kit (Qiagen, Hilden, Germany). RNA-seq paired-end reads with read lengths of 2 × 150 bp were generated using Illumina HiSeq 4000 with a mean coverage of 80 million reads per sample (Table 1; Additional file 1: Table S1). RNA-seq was generated for 19 brains from DIAN, and for 84 brains with LOAD and 16 non-demented controls from Knight-ADRC [33]. The AD brains selected from Knight-ADRC are enriched for carriers of variants in TREM2 (n = 20; Additional file 1: Table S1) and PLD3 (n = 33; Additional file 1: Table S1). The clinical status of participants was neuropathologically confirmed [35]. We identified three additional participants from the Knight-ADRC study with PSEN1 (A79V, I143T, S170F) mutations. Clinical Dementia Rating (CDR) scores were obtained during regular visits throughout the study before the subject's death [36]. A range of other pathological measurements was collected during autopsy, including Braak staging, as previously described [37].
RNA was extracted from frozen brain tissues using Tissue Lyser LT and RNeasy Mini Kit (Qiagen, Hilden, Germany) following the manufacturer's instruction. RIN (RNA integrity) and DV200 were measured with RNA 6000 Pico Assay using Bioanalyzer 2100 (Agilent Technologies). The RIN is determined by the software on the Bioanalyzer taking into account the entire electrophoretic trace of the RNA including the presence or absence of degradation products. The DV200 value is defined as the percentage of nucleotides > 200 nt. RIN and DV200 for all the samples can be found on Additional file 1: Table S1. The yield of each sample is determined by the Quant-iT RNA Assay (Life Technologies) on the Qubit Fluorometer (Fisher Scientific). The complementary DNA (cDNA) library was prepared with the TruSeq Stranded Total RNA Sample Prep with Ribo-Zero Gold kit (Illumina) and then sequenced by HiSeq 4000 (Illumina) using 2 × 150 paired-end reads at McDonnell Genome Institute, Washington University in St. Louis with a mean of 58.14 ± 8.62 million reads. Number of reads and other quality control (QC) metrics can be found in Additional file 1: Table S1.
Mayo Clinic Brain Bank
Mayo Clinic Brain Bank RNA-seq was accessed from the Accelerating Medicines Partnership - Alzheimer's Disease (AMP-AD) portal (synapse ID: syn5550404; accessed January 2017) (Table 1). Paired-end reads of 2 × 101 base pairs were generated by Illumina HiSeq 2000 sequencers for an average of 134.9 million reads per sample. Neuropathology criteria, quality control procedures, RNA extraction, and sequencing details are explained elsewhere [18].
RNA-seq based transcriptome data were generated from post-mortem brain tissue collected from cerebellum (CB; 189 samples) and temporal cortex (TC; 191 samples) of Caucasian subjects [18,38]. RNA was extracted using Trizol® reagent and cleaned with Qiagen RNeasy. RIN measurement was performed with Agilent Technologies 2100 Bioanalyzer. Samples with RIN > 5 were included. Library was prepared by Mayo Clinic Medical Genome Facility Gene Expression and Sequencing Cores with TruSeq RNA Sample Prep Kit (Illumina).
Mount Sinai Brain Bank
The Mount Sinai Brain Bank (MSBB) RNA-seq study was downloaded from the AMP-AD portal (synapse ID: syn3157743; accessed January 2017) (Table 1).
Induced pluripotent stem cell (iPSC)-derived neurons
Dermal fibroblasts were obtained from skin biopsies from research participants in the Knight-ADRC (Fibroblast lines: F11362, F12455, and F13504). Human fibroblasts were reprogrammed into iPSCs using non-integrating Sendai virus carrying OCT3/4, SOX2, KLF4, and cMYC [40,41]. iPSCs were manually selected and expanded on Matrigel in mTesR1 (StemCell Technologies). iPSCs were characterized for expression of pluripotency markers by immunocytochemistry and quantitative polymerase chain reaction (qPCR). qPCR with probes specific to the Sendai virus was used to confirm the absence of virus in the isolated clones. All cell lines were confirmed to have a normal karyotype based on G-band karyotyping. To generate cortical neurons, iPSCs were plated in a v-bottom plate in neural induction media (StemCell Technologies; 65,000 per well) to form highly uniform neural aggregates. After five days, neural aggregates were transferred onto PLO/laminin-coated tissue culture plates. Neural rosettes formed over 5-7 days. The resulting neural rosettes were then isolated by enzymatic selection (StemCell Technologies) and cultured as neural progenitor cells (NPCs). NPCs were then differentiated by culturing in neural maturation medium (neurobasal medium supplemented with B27, GDNF, BDNF, cAMP). RNA was collected from the cells and sequenced following the same protocol and processing pipeline as the DIAN and Knight-ADRC dataset.
In addition, we accessed RNA-seq data generated for iPSC-derived neurons from the Broad iPSC study [42] (synapse ID: syn3607401). Forebrain neurons from a wild-type background were generated using an embryoid body-based protocol to produce neural progenitor cells.

Translating ribosome affinity purification (TRAP)-seq mice
All animal procedures were performed in accordance with the guidelines of Washington University's Institutional Animal Care and Use Committee. The Rosa26 fsTRAP mice (Gt(ROSA)26Sor tm1(CAG-EGFP/Rpl10a,-birA)Wtp ) [43] (The Jackson Laboratory) were crossed with PV Cre mice (Pvalb tm1(cre)Arbr ) [44] (The Jackson Laboratory) to produce PV-TRAP mice directing expression of the EGFP-L10a ribosomal fusion protein in parvalbumin (PV)-expressing cells.
Purification of cell-type-specific messenger RNA (mRNA) by TRAP was described previously [45] with modifications. Briefly, PV-TRAP mouse brain was removed and quickly washed in ice-cold dissection buffer (1× HBSS, 2.5 mM HEPES-KOH (pH 7.3), 35 mM glucose, and 4 mM NaHCO 3 in RNase-free water). Barrel cortex was rapidly dissected and flash-frozen in liquid nitrogen, and then stored at − 80°C until use. Affinity matrix was prepared with 150 μL of Streptavidin MyOne T1 Dynabeads, 60 μg of Biotinylated Protein L, and 25 μg of each of GFP antibodies 19C8 and 19F7. The tissue was homogenized on ice in 1 mL of tissue-lysis buffer (20 mM HEPES KOH (pH 7.4), 150 mM KCl, 10 mM MgCl 2 , EDTA-free protease inhibitors, 0.5 mM DTT, 100 μg/mL cycloheximide, and 10 μL/mL rRNasin and Superasin). Homogenates were centrifuged for 10 min at 2000×g, 4°C, and 1/9 sample volume of 10% NP-40 and 300 mM DHPC were added to the supernatant at final concentration of 1% (vol/vol). After incubation on ice for 5 min, the lysate was centrifuged for 10 min at 20,000×g to pellet insolubilized material. Then 200 μL of freshly resuspended affinity matrix was added to the supernatant and incubated at 4°C for 16-18 h with gentle end-over-end mixing in a tube rotator. After incubation, the beads were collected with a magnet and resuspended in 1000 μL of high-salt buffer (20 mM HEPES KOH (pH 7.3), 350 mM KCl, 10 mM MgCl 2 , 1% NP-40, 0.5 mM DTT, and 100 μg/mL cycloheximide) and collected with magnets as above. After four times of washing with high-salt buffer, RNA was extracted using Absolutely RNA Nanoprep Kit (Agilent Technologies) following the manufacturer's instructions. RNA quantification was measured using Qubit RNA HS Assay Kit (Life Technologies) and the integrity was determined by Bioanalyzer 2100 using an RNA Pico chip (Agilent Technologies). The cDNA library was prepared with Clontech SMARTer and then sequenced by HiSeq3000. 
Single-end reads of 50 base pairs were generated for an average of 29.2 million reads per sample (24 samples).
iPSC-derived microglia
The data were accessed from the AMP-AD portal (synapse ID: syn7203233). This dataset comprises iPSC-derived microglia (n = 10) from human primitive streak-like cells [46]. Within 30 days of differentiation, myeloid progenitors co-expressing CD14 and CX3CR1 were generated. These iPSC-derived microglia were able to perform phagocytosis and elicit ADP-induced intracellular Ca 2+ transients that asserted their microglia identity as opposed to macrophage. Single-ended RNA-seq data were generated with the Illumina HiSeq 2500 platform following the Illumina protocol.
RNA-seq QC and alignment
FastQC was applied to DIAN and Knight-ADRC RNA-seq data to perform quality checks on various aspects of sequencing quality [47]. The DIAN and Knight-ADRC dataset was aligned to the human GRCh37 primary assembly using STAR (ver 2.5.2b) [48]. We used the primary assembly, aligned reads to the assembled chromosomes and to un-localized and unplaced scaffolds, and discarded alternative haploid sequences. Sequencing metrics, including coverage, distribution of reads in the genome [49], ribosomal and mitochondrial content, and alignment quality, were further obtained by applying Picard CollectRnaSeqMetrics (ver 2.8.2) to detect sample deviations. Additional QC metrics can be found in Additional file 1: Table S1.
Aligned and sorted bam files were loaded into IGV [50] to perform visual inspection of target variants. Samples carrying unexpected variants or missing expected variants were labeled as potential swapped samples. In addition, variants were called from RNA-seq following BWA/GATK pipeline [51,52]. The identity of the samples was later verified by performing IBD analysis against genomic typing from genome-wide association study chipsets.
Expression quantification
We applied Salmon transcript expression quantification (ver 0.7.2) [53] to infer the gene expression for all samples included in the reference panel and participants in the Mayo, MSBB, DIAN, and Knight-ADRC. We quantified the coding transcripts of Homo sapiens included in the GENCODE reference genome (GRCh37.75). Similarly, we quantified the expression of the mice samples included in the reference panel using the Mus musculus reference genome (mm10).
Reference panel
Reference samples
We assembled a cell-type-specific reference panel from publicly available RNA-seq datasets comprising both immunopanning collected or iPSC-derived neurons, astrocytes, oligodendrocytes, and microglial cells from human and murine samples. For immunopanning collected cells, antibodies for cell-type-specific antigens were utilized to bind and immobilize their targeted cell types in order to immunoprecipitate and purify each cell type from the suspensions [30]. cDNA synthesis was accomplished using Ovation RNA-seq system V2 (Nugen 7102) and library prepared with Next Ultra RNA-seq library prep kit from Illumina (NEB E7530) and NEB-Next® multiplex oligos from Illumina (NEB E7335 E7500). TruSeq RNA Sample Prep Kit (Illumina) was used to prepare library for paired-end sequence on 100 ng of total RNA extracted from each sample. Illumina HiSeq 2000 Sequencer was used to sequence all libraries [30].
Both human adult TC tissue, collected from patients receiving neurological surgeries, and mice cells were disassociated, sorted and sequenced as described elsewhere [31], and deposited in the Gene Expression Omnibus GSE73721 and GSE52564. We also accessed neural progenitor cells (day 17) and mature human neurons (days 57 and 100) from Broad iPSC deposited in the AMP-AD portal [42] and neural progenitor cells and iPSC-derived neurons from [54]. Broad iPSC-derived neurons accessed from the AMP-AD portal were generated using an embryoid body-based protocol to differentiate into forebrain neurons. Wild-type cells used in the protocol were obtained from UConn StemCell Core. RNA was purified using PureLink RNA mini-kit (Life Technologies) and libraries were prepared by Broad Institute's Genomics Platform using TruSeq protocol. Please refer to Additional file 1: Table S2 for additional information.
Marker genes
The reference panel was assembled with samples from four distinct cell types. A redundant set of well-known cell-type markers was selected from the literature [31,55,56] (Additional file 1: Table S3). Principal component analysis (PCA) was performed on the reference panel using the R function prcomp (version 3.3.3) to verify that the expression of these genes clustered samples by their cell types (Additional file 1: Figure S1b; Additional file 1: Figure S2a).
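This marker-gene clustering check can be sketched as follows (a minimal numpy PCA via SVD, standing in for R's prcomp; the expression matrix below is synthetic, with two cell types and eight hypothetical marker genes): samples of the same cell type should separate along the first principal component.

```python
import numpy as np

rng = np.random.default_rng(2)

# 6 samples (rows): 3 "neurons" and 3 "astrocytes"; 8 marker genes (cols).
neuron_profile = np.array([9, 8, 9, 8, 1, 1, 1, 1], dtype=float)
astrocyte_profile = np.array([1, 1, 1, 1, 9, 8, 9, 8], dtype=float)
X = np.vstack([neuron_profile + rng.normal(0, 0.3, 8) for _ in range(3)] +
              [astrocyte_profile + rng.normal(0, 0.3, 8) for _ in range(3)])

Xc = X - X.mean(axis=0)                  # center each gene
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Xc @ Vt[0]                         # sample scores on the first component

# PC1 separates the two cell types: scores share a sign within each group.
print(np.sign(pc1))
```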
Inference of the cellular population structure
We ascertained alternative computational deconvolution algorithms implemented in the CellMix package (ver 1.6). Based on the accuracy and robustness evaluation results, we compared and report the following three algorithms, which outperformed the others: the Digital Sorting Algorithm ("DSA") [27], which employs linear modeling to infer cell distributions; population-specific expression analysis (PSEA, named meanProfile in the CellMix implementation), which calculates estimated expression profiles relative to the average of the marker gene list for each cell type [29]; and a semi-supervised learning method that employs non-negative matrix factorization (ssNMF in the CellMix implementation) [57]. We employed a leave-one-out cross-validation (LOOCV) procedure to evaluate the accuracy provided by each method. The best performing algorithm, ssNMF, integrates cell-type marker genes to resolve the drawbacks of completely unsupervised standard non-negative matrix factorization. We followed the standard procedure described in the CellMix package, which included the extraction of marker genes from the reference samples (function extractMarkers from the CellMix package) and the posterior invocation of the function ged to infer the cellular population structure from the gene expression of bulk RNA-seq data. In addition, we tested other methods that provided either considerably lower accuracy (least-squares fit [58], quadratic programming [59]) or no significant difference (support vector regression [26], latent variable analysis [60]) compared to the methods presented.
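The core idea shared by these reference-based methods can be illustrated with a minimal least-squares sketch (not the CellMix ssNMF implementation; all values are synthetic): given a signature matrix of marker-gene expression per cell type, the mixing proportions of a bulk sample are estimated by solving a linear system and renormalizing the non-negative solution.

```python
import numpy as np

# Signature matrix S: rows = marker genes, columns = cell types
# (neurons, astrocytes, oligodendrocytes, microglia). Values illustrative.
S = np.array([
    [10.0, 0.1, 0.1, 0.1],   # neuronal marker
    [ 0.1, 8.0, 0.2, 0.1],   # astrocyte marker
    [ 0.2, 0.1, 9.0, 0.1],   # oligodendrocyte marker
    [ 0.1, 0.1, 0.1, 7.0],   # microglial marker
])
true_p = np.array([0.40, 0.35, 0.20, 0.05])
bulk = S @ true_p                        # noiseless synthetic "bulk" expression

p_hat, *_ = np.linalg.lstsq(S, bulk, rcond=None)
p_hat = np.clip(p_hat, 0.0, None)        # enforce non-negativity
p_hat /= p_hat.sum()                     # proportions sum to one
print(np.round(p_hat, 2))                # recovers the true proportions
```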
We selected the reference samples that provide the most faithful transcriptomic profile for their respective cell types by following a LOOCV approach. We iteratively trained deconvolution models using all but one of the samples, which was then tested. Only samples predicted with a composition > 80% were kept for the reference panel (Additional file 1: Table S2; Additional file 1: Figure S2b).
Accuracy and robustness evaluation
Chimeric validation
To emulate heterogeneous tissue with known and controlled cellular composition, we generated chimeric libraries by pooling reads (to a total of 400,000) contributed by the human reference samples (see Additional file 1: Table S2). This process was repeated 720 times, using alternative reference samples to model each cell type. The proportion of reads that the libraries of neurons, astrocytes, oligodendrocytes, and microglia contributed to the chimeric libraries varied over predefined ranges (Additional file 1: Figure S3). As a result, the chimeric libraries followed 32 different read distributions (neuronal reads contributed 2-36% of reads, astrocytes 22-76%, oligodendrocytes 6-62%, and microglia 1-5%). Refer to Additional file 1: Table S4 for a detailed description of the 32 distributions. We quantified the chimeric reads using Salmon (v0.7.2) [53] and employed the reference samples that did not contribute reads to the chimeric library as the reference panel for the deconvolution methods.
Overall, we quantified the expression of 23,040 (720 × 32) chimeric libraries. We evaluated the accuracy using the root-mean-square error (RMSE; Eq. 1) to compare the digital deconvolution cellular proportion estimates (method ssNMF) against the defined proportion of reads specific to each of the chimeric libraries:

RMSE = sqrt( (1/n) Σ_i (p̂_i − p_i)² )    (Eq. 1)

where p̂_i is the estimated proportion of cell type i, p_i is the defined proportion, and n is the number of cell types. We also tested whether the deconvolution results were dominated by the expression of any specific marker gene and ascertained the robustness of the inferred cellular population structure to possibly altered expression of marker genes. To do so, we performed the deconvolution analysis discarding each of the marker genes one at a time and evaluated how the resulting distributions differed from those obtained with the full gene reference panel.
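The chimeric accuracy evaluation can be sketched as follows, with synthetic data and a plain least-squares deconvolution standing in for the chimeric read pools and ssNMF: mixtures with known proportions are deconvolved, and accuracy is summarized by the per-mixture RMSE.

```python
import numpy as np

rng = np.random.default_rng(1)
n_genes, n_types = 50, 4
S = rng.gamma(2.0, 5.0, size=(n_genes, n_types))     # synthetic reference profiles

rmses = []
for _ in range(32):                                  # 32 mixture designs
    p = rng.dirichlet(np.ones(n_types))              # known mixing proportions
    bulk = S @ p + rng.normal(0.0, 0.1, n_genes)     # noisy chimeric sample
    est, *_ = np.linalg.lstsq(S, bulk, rcond=None)
    est = np.clip(est, 0.0, None)
    est /= est.sum()
    rmses.append(np.sqrt(np.mean((est - p) ** 2)))   # RMSE of Eq. 1

print(f"mean RMSE over 32 mixtures: {np.mean(rmses):.4f}")
```

A small mean RMSE indicates that the estimated proportions closely track the known mixing fractions.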
Statistical analysis
We employed linear regression models to test the association between cell-type proportions and disease status (R Foundation for Statistical Computing, ver.3.3.3). We used stepwise discriminant analysis (stepAIC function of R package MASS, version 7.3-45) to determine significant covariates and to correct for confounding effects. We included RIN, batch, age at death, and post-mortem interval (PMI) as covariates for the Mayo Clinic analyses. For MSBB analyses, we corrected for RIN, PMI, race, batch, and age at death. We also used linear-mixed models to perform multiple-region association analysis, employing random slopes and random intercepts grouping by observations and by donors [61], and correcting for the same covariates previously described.
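The covariate-adjusted association test can be illustrated with a plain ordinary-least-squares fit solved via the normal equations. The actual analyses used R (lm/lmer with stepAIC-selected covariates such as RIN and PMI); the toy data below are invented for illustration.

```python
# Minimal sketch of the association test described above: ordinary least
# squares of cell-type proportion on disease status plus a covariate.
# Variable names and values are illustrative, not study data.

def solve(A, y):
    """Gauss-Jordan solve of the normal equations (A^T A) beta = A^T y."""
    n = len(A[0])
    M = [[sum(A[r][i] * A[r][j] for r in range(len(A))) for j in range(n)]
         + [sum(A[r][i] * y[r] for r in range(len(A)))] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(n):
            if r != col:
                M[r] = [a - M[r][col] * b for a, b in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]

# Toy noise-free data: proportion = 0.5 - 0.10*AD_status + 0.01*RIN
status = [0, 0, 1, 1, 0, 1]
rin = [6.0, 7.0, 6.5, 8.0, 7.5, 7.0]
y = [0.5 - 0.10 * s + 0.01 * r for s, r in zip(status, rin)]
X = [[1.0, s, r] for s, r in zip(status, rin)]
beta = solve(X, y)
print([round(b, 3) for b in beta])  # → [0.5, -0.1, 0.01]
```

In the real analyses the coefficient on disease status (here the second entry of `beta`) is the reported effect size β.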
To analyze the DIAN and Knight-ADRC studies, we applied linear-mixed models (function lmer and Anova, R packages lme4 ver.1.1 and car ver.2.1, respectively), clustering at family level to ascertain the effect of the neuropathological status in the cell proportion and corrected for RIN and PMI. For late-onset specific analyses we also corrected for age at death. Cellular composition shown as proportions were plotted using R package ggplot2 (ver 2.2.1).
Study design
To infer cellular composition from RNA-seq, we first assembled a reference panel to model the transcriptomic signature of neurons, astrocytes, oligodendrocytes, and microglia. The panel was created by analyzing expression data from purified cell lines. We evaluated alternative digital deconvolution methods and selected the best performing for our primary analyses. We tested the digital deconvolution accuracy on iPSC-derived neurons/microglia cells and neuronal TRAP-seq (Fig. 1).
Finally, we verified its accuracy by creating artificial admixtures with pre-defined cellular proportions.
Once the deconvolution approach was optimized, we calculated the cell proportions in AD cases and controls from the different brain regions of the Mayo and MSBB datasets. The RNA-seq data for the Mayo Clinic study (n = 191) [18] and MSBB (n = 300) [39] are deposited in the AMP-AD knowledge portal (synapse ID: syn5550404 and syn3157743; Table 1). The Mayo study includes RNA-seq from the TC and CB for AD-affected and non-demented controls, in addition to pathological aging (PA) participants (Fig. 1). The MSBB also profiled four additional cerebral cortex areas (APC, STG, PHG, and IFG; Table 1 and Fig. 1). We restricted the case-control analysis to subjects with definite AD and autopsy-confirmed controls. In addition, we generated RNA-seq from the parietal lobe for participants of the Knight-ADRC (84 late-onset cases, carriers of genetic risk factors, and 16 controls; Additional file 1: Table S1) and The Dominantly Inherited Alzheimer Network (DIAN; 19 carriers of mutations in APP, PSEN1, PSEN2) (Table 1; Fig. 1). We employed the same pipeline to process all of the samples in order to avoid any bias. Furthermore, the RNA-seq from the Knight-ADRC and DIAN studies allowed us to compare the cell composition of ADAD vs LOAD brains, and similarly to test for differences among brains of controls, sporadic AD cases who do not carry any known high-risk variant, and carriers of high-risk variants in TREM2 (n = 20), PLD3 (n = 33), and the APOE ε4 allele.
Development of a reference panel to estimate brain cellular population structure
Due to limited availability of brain cell-type-specific transcriptomic data, we compiled reference samples from different sources, including single-population RNA-seq from mice and humans (immunopanning-purified oligodendrocytes, neurons, astrocytes, and microglia, and iPSC-derived neurons and astrocytes).
We first tried to create a transcriptome-wide reference panel by selecting the genes that are differentially expressed among cell types [26,60,62]. However, the species heterogeneity of the reference samples we compiled ruled out this attempt, as the PCA showed that differences between the human and mouse donor samples dominated the transcriptome-wide profiles (Additional file 1: Figure S1a). For this reason, we curated a list of marker genes that have been described to tag these distinct cell types [31,55,56] (Additional file 1: Table S3). A visual inspection of the expression of these marker genes in the samples we compiled suggested a divergent transcriptomic profile among the cell types (Additional file 1: Figure S2a). The PCA showed that their expression was sufficient to cluster samples of neurons, astrocytes, oligodendrocytes, and microglia with their respective cell types, regardless of the species of the reference samples (Additional file 1: Figure S1b; Additional file 1: Table S2). We observed that some samples did not cluster with their expected cell types, and coincidentally the LOOCV indicated that these samples had expression signatures that differed from the other samples of the same cell type. However, we found that all of these outliers corresponded to samples that were not correctly purified or that were sequenced in early stages of differentiation (Additional file 1: Supplementary Results). After discarding these samples, we assessed six digital deconvolution algorithms implemented in the CellMix package [62] and found that ssNMF [57] calculated the most accurate estimates (see "Methods"). Our final reference panel (Additional file 1: Table S2; Additional file 1: Table S3) predicted cell types with very high confidence (mean predicted accuracy = 95.2%, s.d. = 4.3%; Additional file 1: Figure S2b) and an RMSE = 0.06 (Additional file 1: Table S5).
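The deconvolution itself can be illustrated as a constrained regression: given a signature matrix of marker-gene expression per cell type, estimate non-negative mixing weights for a bulk sample. The sketch below uses a simple projected gradient descent on a toy problem and is only illustrative; the study used the ssNMF implementation in the CellMix R package.

```python
# Illustrative sketch of signature-based deconvolution (not the CellMix
# ssNMF implementation): given a genes x cell-types signature matrix S and
# a bulk expression vector b, find non-negative weights w with S @ w ≈ b,
# here via a simple projected gradient descent.

def deconvolve(S, b, steps=5000, lr=0.01):
    n_types = len(S[0])
    w = [1.0 / n_types] * n_types
    for _ in range(steps):
        # residual r = S w - b
        r = [sum(S[g][t] * w[t] for t in range(n_types)) - b[g]
             for g in range(len(S))]
        # gradient of 0.5*||r||^2 w.r.t. w, then project onto w >= 0
        for t in range(n_types):
            grad = sum(S[g][t] * r[g] for g in range(len(S)))
            w[t] = max(0.0, w[t] - lr * grad)
    total = sum(w)
    return [x / total for x in w]  # report as proportions

# Toy signatures for 3 marker genes x 2 cell types; bulk is a 70/30 mix.
S = [[10.0, 1.0],
     [2.0, 8.0],
     [6.0, 3.0]]
true_w = [0.7, 0.3]
b = [sum(S[g][t] * true_w[t] for t in range(2)) for g in range(3)]
est = deconvolve(S, b)
print([round(x, 2) for x in est])  # → [0.7, 0.3]
```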
Optimization, validation, and accuracy estimation of the reference panel and digital deconvolution method
Once we identified the optimal approach to perform digital deconvolution from brain RNA-seq, we benchmarked it by using three sets of independent pure cell populations and simulated chimeric libraries.
We first validated the accuracy of predicting neuronal composition by generating RNA-seq for eight iPSC-derived cortical neuron samples (see "Methods"). We observed accurate predictions in these independent cell lines (mean neuronal proportion = 94.8%, s.d. = 1.1%; Additional file 1: Figure S4a). We also ascertained the cellular composition of mRNA extracted from barrel cortex neurons isolated by TRAP in 24 mice. TRAP is a method that captures cell-type-specific mRNA translation by purifying a tagged ribosomal subunit and capturing the mRNA bound to it [45]. We observed an average neuronal proportion of 96.7% (s.d. = 1.2%; Additional file 1: Figure S4b). Similarly, we assessed the RNA-seq data generated for iPSC-derived microglia (n = 10) deposited in the AMP-AD portal (synapse ID: syn7203233), inferred their cellular population structure, and observed a mean microglia proportion of 86.6% (s.d. = 7.1%; Additional file 1: Figure S4c).
To evaluate the accuracy of digital deconvolution for measuring cell-type proportions from cell-type admixtures, we simulated RNA-seq libraries by pooling reads from individual cell types in well-defined proportions. We combined randomly sampled reads from neurons, astrocytes, oligodendrocytes, and microglia to create chimeric libraries that mimic bulk RNA-seq from brain, but with a range of pre-defined cell-type distributions (Additional file 1: Figure S3). We then quantified the gene expression for the chimeric libraries and inferred the cell-type distribution (employing as the reference panel samples that did not contribute reads to the chimeric libraries). This process was repeated 23,040 times, choosing distinct human samples to represent each cell type and varying the proportions over 32 alternative distributions (see "Methods" and Additional file 1: Table S4). The overall error compared to the known proportions was RMSE = 0.08.
Finally, we evaluated whether any gene included in the reference panel dominated the inference of cell proportions. We re-calculated the cell-type distributions of the chimeric libraries, dropping each of the genes from the reference panel one at a time. We observed a negligible difference between the cellular population structure inferred using the full reference panel and the gene-dropped panels (average RMSE = 0.022, s.d. < 0.01). In this way, we verified that the proportions inferred using the reference panel are not driven by the expression of a single gene. This reassured us that the inference should be robust to any bias introduced by the potential association of a single gene in the reference panel with a particular trait.
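The leave-one-gene-out robustness check can be sketched as below. `estimate` is a crude placeholder estimator (share of summed marker expression per cell type) standing in for the actual deconvolution call, and the marker genes and expression values shown are illustrative.

```python
import math

# Sketch of the leave-one-gene-out robustness check: drop each marker gene,
# re-estimate proportions, and measure how far the result moves (RMSE)
# from the full-panel estimate.

def estimate(bulk, markers):
    """Placeholder estimator: share of summed marker expression per type."""
    totals = {ct: sum(bulk.get(g, 0.0) for g in genes)
              for ct, genes in markers.items()}
    s = sum(totals.values())
    return {ct: v / s for ct, v in totals.items()}

def rmse(a, b):
    keys = sorted(a)
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in keys) / len(keys))

markers = {"neuron": ["SNAP25", "SYT1"], "astrocyte": ["GFAP", "AQP4"]}
bulk = {"SNAP25": 30.0, "SYT1": 28.0, "GFAP": 20.0, "AQP4": 22.0}
full = estimate(bulk, markers)

drops = []
for ct, genes in markers.items():
    for g in genes:
        reduced = {c: [x for x in gs if x != g] for c, gs in markers.items()}
        drops.append(rmse(full, estimate(bulk, reduced)))

print(round(max(drops), 3))  # largest deviation across dropped genes
```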
Deconvolution of bulk RNA-seq of non-demented and AD brains shows a characteristic signature for neurodegeneration
Pathologically, AD is associated with neuronal death and gliosis, specifically in the cerebral cortex. We evaluated whether we could exploit deconvolution methods using our reference panel to detect altered cellular population structure from the bulk RNA-seq and whether this corresponded to known pathological alterations.

(See figure on previous page.) Fig. 1 Study design. Development of the brain cell-type transcriptomic reference panel (left column): the expression signatures of key cell types of the brain were curated by compiling publicly available RNA-seq data from neurons, astrocytes, oligodendrocytes, and microglia. The panel was curated iteratively to retain only those samples that showed the most faithful expression signature, while evaluating alternative digital deconvolution methods. The accuracy of digital deconvolution to estimate brain cellular proportions was validated using additional cell-type-specific samples and also by generating chimeric libraries. To study cellular population structure in AD (right column), we accessed publicly available data from the AMP-AD, including the Mayo Clinic and MSBB datasets. In addition, we generated RNA-seq from participants of the Knight-ADRC and DIAN studies. These three studies generated RNA-seq data from PA brains, AD cases, and neuropathology-free controls in a total of six cerebral cortex regions and cerebellum. We quantified the gene expression for all of the samples included in these studies using the same RNA-seq processing pipeline. Using digital deconvolution methods, we estimated the brain cellular proportions of the samples and compared the proportions between AD cases and controls. We studied the cell structure of brains of carriers of Mendelian pathogenic mutations and variants that confer high risk of AD. APC anterior prefrontal cortex, STG superior temporal gyrus, PHG parahippocampal gyrus, IFG inferior frontal gyrus, MSBB Mount Sinai Brain Bank, AD Alzheimer's disease, PA pathological aging
The distribution of microglia was similar in the TC and CB from AD and control brains (Table 2; Additional file 1: Figure S5). The proportion of microglia was lower than that of any other cell type. The Mayo dataset also includes brains from individuals with PA (Table 1), which is neuropathologically defined by amyloid-beta (Aβ) senile plaque deposits but little or no neurofibrillary tau pathology [18,63]. We observed a significantly lower relative proportion of microglia in PA brains compared to AD in both the TC and CB (Additional file 1: Table S7; Additional file 1: Figure S6). Therefore, we speculated that the lack of changes in the AD microglial population was due neither to low statistical power nor to the inability of our method to estimate microglial proportions but reflected neuropathological observations in AD brains.
We also analyzed data from the MSBB, which contains bulk RNA-seq for four additional cerebral cortex areas (APC, STG, PHG, IFG). Replicating our findings from the Mayo dataset, we observed a significantly lower relative proportion of neurons and an increased relative proportion of astrocytes in all four areas (Table 2; Fig. 2; and Additional file 1: Table S6). The strongest effect size was detected in the PHG and STG (p < 3.49 × 10 −07 ) (Table 2; Additional file 1: Table S8). Neuropathological studies have described that the PHG is one of the first brain areas in which AD pathology occurs [64][65][66]. We also observed a significant and strong correlation between the neuronal and astrocyte relative proportions and the last ascertained clinical status (CDR), the number of amyloid plaques, and Braak staging (Table 2; Fig. 2; Additional file 1: Figure S7).
The cellular population structure differs between ADAD vs LOAD
While the loss of neurons is a common feature of AD, it is not clear whether the mechanism holds true across different forms of AD or AD cases carrying different genetic risk variants. Therefore, we investigated whether AD with distinct etiologies showed different cellular compositions. We generated RNA-seq data from the parietal lobe of participants enrolled in Knight-ADRC (84 LOAD, 3 ADAD, and 16 neuropath-free controls) and DIAN (19 ADAD) studies (Table 1; Additional file 1: Table S1). We selected the LOAD and ADAD participants to match for CDR at death, brain weight, and sex distributions (see Additional file 1: Table S1).
Next, we compared the cell proportions of LOAD vs ADAD and found that the cell composition differs between them. We first selected the LOAD brains (n = 25) to match the Braak staging distribution of the ADAD brains (n = 17). The ADAD brains showed a significantly lower relative neuronal proportion compared to LOAD brains (β = − 0.08; p = 1.03 × 10 −02 ; Table 3) and an increased relative astrocyte proportion (β = 0.11; p = 9.26 × 10 −04 ; Table 3). Then, we analyzed the entire set of Knight-ADRC LOAD brains, extending the model to correct for Braak stage. We again observed a significantly lower relative neuronal proportion (β = − 0.09; p = 4.71 × 10 −03 ; Table 3; Fig. 3a; Additional file 1: Table S9) and an increased relative astrocyte proportion (β = 0.11; p = 5.24 × 10 −04 ; Table 3; Fig. 3a; Additional file 1: Table S9) in ADAD brains compared to LOAD. We observed the same cellular differences when we corrected for CDR at death (β = − 0.12; p = 2.11 × 10 −03 for neurons and β = 0.13; p = 6.29 × 10 −04 for astrocytes; Table 3; Fig. 3b, c). In summary, our results indicate that ADAD individuals present greater neuronal loss even at the same stage of the disease, suggesting that in ADAD neuronal death plays a more important role in pathogenesis compared to sporadic AD, in which other factors such as inflammation or immune response may be involved.

(Table 2 legend.) The cell-type proportions from AD cases and controls were inferred from bulk RNA-seq using the ssNMF method. Effects of AD and associations with additional clinical and pathological phenotypes in cell-type distributions were estimated using linear regression models. CB cerebellum, TC temporal cortex, APC anterior prefrontal cortex, STG superior temporal gyrus, PHG parahippocampal gyrus
Specific genetic variants confer a distinctive cell composition profile
A variety of genetic variants increase the risk of LOAD; however, it is unclear whether the cellular mechanisms are the same across these distinct risk factors. Therefore, we tested the hypothesis that distinct genetic causes of LOAD have characteristic cellular population signatures. We initially ascertained the effect of APOE ε4 on the cell-type composition. We observed a significantly lower relative proportion of neurons (β = − 0.06 for each of the ε4 alleles; p = 9.91 × 10 −03 ) and an increased relative proportion of astrocytes (β = 0.10; p = 4.15 × 10 −02 ) in the TC included in the Mayo Clinic dataset (Additional file 1: Table S10; Fig. 4a; Additional file 1: Figure S9a). This finding was replicated when we performed a multi-area analysis of the MSBB dataset (β = − 0.04; p = 2.60 × 10 −03 and β = 0.05; p = 1.31 × 10 −03 for neurons and astrocytes, respectively; Table 4; Fig. 4a; Additional file 1: Table S10; Additional file 1: Figure S9a). Given the strong risk conferred by the APOE ε4 allele [4], we studied its effects on the cell-type composition by restricting our analysis to AD brains. We observed a significantly lower relative proportion of neurons in the multi-area analysis of the MSBB dataset (β = − 0.03; p = 4.01 × 10 −02 ; Table 4; Fig. 4b; Additional file 1: Table S11; Additional file 1: Figure S9b) and also a significant increase in the relative proportion of astrocytes (β = 0.03; p = 1.23 × 10 −02 ; Table 4; Fig. 4b; Additional file 1: Table S11; Additional file 1: Figure S9b). We also identified similar trends with approximately the same significance levels (Table 4) and a significant association for the relative proportion of astrocytes in the MSBB (β = 0.04; p = 4.89 × 10 −02 ; Table 4). Furthermore, we performed a meta-analysis to combine the evidence of both studies and observed a significant association of the relative neuronal proportion with the APOE ε4 allele (p = 1.86 × 10 −02 ) and a marginally significant association for the relative astrocytic proportion (p = 0.09). Next, we analyzed the cellular composition in PLD3 carriers (n = 33). PLD3 carriers exhibited a significantly lower relative proportion of neurons compared to controls (β = − 0.10; p = 1.60 × 10 −04 ; Fig. 3d) and a significantly higher relative proportion of astrocytes (β = 0.13; p = 2.84 × 10 −03 ; Table 4; Fig. 3d). Sporadic AD non-carrier cases also exhibited a significantly lower relative proportion of neurons compared to controls (β = − 0.11; p = 5.45 × 10 −03 ) and a significantly higher relative proportion of astrocytes (β = 0.13; p = 2.95 × 10 −04 ; Table 4; Fig. 3d). The cell proportions between sporadic AD non-carriers and PLD3 carriers did not show any significant difference (p > 0.05).

(Fig. 3 legend.) AD includes both autosomal dominant AD (ADAD) and late-onset AD (LOAD). The cellular population structure was inferred using the ssNMF method. Effects and p-values are for the association with disease status, clinical dementia rating, and Braak staging using generalized mixed models.
Finally, we performed similar analyses with TREM2 carriers. TREM2 is involved in the immune response, and its role in amyloid-β deposition or clearance remains controversial [67]. Our analysis of the Knight-ADRC data showed a significantly higher relative astrocytic proportion in AD-affected TREM2 carriers (n = 20) compared to controls (β = 0.11; p = 1.05 × 10 −02 ; Table 4; Fig. 3d). Although TREM2 carriers presented a lower relative neuronal proportion compared to controls, this difference was not statistically significant (p > 0.05; Table 4; Fig. 3d). We analyzed whether the number of TREM2 carriers provided sufficient power to detect a significant association. Our empirical estimates showed that the TREM2 sample size provides 96% power to detect an association with an effect size comparable to that observed for sporadic AD (β = − 0.11). We also investigated the cellular proportions of the 11 TREM2 carriers in the MSBB dataset. The multi-region analysis showed that TREM2 carriers do not show a significant difference in relative neuronal proportion compared to controls (p > 0.05; Table 4; Fig. 4e), whereas in the AD TREM2 non-carriers the relative neuronal and astrocytic proportions are significantly different from controls (β = − 0.07; p = 1.91 × 10 −08 and β = 0.08; p = 1.25 × 10 −08 , respectively; Table 4; Fig. 4e).
In fact, our analyses indicate that TREM2 carriers have a unique cellular brain composition distinct from that of other AD cases. TREM2 brains showed a significantly higher relative neuronal proportion (β = 0.05; p = 1.98 × 10 −02 ) and a significantly lower relative astrocyte proportion than the AD non-carriers (β = − 0.05; p = 1.58 × 10 −02 ; Table 4). The distributions of CDR, mean number of amyloid plaques, and Braak staging do not differ between strata. Nonetheless, we verified that the cellular proportions were still significantly different after correcting for each of those variables (Table 4). These results suggest that the mechanisms that lead to disease in TREM2 carriers are less neuron-centric than in the general AD population.
Discussion
We have developed, optimized, and validated a digital deconvolution approach to infer cell composition from bulk brain gene expression that integrates publicly available cell-type specific expression data while addressing the heterogeneity of the phenotypic differences of samples and technical characteristics of transcriptome ascertainment. We acknowledge that the accuracy of this platform might be affected by the phenotypic diversity of the reference panel or the disease-induced dysregulation of genes it includes. However, the deconvolution approach proved to be robust to the genes included in the reference panel, as we demonstrated that the proportions it inferred are not driven by the expression of any single gene. This platform produced reliable cell proportion estimates, as was shown by the evaluation of independent datasets of iPSC-derived neurons and microglia, mice cortical neurons (Additional file 1: Figure S4), and simulated chimeric libraries.
We used this approach to deconvolve studies that include large numbers of neuropathologically defined AD and control brains with their transcriptomes ascertained in distinct brain regions. We consistently observed a significantly lower relative neuronal proportion and an increased relative astrocyte proportion in the cerebral cortex, suggesting neuronal loss and astrocytosis. Consistent with other studies, we also identified that the altered cellular proportions are significantly associated with decline in cognition and Braak staging [68]. In contrast, we did not identify a significant difference in the cellular population structure in the cerebellum, a region not affected in AD (Table 2; Fig. 2a).
We generated RNA-seq data from brains carrying pathogenic mutations in APP, PSEN1, and PSEN2, which cause alterations in Aβ processing and lead to ADAD, and also generated RNA-seq from brains of LOAD cases and neuropathology-free controls. We observed altered cell composition in both ADAD and LOAD compared to controls. However, we identified that ADAD brains have a different cell-type composition than disease-stage-matched LOAD, as ADAD shows a significantly lower relative neuronal proportion and more pronounced astrocytosis. Given the specific cellular population structure of the TREM2 carriers, we compared the neuronal and astrocytic relative proportions of ADAD to those of LOAD non-carriers of variants in TREM2 and observed significant differences (β = − 0.09; p = 6.89 × 10 −03 for neurons and β = 0.10; p = 1.49 × 10 −03 for astrocytes). This indicates that the differences in relative proportions between ADAD and LOAD are not driven by TREM2 carrier brains. Based on our results, we hypothesize that the altered Aβ processing of ADAD leads more directly to neuronal death than the pathological processes of LOAD. Similarly, decreased neuronal and increased astrocyte relative proportions were significantly associated with the APOE ε4 allele. It has been reported that the APOE ε4 allele increases the risk for AD by affecting APP metabolism or Aβ clearance [69,70], suggesting a direct link between APP metabolism and neuronal death.
In contrast, the analysis of the Knight-ADRC brains showed that the decrease in relative neuronal proportion is less pronounced in TREM2 carriers than in other LOAD cases. We replicated this finding in a multi-area analysis of the MSBB dataset. These results may indicate that TREM2 risk variants lead to a cascade of pathological events that differs from that occurring in sporadic AD cases, which is also consistent with the known biology of TREM2. Further longitudinal neuroimaging analysis is required to validate our findings. TREM2 is involved in AD pathology through microglia-mediated pathways, implicating altered immune response and inflammation [71]. Recent studies in TREM2 knock-out animals showed that fewer microglia cells were found surrounding Aβ plaques, with impaired microgliosis [72]. Furthermore, TREM2 deficiency was reported to attenuate tauopathy-associated neuroinflammation and protect against brain atrophy [73]. We found no significant difference in the proportion of microglia between AD cases and controls. However, we found significantly decreased microglia in brains exhibiting PA (Additional file 1: Table S7; Additional file 1: Figure S6), indicating that these studies are sufficiently powered to identify significant differences. In any case, we cannot rule out the possibility of a change in the activation state of microglia in these individuals. Overall, these results suggest that TREM2 affects AD risk through a slightly different mechanism from that of ADAD or LOAD in general, and that other pathogenic mechanisms contribute to disease in TREM2 carriers. We believe that detailed modeling of immune response cells, reflecting the alternative microglia activation states, will generate more accurate profiles to elucidate the immune cell distribution in AD.
Conclusions
There is a large interest in the scientific community in using brain expression studies to identify novel pathogenic mechanisms in AD and novel therapeutic targets. These efforts are generating a large amount of bulk RNA-seq data, as single-cell RNA sequencing (scRNA-seq) of human brain tissue in large sample sizes is not feasible. Single-cell sorting needs to be performed with fresh tissue [74], which precludes the analysis of the highly characterized fresh-frozen brains collected by AD research centers. Our results indicate that digital deconvolution methods can accurately infer relative cell distributions from brain bulk RNA-seq data, but we recognize the importance of obtaining traditional neuropathological measures to validate the results we observed. Having this approach validated for AD can have an important impact on the community, because digital deconvolution analyses can: (1) reveal distinct cellular composition patterns underlying different disease etiologies; (2) provide additional insights into the overall pathologic mechanisms underlying mutations and risk variants in genes such as TREM2, APOE, APP, PSEN1, and PSEN2; (3) correct for the effects that altered cell composition and genetic status have on downstream transcriptomic analyses, leading to novel and informative results; and (4) enable the analysis of highly informative frozen brains collected over the years.
In conclusion, our study provides a reliable approach to enhance our understanding of the fundamental cellular mechanisms involved in AD and enables the analysis of large bulk RNA-seq datasets that may lead to novel discoveries and insights into neurodegeneration.

Acknowledgements
… in constructing the RNA-seq libraries and generating sequence data for our project. This work was also supported by access to equipment made possible by the Hope Center for Neurological Disorders and the Departments of Neurology and Psychiatry at Washington University School of Medicine. We also thank Allison M. Lake for her comments and suggestions.
Physical Activity Behavior of Patients at a Skilled Nursing Facility: Longitudinal Cohort Study
Background: On-body wearable sensors have been used to predict adverse outcomes such as hospitalizations or falls, thereby enabling clinicians to develop better intervention guidelines and personalized models of care to prevent harmful outcomes. In our previous work, we introduced a generic remote patient monitoring framework (Sensing At-Risk Population) that draws on the classification of human movements using a wearable sensor and the extraction of indoor localization using Bluetooth beacons, in concert. Using the same framework, this study addresses the longitudinal analysis of a cohort of patients in a skilled nursing facility. We investigate whether the metrics derived from a remote patient monitoring system comprised of physical activity and indoor localization sensors, as well as their associations with therapist assessments, provide additional insight into the recovery process of patients receiving rehabilitation. Results: Of the 110 individuals in the analytic sample, with a mean age of 79.4 (SD 5.9) years, 79 (72%) were female and 31 (28%) were male. The energy intensity of an individual while in the therapy area was positively associated with transfer activities (β = .22; SE 0.08; P = .02). Sitting energy intensity showed a positive association with transfer activities (β = .16; SE 0.07; P = .02). Lying-down energy intensity was negatively associated with hygiene activities (β = –.27; SE 0.14; P = .04). The interaction of sitting energy intensity with time (β = –.13; SE 0.06; P = .04) was associated with toileting activities. Conclusions: This study demonstrates that a combination of indoor localization and physical activity tracking produces a series of features, a subset of which can provide crucial information to the story line of daily and longitudinal activity patterns of patients receiving rehabilitation at a skilled nursing facility.
The findings suggest that detecting physical activity changes within locations may offer some insight into better characterizing patients’ progress or decline.
Introduction
The population aged 65 years and older is projected to double in size to 83.7 million by 2050 in the United States alone [1]. With the increase in the geriatric population, health care use is expected to increase drastically, with a concomitant demand for rehabilitation and in-home care after hospitalization. Many hospitalized older adults are discharged with new or worsened limitations in activities of daily living (ADL). Identification of patients' unmet ADL needs in terms of functional status at the time of discharge and after they return home could help address vulnerabilities prior to hospital discharge. Functional disability, prevalent among older adults, is a multidimensional concept that involves factors reflected in a person's basic actions, including mobility, ADL, cognition, and vision. Whether a patient has sufficient ability to perform their ADL and maintain mobility can be a predictor of whether they are able to remain in the community. Functional status is an important predictor of health outcomes, and emphasis on better quantifying it and understanding its limitations over longer periods of time is warranted [2][3][4][5].
In rehabilitation settings, patients work with physical and occupational therapists depending on their disability. Their functional status is measured by standardized scales to evaluate impaired motor functions, limitations in performing daily activities, reaching, grasping capabilities, and so on. While such scales may not always fully capture the motor functions, completion of a task by patients may also not always reflect improvement in motor functions in that patients learn to adopt different "synergistic patterns to compensate for lost functions" [2]. In such scenarios, physical activity wearable sensors can provide quantifiable and accurate measures of human body movements through which the effect of an injury or a disease on the movement system can be investigated. However, despite the widespread use of such technologies, their clinical use has yet to translate from "bench to bedside" [2][3][4][5][6][7][8][9][10][11][12][13][14][15][16].
With the advent of commercially available low-cost and lightweight sensors over the past decade, the development of remote health monitoring systems has been extensively fostered and largely investigated as a tool to provide constant vigilance to patients. Their portability and ease of use make them widely practical and applicable in a variety of living settings, providing a comprehensive illustration of activities of daily living for patients living with mobility deficits as well as healthy individuals.
In a previous study [16] we reported on the performance of our developed remote monitoring system, Sensing At-Risk Population (SARP), which is comprised of activity tracking wearable sensors and indoor localization sensors. We monitored the first 3 days of patients in subacute rehabilitation environment (baseline) using SARP. This paper extends that analysis by looking at the longitudinal data captured by SARP system in a skilled nursing facility. The goal of our analysis was to determine if longitudinal changes of sensor-based physical activity and indoor localization features of patients receiving rehabilitation can complement changes captured by therapist assessments over the course of rehabilitation in the skilled nursing facility.
Participants
From June 2016 to November 2017, patients were recruited after admission to a subacute rehabilitation center in Los Angeles. A longitudinal study of the physical therapy, occupational therapy, and sensor-based data assessments was performed. The study cohort contains patients admitted to a skilled nursing facility for an intended rehabilitation course of no more than 21 days. After this period, patients were either re-admitted to the hospital, returned to the community or their own residence, or stayed in long-term care.
Participants were eligible if older than 60 years of age, English speaking, and able to sign a consent form approved by University of California, Los Angeles, Institutional Review Board (IRB# 16-000166 entitled Sensing in At-Risk Populations). Exclusion criteria were movement disorders or complete paralysis of the upper or lower extremities. The diversity of cohort comprised patients who were postsurgical and poststroke and had functional limitations because of medical illnesses.
Study Design
Patients were given a smartwatch every morning at 9 AM, and the watches were collected from them at around 6 PM daily.
Sensors placed throughout the facility collected data passively without any interaction required from patients. Patients normally stayed in the resident room (bedroom) and were scheduled for an hour of daily exercise and activity in the therapy area of the nursing home.
SARP System Overview
The core of SARP is comprised of the following hardware: (1) a commercially available Sony SmartWatch 3 with a built-in EM7180 ±2 g triaxial accelerometer, a 420 mAh battery, and a BCM43340 Bluetooth module; and (2) proximity beacons (MCU ARM Cortex-M4 32-bit processor) mounted at locations of interest within resident rooms (bedrooms) and the therapy area, shown as red dots in Figure 1. It also includes clinically validated software: activity recognition, indoor localization, and data visualization algorithms, all encompassed within a Health Insurance Portability and Accountability Act-compliant infrastructure. Figure 1. Skilled nursing facility map with beacon placements shown with red dots [16]. Details of the system architecture can be found in [16][17][18][19][20], and the patent is described in [21]. Activity tracking and indoor localization models were built, validated, and refined prior to this study on a separate cohort of patients [17].
Clinical Features
Clinical assessments in this study are 2-fold: physical therapy (PT) and occupational therapy (OT). PT and OT metrics included functional activities such as bed mobility (including rolling, moving between supine and sitting, scooting in supine, and scooting on the edge of the bed), gait (movement patterns that make up walking and associated interpretations), transfers (moving the body from one surface to another without walking), hygiene, toileting, and lower body dressing. These activities were scored on functional levels (1 to 6), from independent to completely dependent [22]. A comprehensive set of PT and OT key metrics was collected every week; hence, patients were expected to have ≥3 PT or OT assessments within 21 days. In this study, a subset of clinical features was chosen; these features were common in more than 65% (n=72) of patients' PT and OT visits. The most common PT functional activities, performed by more than 65% of the cohort, are as follows: gait distance (in feet), transfer activity, and bed mobility, including movement from supine to sit. Common OT functional activities are comprised of lower body dressing, toileting activity, hygiene, and overall ability to tolerate daily activities (activity tolerance).
Sensor-Based Features
Time and frequency domain characteristics of the accelerometer signal (mean, median, variance, skewness, kurtosis, peak frequency, and peak power) were used to determine physical activities. Indoor localization was achieved by using beacons mounted at locations of interest.
The metrics captured from smartwatches and beacons were used to infer the following features: (1) activity recognition measures such as sitting time and standing time; (2) indoor localizations, such as time in bed, in the bathroom, or in the therapy area; and (3) raw acceleration quantification (ie, mean absolute deviation, which is approximately equal to energy spent). By combining these attributes, we obtained features such as sitting time in bed, energy spent while walking, lying down time in bed, and so on. The equations that produced the sensor-based feature quantifications can be found in Table 1.
To simplify the result and avoid unnecessary complexity, we focused on the most comprehensive and significant sensor-based feature (ie, energy intensity trends), consistent with analysis shown in [16]. Table 1. Sensor-based features.
(1) Signal magnitude of the triaxial accelerometer signal.
(2) MAD (mean absolute deviation) of the accelerometer magnitude signal ≈ energy spent.
(3) Hand displacement in 10 s when the threshold on MAD = 0.02 m/s².
Energy intensity: energy spent in walking, sitting, standing, or lying, or in a location of interest, divided by the corresponding time spent. In addition to the energy intensity spent at each location, we calculated the total energy intensity in the resident room; the energy intensity for the therapy room was calculated similarly.
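The MAD-as-energy proxy described in Table 1 can be sketched as follows. This is a minimal illustration: the function name, the sampling rate, and the synthetic trace are assumptions, not part of the SARP codebase.

```python
import numpy as np

def energy_intensity(acc, t_start, t_stop):
    """Approximate energy spent and energy intensity from a triaxial trace.

    `acc` is an (n, 3) array of accelerations in m/s^2 collected over the
    interval [t_start, t_stop] (seconds). Follows the MAD-as-energy proxy
    of Table 1; the function itself is illustrative.
    """
    magnitude = np.linalg.norm(acc, axis=1)              # signal magnitude
    mad = np.mean(np.abs(magnitude - magnitude.mean()))  # MAD ~ energy spent
    duration = t_stop - t_start
    intensity = mad / duration if duration > 0 else 0.0  # energy / time spent
    return mad, intensity

# Example: 10 s of 50 Hz data, gravity on one axis plus small wrist movement.
rng = np.random.default_rng(0)
acc = np.column_stack([rng.normal(0, 0.05, 500),
                       rng.normal(0, 0.05, 500),
                       9.81 + rng.normal(0, 0.05, 500)])
mad, intensity = energy_intensity(acc, 0.0, 10.0)
```

Dividing the MAD-based energy by the time spent in a posture or location gives the per-location energy intensity used throughout the paper.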
Analysis Inclusion Criteria
Analysis inclusion criteria were defined to ensure all patients satisfy a minimum amount of daily sensor data and collected PT and OT assessments. Analysis criteria include patients with the following data: (1) ≥3 days of watch data; (2) each day ≥4 hours of watch wear time; and (3) ≥3 sessions of PT or OT or a combination of both PT and OT.
Cohort data were agglomerated for analyses according to the consort diagram shown in Figure 2.
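The three inclusion criteria above can be expressed as a simple filter over per-patient summaries. The record layout and field names here are hypothetical, chosen only to make the rule concrete.

```python
# Hypothetical per-patient summaries; field names are illustrative,
# not the study's actual schema.
patients = [
    {"id": "P1", "watch_hours_per_day": [5.2, 6.1, 4.8, 7.0], "pt": 2, "ot": 2},
    {"id": "P2", "watch_hours_per_day": [3.0, 6.5],            "pt": 4, "ot": 0},
    {"id": "P3", "watch_hours_per_day": [8.0, 7.5, 6.0],       "pt": 1, "ot": 1},
]

def meets_inclusion(p):
    # Criterion 2: a valid day has >= 4 hours of watch wear time.
    valid_days = [h for h in p["watch_hours_per_day"] if h >= 4]
    return (len(valid_days) >= 3          # criterion 1: >= 3 valid days of watch data
            and p["pt"] + p["ot"] >= 3)   # criterion 3: >= 3 PT/OT sessions combined

included = [p["id"] for p in patients if meets_inclusion(p)]
```

Here only the first hypothetical patient satisfies all three criteria: the second has too few valid wear days, and the third has too few therapy sessions.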
Statistical Analyses
Visualizations were generated prior to analysis to reveal any longitudinal patterns. The time trends of sensor-based features appeared to be approximately linear; hence, we decided to use linear models for longitudinal analysis.
Descriptive statistics (medians and IQRs) were computed for clinical assessments (ie, PT and OT) at each session. A generalized linear mixed effect model was used to understand the longitudinal relationships between the clinical measures and the sensor-based features [23][24][25][26]. Because sensor and clinical assessments were collected at different frequencies, we merged each day of clinical assessment data with its corresponding day or the closest day containing sensor data (SD 3 days). Note that a valid day of sensor data had to satisfy analysis inclusion criteria 1 and 2.
Three models, each with different sets of sensor-based features, were constructed for each clinical outcome. Model 1 included overall energy intensity as covariate. Model 2 considered energy intensity at resident room and energy intensity at therapy area as covariates. Additionally, sensor-based activity parameters (eg, energy intensity of sitting) were used in model 3. Linear time indicates the number of weeks since the enrollment day. Interaction effects of sensor features with time were also included.
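A model of the kind described above (model 1: clinical score against time, overall energy intensity, and their interaction, with a per-patient random intercept) can be sketched with statsmodels. The synthetic data, column names, and effect sizes below are illustrative stand-ins for the merged clinical-sensor dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic long-format data standing in for merged clinical + sensor days;
# column names and effect sizes are illustrative.
rng = np.random.default_rng(1)
rows = []
for pid in range(30):
    base = rng.normal(3, 0.5)                       # patient-specific baseline
    for week in range(3):                            # weeks since enrollment
        energy = rng.normal(1.0 + 0.1 * week, 0.1)   # sensor covariate
        score = base + 0.3 * week + 0.2 * energy + rng.normal(0, 0.2)
        rows.append({"patient": pid, "week": week,
                     "energy": energy, "score": score})
df = pd.DataFrame(rows)

# Model 1 analogue: fixed effects for time, energy intensity, and their
# interaction; random intercept per patient.
model = smf.mixedlm("score ~ week + energy + week:energy",
                    df, groups=df["patient"])
fit = model.fit()
```

`fit.summary()` then reports the β, SE, and P values analogous to those quoted in the Results; models 2 and 3 would simply swap in the location-specific and activity-specific energy covariates.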
Ethics Approval
This study was reviewed and approved by the University of California, Los Angeles, Institutional Review Board (IRB# 16-000166).
Demographic Analysis
Of 184 consented patients, 110 (60%) met the watch wearing time protocol, with a mean age of 79.4 (SD 5.9) years. Moreover, 97 (88%) patients were included in the PT-watch paired analysis and 60 (54%) in the OT-watch analysis. Most participants were female (n=79, 72%) and of White race or ethnicity (n=84, 76%). Additionally, 62% (n=69) of the patients had pain, 99% (n=109) needed some level of assistance with functional mobility activities (transfer activity), and 75% (n=83) needed assistive devices for walking. Table 2 presents detailed sociodemographic and clinical characteristics of the 110 patients. ADL parameters and their significance in determining the outcome are presented based on initial assessments performed at the time of admission or within one day of it.
Longitudinal Analysis of All Features (Sensor and Clinical Measurements)
The community group expended higher overall energy intensity and energy intensity in the resident room compared to the hospital group, as seen in Figures S1 (a) and S1 (b) of Multimedia Appendix 1. However, energy intensity during therapy sessions tended to have similar values between the two groups, especially toward the end of the rehabilitation period, as seen in Figure S1 (c) of Multimedia Appendix 1.
The descriptive statistics of clinical parameters are summarized in Table 3. It shows that "gait distance feet" increases over time (median and IQR after the first week), and "activity tolerance" increases (IQR after first week and median after second week). The table indicates no clear improvements in other clinical-based measures gauged by PT and OT functional levels within 3 weeks.
Longitudinal Association Between Clinical Measures and Sensor-Based Features
The associations of repeated PT, OT, and sensor-based measurements were modeled through three generalized linear mixed models. On OT and sensor associations, Table 4 shows that lower body dressing, toileting activity, and activity tolerance in general improved every week in all three models. A higher value of overall energy intensity in model 1 implied a higher functional score for lower body dressing (β=.19; SE=0.09; P=.03) and toileting activity (β=.23; SE=0.09; P=.01).
Longitudinal Analyses of Location Occurrences Between 2 Outcome Categories of Patients
The occurrence of a location is equal to the number of times a patient spends more than 40 continuous seconds within that specific location. In other words, if the smartwatch receives the Bluetooth low energy signal of a beacon corresponding to a location for 40 seconds, the occurrence of that location increases by one unit. Figure 3 (a and b) shows total daily occurrences of patients in various nursing facility locations, normalized by the number of patients in each category. Darker colors indicate a higher frequency of patients visiting a particular location. In short, patients in the outcome category "home" traveled within the facility (resident and therapy areas) much more frequently than patients eventually admitted to longer-term care, the "hospital" group. Additionally, no patient in the hospital category used the upper body exercise (SciFit), Endorphin, and stair equipment in the therapy area.
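The 40-second occurrence rule can be sketched as a single pass over time-ordered beacon readings. The data layout assumed here (one `(timestamp, location)` sample per second) is an illustration, not the SARP wire format.

```python
def location_occurrences(samples, min_dwell=40):
    """Count location occurrences from time-ordered (timestamp_s, location)
    beacon readings: an occurrence is a continuous stay of at least
    `min_dwell` seconds at one location, as defined in the text."""
    counts = {}
    if not samples:
        return counts
    start_t, cur_loc = samples[0]
    counted = False                       # has this stay been counted yet?
    for t, loc in samples[1:]:
        if loc != cur_loc:                # location changed: start a new stay
            cur_loc, start_t, counted = loc, t, False
        if not counted and t - start_t >= min_dwell:
            counts[cur_loc] = counts.get(cur_loc, 0) + 1
            counted = True                # count each continuous stay once
    return counts

# 50 s in bed followed by 100 s in the therapy area -> one occurrence each.
samples = [(t, "bed") for t in range(0, 50)] + \
          [(t, "therapy") for t in range(50, 150)]
counts = location_occurrences(samples)
```

Normalizing such counts by the number of patients per outcome category gives the per-location frequencies visualized in Figure 3.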
Overview
To the best of our knowledge, this paper and what we described in [13] are the first to explore a combination of indoor localization and physical activity tracking to assess older residents. Following the baseline investigations [13], in this paper we highlight significant findings in longitudinal analyses of clinical and sensor-based features.
Activity With Therapist Versus Resident Time Alone and the Value of Indoor Localization
One of the principal findings of this study is that the energy intensity spent in therapy sessions, unlike in the resident room, tended to have similar values in both outcome groups, more markedly toward the end of the rehabilitation period (Figure S1 in Multimedia Appendix 1). Perhaps patients in both groups are encouraged by their therapists to complete their therapy activities as part of an individually designed therapeutic program aimed at improving functional activity. Moreover, energy intensity spent in the resident room is very similar to overall energy intensity, in that patients generally spend most of their time in the resident room. Resident room activity levels are likely to be crucial in determining the outcome of patients, even at early stages of their rehabilitation. The carryover of skills learned during therapeutic intervention into the resident room warrants further study.
Based on Table 3, the PT and OT features investigated in this study all improved over time, along with the sensor-based feature, energy intensity. However, improvements are more distinguishable between admission day and weeks 1 and 2. In week 3, the mean value for sensor-based features such as overall energy intensity declines. Similarly, OT and PT features show less change compared to week 1 and admission day. One possible reason could be the drop in sample size after week 2, as patients are likely to be discharged earlier. Note that despite the steady PT and OT functional scores at later times, the interquartile range decreases over time, which indicates less variation in functional levels. This could mean that residents achieved their functional goals or plateaued in functional progression. Other aspects that limit a resident's functional ability need to be examined to determine whether nonmotor parameters are limiting a resident's progress. Cognition, vision, and psychological factors are some of the areas that may limit functional progression.
Table 3 also shows that, except for "gait distance in feet," the improvement of features was not evident after the 2nd and 3rd week. Further exploration of therapy treatment intensity or type of intervention is warranted. Significant improvements in "gait distance in feet" suggest the importance of this feature in clinical assessment. The rest of the gait measures were less likely to change over time. Dynamic gait parameters and their relation to mobility in daily activities need more investigation.
Sensor-Based Features and Changes in Clinical Assessments
The captured sensor-based longitudinal changes, such as lying down, sitting, and overall energy intensity, reflect changes in PT and OT features (Table 4). This finding confirms the benefit of remote patient monitoring systems as adjunct tools to further reveal patients' daily story lines. Such systems can carry valuable information for further understanding the type and intensity of therapy interventions that impact overall functional outcome. Brisk activity features remained surprisingly unchanged over time, even though patients were expected to become at least partially less sedentary while recovering functional abilities. Average sedentary time among all patients was more than 99.8% and remained unchanged. In other words, the cohort was walking less than 0.2% of the time, measured objectively by the SARP wrist-worn sensor. This finding strongly suggests that focusing on sedentary features among elderly patients is beneficial, confirming the studies in [27][28][29], contrary to the emphasis many patient monitoring systems place on using activity trackers to count steps [30,31]. This study shows the importance of translating all movements into measurements such as energy, or energy intensity, rather than solely relying on steps. This may shed light on the type of intervention needed for improving the mobility of the elderly resident population.
Study Limitations
This study had some limitations. Wrist-worn accelerometers used for activity recognition are popular because of their ease of use and ability to capture a comprehensive set of activities. However, interpreting users' data in sedentary positions such as sitting or standing can be quite challenging. Movements (or lack thereof) in sedentary positions are hard to distinguish with wrist-worn sensors [32]. Compliance with technology is another obstacle faced in this study. Patient acceptance of the technology is a challenge expected to be common to similar studies.
Battery consumption of smartwatches can be problematic when trying to transmit data hourly or daily. Battery lifetimes are normally insufficient across almost all smartwatch brands. Their operating systems are designed to perform sophisticated tasks, many of which, such as receiving messages and calls, are not needed for remote patient monitoring. Furthermore, consumer-grade wearables show wide variability in their accuracy across a range of functional activities depending on their placement, the individual's movement characteristics, walking speed, use of assistive devices, and so on. The best way to tackle this problem is to use wearable sensors specifically designed (hardware and software) for patient monitoring. However, commercially available research-grade sensors are very expensive and not yet clinician and patient friendly [33].
The study cohort had two outcome groups that were not equally represented. The data set predominantly comprised majority class instances and contained only a few instances of patients who were re-admitted to long-term care. As with most imbalanced medical data sets, analyzing such data poses a great challenge [34].
Conclusions
This study aimed to show that wearable activity trackers, despite concerns about their efficacy in quantifying residents' health, can result in a better understanding of patients' well-being when tailored to a specific cohort. Such studies can hopefully pave the way for early prediction of hospitalization, development of intervention alerts, and improvement of the overall quality of care. As discussed, our remote patient monitoring system, SARP, captures a combination of indoor localization and physical activity features. SARP information on daily and longitudinal activity patterns can be incorporated into mobile health technology platforms to provide a better assessment of underrepresented, particularly frail, populations.
Streamlines and Detached Wakes in Steady Flow past a Spherical Liquid Drop
The flow interior and exterior to a viscous liquid drop in steady motion in an unbounded quiescent fluid is investigated using the perturbation solution of Taylor and Acrivos (1964) to first order in the Reynolds number. New analytical results are derived for the detached wake behind the drop. It is found that as the viscosity of the drop tends to infinity the wake becomes attached to the surface of the drop and the results of Proudman and Pearson (1957) for a solid sphere are rederived.
INTRODUCTION
The objective of the paper is to derive new analytical results for the streamlines and for the wake in the steady flow of a viscous fluid past a spherical liquid drop.
Van Dyke [1] applied the singular perturbation solution of Proudman and Pearson [2] for slow viscous flow past a solid sphere to analyse the attached wake behind the sphere. He found that although the perturbation solution was derived for Reynolds numbers Re < 1, the results obtained were in good agreement with experimental and numerical results for values up to Re = 60. The prediction of Proudman and Pearson [2] that standing eddies first appear behind the sphere at Re = 8 agrees well with the numerical value of 8.5 obtained by Jenson [3] using the full Navier-Stokes equation and with the experimental value of 12 obtained by Taneda [4]. Van Dyke also found good agreement, up to Re = 60, between the perturbation solution to first order in Re and the experimental and numerical values for the length of the attached wake. For Re ≳ 60 the flow behind the sphere becomes unsteady. We will derive new analytical results for the axisymmetric flow past a viscous liquid drop with constant surface tension using the singular perturbation solution of Taylor and Acrivos [5]. We will assume that the interfacial tension is large so that the Weber number is small and therefore the deformation of the spherical drop is small. The perturbation solution depends on two parameters, the Reynolds number Re and the ratio of the viscosity of the drop to the viscosity of the surrounding fluid, κ. Although the perturbation solution was derived for Re < 1, we will consider Re > 1, as was done with the perturbation solution of Proudman and Pearson for flow past a solid sphere. There is evidence that the predictions of the perturbation solution of Taylor and Acrivos may be applicable for Re > 1. For instance, Wellek et al [6] found that the Taylor and Acrivos solution quite accurately predicted drop eccentricities for drop Reynolds numbers up to Re = 20. The perturbation expansions for large κ should be applicable to flow past a very viscous drop.
STREAM FUNCTIONS
The singular perturbation solution of Taylor and Acrivos [5] describes the steady axisymmetric motion under gravity of a viscous drop slightly deformed from the spherical shape in an unbounded quiescent fluid. The fluids are incompressible and immiscible, and the interfacial tension σ between the viscous drop and the surrounding fluid is uniform. Physical variables inside the drop are distinguished from corresponding variables outside the drop by a circumflex. A fixed spherical polar coordinate system (r, θ, ϕ) is used with origin at the centre of mass of the viscous drop. All the fluid dynamical variables are dimensionless and independent of ϕ. The characteristic length is the radius, a, of the spherical drop with the same volume, and the characteristic velocity is the terminal velocity U of the drop. The Reynolds number, defined in terms of the parameters of the exterior fluid, and the viscosity ratio κ are Re = Ua/ν and κ = η̂/η, where η is the shear viscosity and ν = η/ρ.
Taylor and Acrivos used the method of matched asymptotic expansions for the solution exterior to the drop. The straightforward expansion in powers of Re is the inner expansion exterior to the drop. The inner expansion is used to analyse the exterior flow close to the drop, which includes the attached wake. It was found that there is no deformation of the drop at zero order in Re. The boundary conditions for the first order solution are therefore imposed at r = 1. The inner expansion exterior to the drop to first order in Re, equation (2.2), holds as Re → 0, and the stream function inside the liquid drop to first order in Re, equation (2.3), likewise holds as Re → 0. The stream functions (2.2) and (2.3) depend only on Re and κ and are independent of the Weber number We and the density ratio γ, where We = ρU²a/σ and γ = ρ̂/ρ. This is because the boundary conditions for the order Re solution are imposed on the zero order surface of the drop, which is not deformed. The deformation of the drop to first order in Re is proportional to We and depends on γ and κ. The results therefore apply only for small Weber number.
DETACHED WAKE
We now investigate the properties of the wake behind the drop using the stream function (2.2).
End points of the boundary of the standing eddy
From (2.2), ψ(r, θ) = 0 on the surface of the drop r = 1, along the axis of symmetry θ = 0 and θ = π, and also along the curve given by equation (3.1). Equation (3.1) is the boundary of the standing eddy behind the drop. It generates a surface of revolution about the line θ = 0. The end points of the boundary are its points of intersection with the axis of symmetry θ = 0 and are obtained by putting cos θ = 1 in (3.1). The points therefore satisfy the cubic equation P(r) = 0, equation (3.2), where P(r), defined in (3.3), depends on Re and κ.
For a solid sphere, κ = ∞, and (3.3) reduces to a cubic with the factor (r − 1), equation (3.4), whose roots are given in (3.5). In the limit κ = ∞, (3.2) has one negative root, which is not physical, and two positive roots. Since r = 1 is one end point, the boundary of the standing eddy is attached to the solid sphere. For the eddy to lie in the flow field the third root in (3.5) must be greater than unity, that is, Re > 8 [1, 2]. In the limit κ = 0, which describes an inviscid gas bubble, (3.2) reduces to r³ = 0. Equation (3.2) then has three coincident roots at r = 0, which is in agreement with the result that there is no standing eddy behind a spherical bubble because no vorticity is generated upstream on the surface of an inviscid bubble [11]. Consider now 0 < κ < ∞. General properties of the roots of (3.2) can be obtained from Descartes' rule of signs [12]. Since there are two changes of sign in the coefficients of P(r), equation (3.2) cannot have more than two positive roots. Since P(0) > 0 and P(r) → ∞ as r → ∞, there will be either two distinct positive roots, two coincident positive roots or no positive roots. Further, since P(1) > 0, the two positive roots, when they exist, will either both be greater than unity or both lie between 0 and 1. When they exist, the two positive roots greater than unity are the end points of the boundary of the standing eddy. There is one change of sign in the coefficients of P(−r) and therefore (3.2) cannot have more than one negative root. Since P(0) > 0 and P(r) → −∞ as r → −∞, there will be exactly one negative root. This negative root is not physical.
In order to transform (3.2) to the standard form of a cubic equation, let s be defined by the linear shift of r given in (3.6) [12]. Equation (3.2) then becomes
s³ + 3Hs + G = 0, (3.7)
where H, given by (3.8), and G, given by (3.9), depend on Re and κ. The standing eddy exists when all three roots of (3.7) are real, that is, when [12]
G² + 4H³ ≤ 0. (3.10)
For condition (3.10) to apply it is necessary that H < 0, which is satisfied by (3.8). When (3.10) holds, the solution of (3.7) is [12]
s_n = 2√(−H) cos(ϕ + 2nπ/3), n = 0, 1, 2, (3.11)
where ϕ satisfies
cos 3ϕ = −G/(2(−H)^(3/2)). (3.12)
Hence, transforming back from s to r using (3.6), the three real roots of (3.2), when they exist, are given by (3.13), where n = 0, 1, 2 and ϕ is the solution of (3.12) in the range 0 ≤ ϕ ≤ π/3. Consider now the special case in which the two positive real roots are coincident. Coincident roots occur when the standing eddy first appears in the downstream wake. Equation (3.7) has three real roots, with two roots the same and one different, if [12]
G² + 4H³ = 0 (3.14)
and H ≠ 0 and G ≠ 0. Thus for two coincident real roots, cos 3ϕ = ±1. If cos 3ϕ = +1 then ϕ = 0 and it can be verified that r_1 = r_2 < 0. To obtain the coincident positive roots we therefore consider cos 3ϕ = −1, that is, ϕ = π/3. Equation (3.13) with n = 1 gives r_1 < 0.
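The trigonometric solution of the standard-form cubic used above can be sketched generically and checked against a general polynomial solver. The values of H and G below are arbitrary examples satisfying the three-real-root condition, not the paper's coefficients (which depend on Re and κ).

```python
import numpy as np

def depressed_cubic_roots(H, G):
    """Trigonometric solution of s^3 + 3*H*s + G = 0 in the three-real-root
    case G**2 + 4*H**3 <= 0 (which requires H < 0), following the classical
    formulas cited in the text from [12]."""
    assert H < 0 and G**2 + 4 * H**3 <= 0
    # cos(3*phi) = -G / (2*(-H)^(3/2)), with phi taken in [0, pi/3].
    phi = np.arccos(-G / (2 * (-H) ** 1.5)) / 3
    return [2 * np.sqrt(-H) * np.cos(phi + 2 * np.pi * n / 3)
            for n in range(3)]

# Sample values: G**2 + 4*H**3 = 0.25 - 4 < 0, so three real roots exist.
H, G = -1.0, 0.5
roots = sorted(depressed_cubic_roots(H, G))
expected = sorted(np.roots([1.0, 0.0, 3 * H, G]).real)  # companion-matrix check
```

The coincident-root case of (3.14) corresponds to the boundary G² + 4H³ = 0, where two of the three cosines coincide.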
Equation (3.13) with n = 0 and n = 2 gives the two coincident positive roots, r_0 and r_2. Let r_A = r_0 = r_2; r_A is given by (3.17). The Reynolds number Re_A satisfies (3.14) and has still to be determined. It is the Reynolds number at which the standing eddy first appears as Re is increased from zero. The distance from the centre of mass of the drop to the point on the axis of symmetry where the eddy first appears is r_A. The points of intersection of the boundary of the standing eddy with the axis of symmetry are r_0 and r_2, given by (3.13) for n = 0 and n = 2. The solution r_1 for n = 1 yields the negative root. Graphs of r_0 and r_2 plotted against Re for a range of values of κ are shown in Figure 1. For κ = ∞, r_2 = 1 and the boundary of the standing eddy is attached to the surface of the solid sphere. For finite κ, r_2 > 1 and the boundary of the standing eddy is detached, as shown for κ = 2, 3, 5, 10 and 30. For finite κ the wake is detached from the surface of the drop, in agreement with numerical predictions [10]. When Re = Re_A for a given κ, then r_0 = r_2 = r_A. As Re increases from Re_A the length of the wake increases. The end point r_0 moves downstream while the end point r_2 moves upstream towards the surface of the drop. The point on the axis of symmetry where the eddy first appears will be outside the drop if r_A > 1. It follows from (3.17) that r_A > 1 when Re > Re*, where Re* is given by (3.18). It is readily verified that for 0 < κ < ∞, Re* > 8, and that Re* → 8 as κ → ∞. In the limit of a solid sphere, κ = ∞ and Re* = 8. For a solid sphere r_A = 1 and Re* = Re_A = 8, which is the prediction of Proudman and Pearson [2]. For a drop, Re* ≠ Re_A because r_A > 1, which is clearly seen in Figure 1. In the next subsection we will investigate Re_A and r_A.
Taylor and Acrivos [5] have observed that in all cases of physical significance the drop will be deformed into an oblate (flattened at θ = 0 and θ = π) rather than a prolate (flattened at θ = π/2) spheroid.Thus r A > 1 will be exterior to the deformed drop in all cases of physical significance even when the deformation is included.
Reynolds number for which standing eddy first appears
The Reynolds number Re_A for which the standing eddy first appears satisfies equation (3.14). Substituting (3.8) and (3.9) into (3.14) gives for Re_A the cubic equation F(Re_A) = 0, equation (3.20). When κ = ∞, (3.20) reduces to (Re − 8)²(Re + 1/3) = 0, so that (3.20) has two coincident roots at Re = 8 and one negative root, Re = −1/3, which is not physical. Hence for a solid sphere, Re_A = 8, in agreement with the result derived from (3.5). Consider now 0 < κ < ∞. There are two changes of sign in the coefficients of F(Re) and therefore, by Descartes' rule of signs, F(Re) = 0 cannot have more than two positive roots. But F(∞) = +∞, F(Re*) < 0, F(8) < 0 and F(0) > 0, where Re* is given by (3.18). Thus F(Re) = 0 has at least two positive roots and hence there are exactly two positive roots, Re_S and Re_L, where Re_S < Re_L. Then
0 < Re_S < 8 < Re* < Re_L. (3.23)
Since Re_L > Re*, r_A(Re_L) > 1, where r_A is given by (3.17), and therefore Re_L = Re_A, the value of the Reynolds number at which the standing eddy first appears behind the drop. Since Re_S < Re*, r_A(Re_S) < 1 and therefore r_A would lie inside the drop, which is not physical. There is one change of sign in the coefficients of F(−Re) and therefore F(Re) = 0 cannot have more than one negative root. Since F(0) > 0 and F(−∞) = −∞, F(Re) = 0 has at least one negative root and hence exactly one negative root. This negative root is not physical.
In order to evaluate (3.17) analytically we will derive a perturbation solution for Re_A for large values of κ. The perturbation solution is in a more suitable form for interpreting the results than the exact solution. Let ε = 1/κ and y = Re; (3.20) then becomes a cubic in y with ε-dependent coefficients. Expanding y in powers of ε as in (3.25) and solving order by order as ε → 0, one finds that when y_0 = 8, it follows from (3.28) that y_1 = ∞. The assumed form (3.25) for the expansion of y when y_0 = 8 is therefore not correct.
The resulting perturbation expansion for Re_A is given by (3.39) as ε → 0; only Re_A is physically significant. In Figure 2, the numerical solution of (3.20) for Re_A and the perturbation solution (3.39) are compared. The Reynolds number Re* given by (3.18) is also plotted. The perturbation expansion (3.39) is a good approximation for 5 ≲ κ ≤ ∞. The numerical curve Re = Re_A divides the (κ, Re) plane into two regions. For Re > Re_A the standing eddy exists downstream of the drop, while for Re < Re_A it does not exist. For Re = Re_A the standing eddy first appears for the given value of κ.
Figure 2. Re_A and Re*, given by (3.18), plotted against κ. For a given value of κ, the standing eddy exists in the downstream wake if Re > Re_A.
The perturbation solution for r_A is obtained by substituting (3.39) into (3.17) and expanding for large κ, giving (3.40) as κ → ∞. The expansion (3.40) clearly shows that the standing eddy first appears at a point in the flow downstream of the drop. In Figure 3, r_A calculated from (3.17) using the numerical solution of (3.20) for Re_A is compared with (3.40).
The perturbation expansion underestimates the numerical solution but is a good approximation for 5 ≲ κ ≤ ∞. The distance from the surface of the drop, which we approximate as r = 1, to where the standing eddy first appears, r_A − 1, is approximately proportional to κ^(−1/2) as κ → ∞. As κ increases, the distance behind the drop to the point where the eddy first appears decreases, and in the solid-sphere limit, κ = ∞, the eddy may be imagined to penetrate through the surface and appear in the flow [1].
Stagnation points
The radial and tangential components of the fluid velocity are
v_r = (1/(r² sin θ)) ∂ψ/∂θ, v_θ = −(1/(r sin θ)) ∂ψ/∂r. (3.42)
Thus, using (2.2) for ψ(r, θ), v_r(r, 0) vanishes where (r − 1)P(r) = 0, with P(r) defined by (3.3). Now the two distinct roots of P(r) = 0 greater than unity are the end points of the boundary of the standing eddy on the axis of symmetry θ = 0. The end points of the boundary of the standing eddy are therefore stagnation points, where v_r = v_θ = 0. The point at which the eddy first appears is also a stagnation point. At this point the two stagnation points are coincident and the radial velocity v_r(r, 0) attains a local minimum value. The third stagnation point on the line θ = 0 is the rear stagnation point r = 1. As κ → ∞, the stagnation point closest to the drop, as well as the point at which the eddy first appears, tends to the rear stagnation point.
STREAMLINES INSIDE THE LIQUID DROP
The stream function inside the drop to first order in Re is (2.3). From (2.3), ψ(r, θ) = 0 on the surface of the drop r = 1 and along the axis of symmetry, θ = 0 and θ = π. When 0 ≤ r < 1 and θ ≠ 0 and θ ≠ π, ψ(r, θ) < 0. There is therefore no boundary curve, ψ(r, θ) = 0, which divides the interior flow into two regions similar to the boundary of the standing eddy in the exterior flow. There is only one flow region in the axial plane inside the drop, and the streamlines form closed curves which extend over the whole of the axial plane. This is in agreement with numerical solutions [8,9,10].
In Figure 4 the streamlines inside and outside the drop are plotted for κ = 5 and Re = 40. The standing eddy exists downstream of the drop since, from Figure 1, Re > Re_A.
RESULTS AND DISCUSSION
The analysis predicts a detached wake for flow past a liquid drop, in contrast to the attached wake for flow past a solid sphere, which is consistent with numerical results and experiment. The existence of standing eddies downstream of the drop is due to the accumulation of vorticity generated upstream on the surface of the drop [10]. If Re < Re_A, convection will transport away the vorticity generated at the interface and no standing eddy will form. If Re > Re_A, the vorticity generated at the interface will form a standing eddy behind the drop. The wake grows in size as Re increases, and also as κ increases, since the no-slip condition at the interface then becomes a more effective source of vorticity. The wake is detached from the drop because of the internal flow of the drop. A liquid drop with an attached wake would require a secondary interior vortex [10]. As κ increases, the strength of the flow inside the drop decreases and the wake moves closer to the surface of the drop. In the limit κ → ∞ there is no internal flow and the wake is attached.
The stream functions (2.2) and (2.3) to first order in Re, and therefore the results derived from them, depend only on two parameters, Re and κ, and apply only for small Weber number, We ≪ 1. The numerical and experimental results in the literature depend on four parameters: Re, κ, We and the density ratio γ. We will compare our analytical predictions with the numerical results of Dandy and Leal [10]. The Reynolds number and Weber number used by these authors are twice the values defined in (2.1) and (2.2).
Dandy and Leal [10] investigated the dependence of the structure of the wake on We for Re = 50 and κ = 4. For We = 1 the drop is almost spherical and r_0, the maximum extension of the wake downstream, is approximately 2.4, while the analytical prediction (3.13) is r_0 = 2.14. As We is increased the drop becomes an oblate spheroid, and for We = 2, 3 and 4, r_0 ≃ 3, 4.3 and 4.6. The comparison between the analytical and numerical predictions is quite good for We = 1, but there is strong dependence on We and the analytical result is not reliable for We > 1.
The numerical plots of the streamlines in Dandy and Leal [10] can also be used to check the predictions of Figure 1. For Re = 50 and We = 2 their streamline plots show that the standing eddy has not appeared for κ = 2 but is present in the flow for κ = 4, while for Re = 30 the standing eddy has not appeared for κ = 4 (We = 0.25) but is present for κ = 10 (We = 2). These numerical results are consistent with Figure 1.
Equation (3.41) predicts that for We ≪ 1, the distance from the point where the eddy first appears to the surface of the drop decreases like κ^(−1/2) as κ → ∞. We were not able to find data to check this prediction.
The perturbation solution of Taylor and Acrivos was derived for Re < 1, but it gave a good qualitative description of the flow features for Re ≲ 60. Van Dyke also found that the perturbation solution of Proudman and Pearson for Re < 1 gave good predictions for the downstream end of the boundary of standing eddies for Re ≲ 60. There are other examples in fluid mechanics where predictions have been made by giving the parameters values greater than permitted in the derivation of the solution. Longuet-Higgins [14] extended the range of the solution for capillary waves to predict the entrainment of air bubbles in wave troughs, and Sostarecz and Belmonte [15] extended their model of a viscoelastic drop to predict that the boundary will self-intersect, which could describe internal pinch-off at the trailing edge of the drop.
CONCLUDING REMARKS
Several new analytical results were presented. It was shown analytically that the wake is detached from the drop and that the end points of the boundary of the standing eddy on the axis of symmetry, as well as the point at which the eddy first appears, are stagnation points. We also saw analytically that inside the drop the streamlines form closed curves which extend over the whole of the axial plane. The end points of the boundary of the standing eddy were obtained as functions of the Reynolds number Re and the viscosity ratio κ. Useful singular perturbation expansions in powers of κ^(−1/2) were derived for the Reynolds number at which the downstream eddy first appears and for the point in the flow at which it first appears. It was predicted that the distance from the surface of the drop to the point where the eddy first appears is approximately proportional to κ^(−1/2) for large κ.
Figure 1: Points of intersection of the boundary of the standing eddy with the axis of symmetry plotted against Re for κ = 2, 3, 5, 10, 30 and ∞.
Figure 2: Graphs of (a) the numerical solution for Re_A, (b) the perturbation solution (3.39) for Re_A and (c) Re* given by (3.18), plotted against κ. For a given value of κ, the standing eddy exists in the downstream wake if Re > Re_A.
Figure 3: Distance from the centre of mass of the drop to the point where the standing eddy first appears, r_A, plotted against κ: numerical solution (solid curve), perturbation solution (3.40) (dashed curve).
Figure 4: Streamlines for flow past a drop with κ = 5 and Re = 40. The direction of the flow is from left to right.
Wild bats briefly decouple sound production from wingbeats to increase sensory flow during prey captures
Summary

Active sensing animals such as echolocating bats produce the energy with which they probe their environment. The intense echolocation calls of bats are energetically expensive, but their cost can be reduced by synchronizing the exhalations needed to vocalize to wingbeats. Here, we use sound-and-movement recording tags to investigate how wild bats balance efficient sound production with information needs during foraging and navigation. We show that wild bats prioritize energy efficiency over sensory flow when periodic snapshots of the acoustic scene are sufficient during travel and search. Rapid calls during tracking and interception of close prey are decoupled from the wingbeat but are weaker and comprise <2% of all calls during a night of hunting. The limited use of fast sonar sampling provides bats with high information update rates during critical hunting moments but adds little to their overall costs of sound production despite the inefficiency of decoupling calls from wingbeats.
INTRODUCTION
Animals balance a trade-off between acquiring sufficient sensory information from their surroundings and the costs of doing so (Laughlin, 2001). For most animals, this cost is primarily due to the neural processing of passive sensory signals and high maintenance costs of sensory and nervous systems (Laughlin, 2001). However, for active sensing animals, the cost of sensory acquisition is also influenced by the production and implementation of the energy needed to investigate their environment (Nelson and MacIver, 2006). For example, electric fish have converted a large part of their swimming muscles to electrocytes and tilt their body when capturing prey to improve their active sensing performance at the cost of a reduction in swimming efficiency (MacIver et al., 2010). Toothed whales use pneumatic sound production in their nasal complex to generate powerful clicks with very small air volumes that must be recycled after a series of clicks. Toothed whales must therefore choose between emitting numerous and weak or fewer and loud clicks per recycling (Foskolos et al., 2019), and all but the sperm whale cannot breathe and click at the same time (Wahlberg et al., 2005). To the contrary, echolocating bats must breathe to vocalize (Speakman and Racey, 1991). Like most other terrestrial mammals that synchronize breaths with locomotory strides (Bramble, 1989), bats generally produce echolocation calls on exhalations that are synchronized with each upstroke of their wingbeat cycle (Koblitz et al., 2010;Suthers et al., 1972) to make an otherwise expensive sound production much cheaper (Lancaster et al., 1995;Speakman and Racey, 1991). The maximum vocal output is achieved at the top of the upstroke when abdominal muscle force to produce the downstroke is likely maximal (Koblitz et al., 2010). 
Despite this coupling, a recent laboratory study measured strongly increasing metabolic costs for the production of calls with source levels above 110 dB root mean square (RMS) at 0.1 m (Currie et al., 2020). Sound production at other times in the wingbeat cycle leads to lower call levels and presumably to less efficient sound production (Koblitz et al., 2010). However, depending on the behavioral task, bats can break the tight relationship between sound emission and wingbeats by emitting calls throughout the entire wingbeat cycle as observed in the laboratory (Lancaster et al., 1995;Moss et al., 2006) and field (Kalko and Schnitzler, 1989). In one captive study, Eptesicus fuscus emitted calls throughout the entire wingbeat cycle, leading to the suggestion that the bats' vocal control could override the wingbeat cycle during the buzz, but no direct quantification of the relationship was established (Moss et al., 2006). Thus, although the decoupling between wingbeats and buzz calls is widely assumed to take place (Kalko, 1994;Kalko and Schnitzler, 1989;Koblitz et al., 2010;Lancaster et al., 1995;Suthers et al., 1972), the precise relationship between buzz call timings, levels, and the wingbeat phase remains to be understood and quantified. Moreover, bats may echolocate differently in the wild as compared to controlled laboratory settings. For example, bats can call up to 20 dB louder in the wild and yet must navigate and catch food in complex environments likely calling for a more adaptive relationship between wingbeat, respiration, and biosonar sampling. So far, little is known about when and how often echolocating bats dispense with cheap sound production to improve sensory flow over a range of echolocation behaviors in the wild.
In the wild, many species of echolocating bats fly long distances every night to their foraging habitats (spanning 9.8 to 53.5 km for five different species (Egert-Berg et al., 2018)), where they spend additional time on the wing to catch prey. To meet the energy requirements of such long flights, bats must strive for a low cost of transport and at the same time high capture success rates. Indeed, wild Pipistrellus kuhlii adjust their flight speed to the task so as to balance energetic costs. They fly faster when commuting (i.e. closer to the speed required for minimizing energy expenditure per distance traveled) and slower during hunting (i.e. closer to the speed required for minimizing energy expenditure per time spent flying). This indicates that bats tailor their flight speed in a context-dependent manner to optimize energy efficiency (Grodzinski et al., 2009). Sensory sampling studies from the wild show that bats can emit calls up to 220 times per second during buzzing to successfully capture aerial prey (Ratcliffe et al., 2013). Since sensory sampling must be tied to wingbeats to be efficiently produced, sampling faster might result in either less efficient flight (if wingbeats are adjusted to sensory sampling) or less efficient sound production (if sound production is decoupled from wingbeats) (Jones, 1994, 1999; Waters and Wong, 2007). Here, we investigate how bats resolve that trade-off between biosonar information flow and energetic efficiency of flight and sound production during natural foraging behaviors. We hypothesize that wild bats have distinct sensory-motor combinations of wingbeat, call rate, and call intensity adapted to the different behavioral tasks of either aerial capture or commuting flight. To test this, we used high-resolution biologging tags to measure biosonar source levels and sampling rates as a function of wingbeat cycle in greater mouse-eared bats as they hunt and orient in the wild during nightlong foraging trips.
Sensory update rates are strictly coupled to wingbeat only when commuting and searching for prey

To quantify the coupling between sensory sampling and biomechanics, we equipped ten female greater mouse-eared bats (M. myotis) with sound and motion recording tags (Stidsholt et al., 2018) during one night of activity. The tagged bats commuted to foraging sites and captured on average 48 (SD 35) insects on the wing as well as various insects on the ground during one night of foraging (Table S1). We first investigated the relationship between repetition rates and wingbeat frequencies when commuting and hunting. When commuting or searching for prey, the bats employ a stereotyped flight gait with wingbeat frequencies of around 7 Hz (~140-160 ms wingbeat period; Figures 1 and 2B). Bats of this size using wingbeat frequencies of ~7 strokes per second (Figures 1C and 2B) commute at an average flight speed of 7 m/s (calculated using a weight of 34 g, derived from a bat weight of 30 g and a tag weight of 4 g (Von Busse et al., 2013)). A flight speed of 7 m/s for this size bat likely minimizes the cost of transport during commute (Rayner, 1987). In this behavioral mode, the call intervals are closely tied to the wingbeats, with calls emitted on either every, every other, or every third wingbeat (Figure 2A, green). Sensory sampling during commute is therefore comparatively slow and strictly coupled to the wingbeat cycle of the animal.
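The interval-to-wingbeat bookkeeping above can be made concrete by binning each inter-call interval to the nearest whole number of wingbeat periods. In this sketch the ~7 Hz wingbeat rate comes from the text, while the 25% classification tolerance is our assumption for illustration.

```python
# At a ~7 Hz wingbeat (~143 ms period), commuting call intervals cluster near
# one, two, or three wingbeat periods (~150, ~300, ~450 ms).
WINGBEAT_PERIOD_MS = 1000.0 / 7.0  # ~143 ms

def wingbeats_per_call(interval_ms, tol=0.25):
    """Nearest whole number of wingbeat periods spanned by a call interval,
    or None if the interval is far from any integer multiple (e.g. buzz calls).
    The tolerance tol is an assumption, not a value from the paper."""
    n = round(interval_ms / WINGBEAT_PERIOD_MS)
    if n >= 1 and abs(interval_ms - n * WINGBEAT_PERIOD_MS) <= tol * WINGBEAT_PERIOD_MS:
        return n
    return None

print([wingbeats_per_call(i) for i in (150, 300, 450, 10)])
# -> [1, 2, 3, None]: 150/300/450 ms map to every 1st/2nd/3rd wingbeat,
#    while a 10 ms buzz interval is uncoupled from the wingbeat cycle
```

Applied to a night of tag data, such a classifier would separate wingbeat-locked commute/search calls from the decoupled approach and buzz calls discussed below.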
While call emissions are coupled to wingbeats in a one-to-one fashion when searching for prey (Figures 1B, 2A, and 2B) (Schnitzler and Kalko, 2001), this coupling breaks down during the course of prey capture. Upon prey detection, the bats transition into an approach phase with an increased call rate (Figure 2C, purple). This increase is partly mediated by an increase in the wingbeat rate (Figures 1B, 1C, and 2C, pink), reflecting the kinematic demand to orient toward the prey, but the higher call rate is mainly achieved by an increase in the number of calls emitted per wingbeat (Figures 1B and 2D, purple). When buzzing during the final stage of prey capture, the bats emit numerous weak calls with short call intervals, leading to on average 11 calls emitted per wingbeat cycle (Figure 2D, purple). The average buzz duration is ~100 ms (Figure S1), which is similar to the duration of the last wingbeat (Figure S1), meaning that bats extend their buzz call emissions throughout an entire wingbeat cycle. This observation provides the first quantitative field validation of earlier laboratory (Lancaster et al., 1995; Moss et al., 2006) and field (Kalko, 1994) studies suggesting that bats transition from full wingbeat sensorimotor coupling (search and commute) to partial (approach) or complete (buzz) uncoupling during hunting (Moss et al., 2006), thereby achieving a 20-fold increase in the rate of sensory information across these stages of echo-guided prey capture. We next investigated how these varying sampling rates affect the sensory update rates per distance traveled when bats fly at fast speeds in the wild. To compare the relationship between sensory flow and speed across echolocating animals varying in size, speed of locomotion, and maneuverability, we calculated the sensory update rate per distance and per body length traveled for bats and toothed whales. During search, the tight sensorimotor coupling in bats results in at most one sensory update per meter flown.
This is comparable to the sensory update rate per distance traveled of searching sperm whales (2 clicks/s at a 2 m/s swimming speed (Madsen et al., 2002); Table 1): a predator more than two orders of magnitude longer and six orders of magnitude heavier. This is a surprise, as small animals should use higher sensory sampling because they perceive temporal changes on finer time scales and have a higher maneuverability (Healy et al., 2013) compared to large animals traveling at the same speed. To account for these differences across animals varying dramatically in size, we compared the sensory update rate of bats and toothed whales per body length traveled. When accounting for body length using a size-specific sampling rate, bats receive just one sensory update per ten body lengths traveled while searching (Table 1). This is an extremely low sensory update rate for a small animal flying at a high speed (Healy et al., 2013) (Table 1). In comparison, echolocating toothed whales use a >200 times higher size-specific sampling rate when searching than bats (Table 1). Thus, the cheap sound production system of toothed whales, uncoupled from other biomechanical processes, supports much higher redundancy in sensory scenes relative to their maneuverability.

Figure 1. Example of the coupling and de-coupling between echolocation calls and movement during searching for, and capturing of, an aerial insect. (A) Call output levels (left-hand axis: measured in energy flux density; right-hand axis: RMS sound pressure level) vary over time and strongly decrease toward the capture. Colors indicate the number of calls emitted per wingbeat (wb) cycle. (B) Wingbeats generate sinusoidal acceleration signals that are logged by the tag (gray). Call emissions are initially coupled one to one to the wingbeat cycle (blue) in the search phase but increase to 2-3 calls per wingbeat (light blue) in the approach phase. In the buzz (buzz I, yellow, and buzz II, red), more than eight calls are emitted per wingbeat. (C) The instantaneous wingbeat frequency and amplitude during the course of the capture. Two seconds before the capture, the wingbeat frequency increases dramatically from 6 to 13 Hz, presumably marking the adjustments in flight behavior after prey detection. After prey capture, the bat transitions to a wingbeat rate of 9 Hz while chewing. The spectrogram was produced with a fast Fourier transform window length of 128, an overlap of 100 samples at an accelerometer sampling rate of 100 Hz, and a dynamic range of 30 m/s².

iScience 24, 102896, August 20, 2021
Assuming that perceptual time constants are similar between these species, it follows that bats rely on very sparse sensory inputs to detect and identify prey and to guide motor patterns and decision-making when commuting. We posit that this sparse sensory input might be the result of the (bio)physical constraints of a fast-flying echolocator in air that must couple call rates to a relatively low wingbeat rate to minimize the energy expenditure of echolocating. Despite this sparse sensory input and the very limited range of ultrasonic echolocation in air compared to ultrasound in water (Madsen and Surlykke, 2013), their powerful calls enable the bats to still sample the same volume of air multiple times with successive calls (Stidsholt et al., 2021), apparently providing sufficient sensory information to navigate and avoid obstacles. In concert with acute spatial memory in known habitats (Genzel et al., 2018; Ratcliffe et al., 2005), this low information redundancy appears to be sufficient for routine navigation. However, such slow sensory sampling for searching bats might result in lower prey detection rates and more reactive sensorimotor operation in comparison to toothed whales (Hein and McKinley, 2013). If so, we speculate that bats may compensate by capturing prey with high success rates (Stidsholt et al., 2021) or by extracting more information from each sensory input than toothed whales due to the higher time-bandwidth product of their echolocation calls (Woodward, 1953). Such low sampling rates coupled to wingbeats may be speculated to explain the evolution of their complex calls and high-level auditory processing that potentially increase information extraction for each call-echo pair (Corcoran and Moss, 2017).
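The size-specific sampling comparison above reduces to simple arithmetic: updates per body length traveled equal call rate × body length / speed. The call rates and speeds below are taken from the text (one call per wingbeat at ~7 Hz and 7 m/s for the searching bat; 2 clicks/s at 2 m/s for the sperm whale), while the body lengths are our rough illustrative assumptions, not values reported in the paper.

```python
def updates_per_body_length(rate_hz, speed_m_s, body_length_m):
    """Sensory updates per body length traveled = rate * length / speed."""
    return rate_hz * body_length_m / speed_m_s

bat = updates_per_body_length(7.0, 7.0, 0.08)    # ~8 cm body length (assumed)
whale = updates_per_body_length(2.0, 2.0, 11.0)  # ~11 m body length (assumed)

print(round(bat, 2))   # ~0.08: roughly one update per ten body lengths
print(round(whale, 1)) # ~11: vastly more updates per body length traveled
```

Under these assumed body lengths the whale samples more than a hundred times faster per body length, consistent in direction with the >200-fold difference reported in Table 1.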
During the final stage of capture, the tagged bats combine faster sampling with a slower flight speed, resulting in a 70-fold increase in size-specific sampling rate in comparison to the search phase. This suggests that high temporal update rates may be necessary for capturing aerial prey in a three-dimensional space, as has recently been proposed in different taxa of visually hunting predators (dragonflies, flies, small birds (Boström et al., 2017)). The high size-specific update rates in toothed whales during commute and capture may reflect that whales, due to their cheap sound production (Foskolos et al., 2019), might over-sample their surroundings, thereby supporting better discrimination of their prey and allowing faster and more precise guidance of their less maneuverable bodies compared to the agile aerial hunters (Madsen and Surlykke, 2013). Due to the large variations in sensory update rates per body length traveled in commuting and hunting bats, we next investigated how the emitted energy of calls, and thereby detection range, varied with these different sampling strategies.
Decoupling calls from wingbeat phase allows faster calling but at weaker levels

The energy needed to produce sound in laryngeal-vocalizing bats is delivered by airflow (Suthers and Fattu, 1973). We therefore tested the hypothesis that bats can produce a maximum amount of sound energy per wingbeat that is constrained by the kinetic energy in the exhaled airflow. This energy can in principle be used to produce either one loud or several weaker calls. It therefore follows from this hypothesis that the summed call energy flux density (EFD, i.e., taking the number of calls, call levels and their durations into account) per wingbeat should approach a constant. The highest call levels are emitted in commute (Figure 3, green) and search flight (Figure 3, blue), where bats emit a maximum of one call per wingbeat with call intervals of 100 ms or more. Summed call energy levels do not decrease significantly when up to two calls per wingbeat are emitted, which supports previous laboratory findings (Waters and Wong, 2007). However, the summed call levels decrease when bats emit more than two calls per wingbeat (call intervals below 70 ms, Figure 3, blue). This causes an up to 100-fold decrease (−20 dB) in the summed call energy of all calls per wingbeat (Figure 2D, pink) during the approach and buzz phases, despite the increasing number of calls emitted per wingbeat (Figure 2D, purple). The bats therefore appear to under-utilize the available energy when calling at high rates. The disproportionately low source levels of greater mouse-eared bats when emitting more than two calls per wingbeat may have several explanations. Bats may actively call faintly to reduce sensory volumes, thus simplifying their sensory scenes during prey tracking (Stidsholt et al., 2021), or to maintain echo levels in a dynamic range suited for their hearing system at close ranges where they call at higher rates (Denzinger and Schnitzler, 1998).
Alternatively, the spreading of the calls over a larger proportion of the wingbeat cycle in the buzz may make sound production less efficient, causing the 100-fold (−20 dB) reduction in summed energy output per wingbeat cycle (Figure 2D, pink). Irrespective of whether the reduction in call levels is driven by physical constraints or a need to simplify the acoustic scene to facilitate sensory processing, faster sampling during hunting unequivocally involves weaker echolocation calls spread over more of the wingbeat cycle.
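The decibel bookkeeping behind statements such as "a −20 dB (100-fold) reduction" can be sketched by summing call energies on a linear scale before converting back to dB. The call levels below are illustrative placeholders, not measured tag values.

```python
import math

def sum_db(levels_db):
    """Sum energies given in dB (e.g. energy flux density) and return the
    total in dB: convert each level to linear energy, add, convert back."""
    total = sum(10 ** (lvl / 10.0) for lvl in levels_db)
    return 10.0 * math.log10(total)

# Illustrative wingbeats: one loud search call vs. ten weak buzz calls.
search_wingbeat = sum_db([80.0])
buzz_wingbeat = sum_db([50.0] * 10)

print(round(search_wingbeat - buzz_wingbeat, 1))  # -> 20.0 (dB), a 100-fold drop
```

Note that ten calls each 30 dB weaker sum to only 20 dB less than the single loud call: multiplying the number of calls by ten recovers 10 dB, which is why the summed (rather than per-call) energy is the relevant quantity per wingbeat.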
To uncover how bats balance energy expenditure and sensory update rates during commuting and foraging, we next investigated where in the wingbeat cycle call emissions occur. If sound production efficiency is greatest at the end of the upstroke, call emissions would be expected to occur in the range of 160-180° with respect to a wingbeat cycle beginning at the start of the upstroke (0°). In keeping with that prediction, we find that the tagged bats emit most calls with a phase symmetrically centered around 176° of the wingbeat cycle (quartile range: 135-235°; Figures 4A and 4B, gray patch), corresponding to the last part of the upstroke to the beginning of the downstroke, where the wings are pointing upwards. During commute and search flights, calls are likewise emitted at the last part of the upstroke (Figure S2) and the first part of the downstroke and occur either every or every second wingbeat (Kalko and Schnitzler, 1989; Schnitzler et al., 1987) (Figures 4A and 4B, dark blue and black). When commuting or searching, bats only produce high call source levels (Figure 3), with mean source levels of 107 (quartile range: 93 to 115) dB re 20 µPa RMS. Most of their search calls are therefore emitted with source levels close to or below the 110 dB re 20 µPa RMS limit associated with an increase in metabolic rate for Nathusius's pipistrelle bats (Pipistrellus nathusii) (Currie et al., 2020). This indicates a strong sensory demand for intense calls that provide large sensory volumes, but by synchronizing these search and commute calls with upstrokes bats can likely produce them inexpensively, albeit at inherently low rates as dictated by their stable flight gaits.

Figure 3. Commuting flight (green) is characterized by intense call source levels (SLs) and call intervals of either ~150 ms (i.e., every wingbeat), ~300 ms (i.e., every second wingbeat), or ~450 ms (i.e., every third wingbeat). In aerial hawking mode (blue), the bats search for prey by using intense call SLs emitted every wingbeat (~150 ms call interval). As they transition into the approach phase, they reduce call interval by emitting two calls per wingbeat (~75 ms). The last part of the approach phase and the buzzes are characterized by SLs below 70 dB on an energy basis and short call intervals that are uncoupled from the wingbeat. The reduction of call source level takes place within a narrow range of call intervals between 60 and 70 ms. When the bats glean insects off the ground (purple), they emit calls with low SLs but with long call intervals (~200-500 ms). The bats do not produce loud calls with short intervals of less than ~50 ms, and no weak calls at call intervals around 150 ms, i.e., emitted every wingbeat. The black line to the right marks the call source-level distribution for all bats measured in RMS. The call source level is quantified as energy flux density (left-hand axis) and as RMS (right-hand axis), here approximated by adding 25 dB (corresponding to a fixed 3 ms call duration) to the call levels in EFD to facilitate comparison to the literature. (N = 10 bats, one full night of foraging per bat, 6 × 10⁵ calls.)
When transitioning into the approach phase with multiple calls per wingbeat, call emissions occupy a larger fraction of the wingbeat cycle and therefore extend beyond the upstroke into the end of the downstroke (Figure 4, light blue; Figure S3) (Kalko and Schnitzler, 1989; Koblitz et al., 2010). These calls are timed to the wingbeat cycle in a similar fashion as the dyads and triplets emitted during a landing exercise in Eptesicus fuscus (Koblitz et al., 2010), highlighting a stereotyped pattern across different species of bats. The timing of buzz I and II calls is shifted even further from the peak, being mainly timed to the first and middle part of the downstroke (Figure 4B, yellow and red). Buzz calls thus are regularly emitted after the upstroke (Figure 4B, light blue). In the buzz, bats must therefore extend the exhalation throughout the downstroke to continuously produce calls with repetition rates above 160 calls/second. Bats therefore actively choose to override the wingbeat cycle when fast streaming of echo information is needed, by decoupling sound emissions and breathing from the upstroke. Interestingly, the decoupling is still not complete (see the clear peak of buzz calls at ~320°), which likely reflects a stereotyped motor pattern just before prey capture.
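One standard way to assign each call a wingbeat phase from tag data is to take the analytic-signal (Hilbert) phase of the wingbeat-band accelerometer trace at the call emission times. The paper's exact pipeline is not specified in this excerpt, so the sketch below is a plausible reconstruction; the 100 Hz sampling rate is from the tag description, while the synthetic 7 Hz signal and the omission of band-pass filtering are simplifying assumptions.

```python
import numpy as np
from scipy.signal import hilbert

FS = 100.0  # accelerometer sampling rate (Hz), as stated for the tags

def call_phases_deg(accel, call_times_s, fs=FS):
    """Instantaneous wingbeat phase (0-360 deg) of the accelerometer signal
    at each call emission time, via the analytic-signal angle. Mapping a
    given angle to a kinematic landmark (e.g. start of upstroke) would need
    calibration against known wing kinematics."""
    analytic = hilbert(accel - np.mean(accel))
    phase = np.mod(np.degrees(np.angle(analytic)), 360.0)
    idx = np.clip((np.asarray(call_times_s) * fs).astype(int), 0, len(accel) - 1)
    return phase[idx]

# Synthetic 7 Hz "wingbeat" acceleration signal for demonstration.
t = np.arange(0, 2.0, 1.0 / FS)
accel = np.sin(2 * np.pi * 7.0 * t)
print(call_phases_deg(accel, [0.5, 1.0]).shape)  # one phase estimate per call
```

As the Figure 4 legend notes, some spread in apparent production phase is expected from estimation noise when inferring wingbeat angle from acceleration, so phase estimates near buzz onset should be interpreted with that uncertainty in mind.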
Costs of sound production may have put an evolutionary premium on low biosonar sampling rates in bats
The need for fast sensory sampling during the approach and buzz phases results in calls being emitted well outside of the optimal wingbeat phase, implying reduced sound production efficiency for these calls. However, the energy per buzz call is on average 1000 times lower (−30 dB) than search call energy (Figure 3, blue boxes), and buzz calls comprise <2% (Figure 4A, red and yellow) of the total emitted calls throughout a night of foraging, suggesting small absolute costs of such decoupling. However, if we use a less conservative estimate by including all approach calls emitted in a sub-optimal wingbeat phase (<90° and >270°), these call emissions comprise 13% of all calls and have a median call level 12 dB louder than the buzz calls, indicating that wingbeat-decoupled sound emissions during hunting may incur added energetic costs to the bats. While such energetic implications perhaps are specific to the species and study conditions, we consider it parsimonious that they are representative for other frequency-modulated, insectivorous bats. Future studies should address how the relationship between call rate, levels, and wingbeat phase varies in bats feeding exclusively on aerial prey or in bats using longer-duration calls such as constant frequency (CF) calls. These CF bats would be expected to break the coupling between wingbeat and call emissions more frequently, possibly incurring larger energetic costs.
In contrast to the low relative number of buzz calls in our tagged bats, up to 75% of all emitted clicks in echolocating beaked whales are buzz clicks. However, buzz clicks in toothed whales are produced by a pneumatic sound production system that likely has a very low energetic cost (Foskolos et al., 2019), providing cheap sensory sampling despite extremely high sampling rates. Thus, for bats, the frugal use of short epochs of fast but weak sonar sampling decoupled from wingbeats allows them to achieve high information update rates to optimize auditory streaming of their complex environments during critical hunting moments at little additional cost to their overall costs of sound production. Conversely, it may be speculated that the bats' low sampling rates during commute and prey search have been selected for because of the high energetic cost if these calls were uncoupled from the wingbeat (Speakman and Racey, 1991). The much lower biosonar sampling rates per traveled body length in bats compared to toothed whales have previously been ascribed to the slower sound speed in air (Madsen and Surlykke, 2013). Our data, in contrast, support the interpretation that wingbeats restricting cheap and powerful vocalizations to specific phases of the wingbeat cycle have been a major driver underlying the slow sensory sampling rates of bats (Jones, 1999). We speculate that these biomechanical constraints have put an evolutionary premium on complex echolocation signals (Woodward, 1953) and movement patterns (Hedenström and Christoffer Johansson, 2015) in bats to make the most of the infrequent sampling to maximize echo information while avoiding obstacles. The converse is true for toothed whales that have very cheap click production (Noren et al., 2017) decoupled from both locomotion and breathing (Foskolos et al., 2019), allowing them to sample orders of magnitude faster per body length traveled with simple biosonar signals.
Limitations of the study
This study did not include measurements of energy consumption of the wild bats due to methodological limitations. This would be crucial in future studies to address the direct energetic requirements and costs of sensory acquisition in wild bats.
OPEN ACCESS
iScience 24, 102896, August 20, 2021

Figure 4. Timing of intense calls is synchronized to wingbeat phase
The timing of the calls relative to the wingbeat phase is plotted in actual numbers (A) to illustrate the true proportions of calls and in normalized values (B) to visualize the coupling to the wingbeat phase. Calls are strictly coupled to wingbeat phase when commuting and more loosely coupled when foraging. Calls are divided into groups based on their call intervals: all calls from all tagged individuals (light gray patch); calls with call intervals between 200 and 400 ms (1 call per 2 wingbeats, black), between 100 and 200 ms (1 call per wingbeat, dark blue), between 14 and 100 ms (above 1 call per wingbeat, light blue), between 8 and 14 ms (buzz I, yellow), and between 5 and 7 ms (buzz II, red). Upstroke is defined from 0° to 180°. (N = 10 bats, one full night of foraging per bat, 6 × 10^5 calls.) Some of the spread in apparent production phase may be due to estimation noise in inferring wingbeat angle from the measured acceleration signals.
Lead contact
Further information and requests for resources should be directed to and will be fulfilled by the lead contact, Laura Stidsholt (laura.stidsholt@bio.au.dk).
Materials availability
This study did not generate new unique reagents.
Data and code availability
Data generated in this study have been deposited at Mendeley Data (https://doi.org/10.17632/9zfncc3t8j.1) and are publicly available.
Any additional information required to reanalyze the data reported in this paper is available from the lead contact upon request.
Experimental setup
Bats were caught with a harp trap at Orlova Chuka cave, close to Ruse, NE Bulgaria, in the early mornings as they returned to the roost. The bats were kept at the Siemers Bat Research Station in Tabachka to measure forearm lengths, CM3 and body weights (Table S1). Bats weighing above 28 grams were tagged and released the following night between 10 and 11 p.m. at a field 8 km from the roost (decimal degrees: 43.6220, 25.8649). The tags were wrapped in balloon rubber for protection and glued to the fur on the back between the shoulders with skin-bond latex glue (Ostobond, Montreal, Canada). The microphone on the tag was located at the center of the body, approximately 10 cm behind the head of the bat. The bats spent 2 to 14 days equipped with the tags until they were recaptured at the cave or until the tags detached from the bats and fell to the ground below the colony. Upon recapture, the bats were weighed and checked for any sign of discomfort from the tagging before they were released back to the colony.
On-board tag and its effect on the bats
The acoustic tag used for these studies recorded audio with an ultrasonic microphone (FG-23329, Knowles Electronics, Itasca, IL, USA) and sampled the bat's behavior with synchronized tri-axial accelerometers and magnetometers (Stidsholt et al., 2018). The audio was recorded at a sample rate of 187.5 kHz (16-bit resolution) and with a clip level of 121 dB re 20 µPa. The microphone output was filtered with a one-pole, 10 kHz high-pass filter and an 80 kHz anti-aliasing filter before sampling. The accelerometers sampled at 1000 Hz (16-bit resolution, 8 g clip level) with a 250 Hz anti-alias filter, while the magnetometers sampled at 50 Hz. The tags, including radio transmitters, weighed 3.5–3.9 grams in the field (Table S1) and therefore corresponded to 11–14% of the body mass of the bats. The bats on average lost ~2.5 g during the tagging period, which is less than the average diurnal loss in body mass of 5.5 g during the one day spent at the station prior to release (Table S1). In addition, these bats caught prey up to several hundred times per night with high success rates (Table S2), suggesting that the tags did not have large effects on their ability to maneuver and catch prey, in line with previous studies (Egert-Berg et al., 2018; Stidsholt et al., 2021).
Definitions of behaviors
Commuting flights (used in Figures 2A and 2B) were identified as lasting for approx. 100 continuous seconds per tag recording, where the bats were flying without attempting to catch either aerial or ground prey and with wingbeat frequencies of approximately 7 Hz. Aerial foraging attempts (used in Figures 1 and 2) were manually identified if bats emitted a buzz while they were in flight to exclude landing buzzes.
In an average of ~80% of the aerial captures, chewing sounds were audible in the recordings. The aerial captures were divided into search, approach and buzz phases. Five participants manually marked the beginning of the approach phase based on call intervals and source levels plotted against time to prey capture. Whenever three of the five participants marked the transition into the approach phase within the same time interval (±120 ms), the mean value was used as the onset of the approach phase. The time of capture was defined as the emission time of the last buzz call. Buzz I was defined as calls with call intervals from 7 to 14 ms prior to buzz II; buzz II included calls with call intervals from 4 to 6 ms (Melcón et al., 2009).
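These interval bands can be expressed as a simple rule-based classifier. The sketch below (Python, illustrative only) uses the thresholds named above; note that in the study itself the onset of the approach phase was marked manually by observers rather than by a fixed interval threshold.

```python
def classify_call(interval_ms):
    """Assign a call to a phase from its preceding call interval (ms),
    using the interval bands given in the text (buzz II: 4-6 ms,
    buzz I: 7-14 ms, above 1 call per wingbeat below 100 ms)."""
    if 4 <= interval_ms <= 6:
        return "buzz II"
    if 7 <= interval_ms <= 14:
        return "buzz I"
    if interval_ms < 100:
        return "approach"  # simplified: the actual onset was marked manually
    return "search"

# Example: intervals shrink as the bat closes in on prey
intervals = [250, 120, 60, 20, 10, 5]
phases = [classify_call(i) for i in intervals]
```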
Data analysis
Tag data were adjusted for the frequency response of the microphone and high-pass filtered with a 4-pole, 10 kHz Butterworth high-pass filter to extract only the echolocation calls. Accelerometer data were low-pass filtered with a delay-free linear-phase finite impulse response (FIR) filter with a cut-off frequency of 30 Hz. All analyses were conducted using custom-written scripts (Matlab 2019a, The MathWorks, Natick, MA, USA). All calls in the recordings were automatically extracted and visually inspected for correct detections. Calls were not extracted in time periods where loud sounds from, e.g., wind or conspecific calls appeared in the recordings, to avoid false detections. As the calls of the bats were emitted in a directional beam in front of the bat, the tag-recorded call levels were lower than the actual on-axis call levels. The difference between the off-axis and on-axis call levels for this species was estimated at 14 dB (Stidsholt et al., in review). Source levels were therefore estimated by adding 14 dB to the call levels measured in energy flux density (dB re 20 µPa²s) over a -6 dB energy window from the tag recordings. This conversion does not take head movements into account, which may shade some calls, but not to an extent that affects the conclusions of the study.
The call source level was also quantified in RMS, approximated by adding 25 dB (corresponding to a fixed 3 ms call duration) to the call levels in EFD, to facilitate comparison with the literature.
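These two dB corrections (the +14 dB off-axis-to-on-axis adjustment, and the +25 dB EFD-to-RMS approximation, which follows from 10·log10(1/0.003 s) ≈ 25 dB) can be written as small helpers. A sketch, not the authors' analysis code:

```python
import math

OFF_AXIS_CORRECTION_DB = 14.0    # off- to on-axis difference estimated for this species
ASSUMED_CALL_DURATION_S = 0.003  # fixed 3 ms call duration used for the RMS approximation

def on_axis_efd(tag_level_efd_db):
    """On-axis source level in energy flux density (dB re 20 uPa^2 s)."""
    return tag_level_efd_db + OFF_AXIS_CORRECTION_DB

def efd_to_rms(sl_efd_db, duration_s=ASSUMED_CALL_DURATION_S):
    """Convert EFD to RMS source level: SL_rms = SL_efd + 10*log10(1/duration)."""
    return sl_efd_db + 10 * math.log10(1.0 / duration_s)
```

With the default 3 ms duration, `efd_to_rms` adds 10·log10(1/0.003) ≈ 25.2 dB, matching the fixed 25 dB used in the text.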
Relationship between flight speed and wingbeat frequency

Bullen and McKenzie (2002) measured wingbeat frequencies across different flight speeds for 23 bat species and found the relationship:

Wingbeat frequency = 5.54 - 3.068*log10(body mass) - 2.857*log10(flight speed) [S4]

Using a body mass of 34 grams (tag weight included) and a flight speed of 7 m/s, taken as the mean from GPS positions when bats return to their roosts (unpublished data), the wingbeat frequency in commuting flight would be 7.3 Hz. This is close to the wingbeat frequency we found for commuting bats of 6-7 Hz.
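As a sanity check, regression [S4] can be evaluated directly. The excerpt does not state the units of the regression; the sketch below assumes body mass in kilograms and flight speed in m/s, which yields a commuting wingbeat frequency close to the reported ~7 Hz:

```python
import math

def wingbeat_frequency(body_mass_kg, flight_speed_ms):
    """Bullen & McKenzie (2002) regression [S4]; units assumed: kg and m/s."""
    return (5.54
            - 3.068 * math.log10(body_mass_kg)
            - 2.857 * math.log10(flight_speed_ms))

# 34 g bat (tag included) commuting at 7 m/s
f_commute = wingbeat_frequency(0.034, 7.0)
```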
Data Aggregation Privacy in WSN
When WSNs are applied to monitoring, the privacy of the monitoring data collected from monitored objects becomes an important issue for the successful application of WSNs, which requires effective privacy protection for data aggregation. CPDA (Cluster-based Private Data Aggregation) suffers from energy costs that grow as the number of nodes within a cluster increases. In this paper, UCPDA (Upgrade Cluster-based Private Data Aggregation) is established to perform data aggregation while preserving data privacy. According to the required privacy level, the sensors in a cluster are partitioned into groups, and data preprocessing is performed only within the same group. Compared to CPDA, UCPDA has lower energy consumption for the same required privacy.
Introduction
When wireless sensor networks are applied in the field of monitoring, the privacy of monitoring data becomes a key issue for the successful application of WSNs [1], [2]. A WSN consists of a large number of sensors, which generally have limited power, computing, storage, sensing and communication capabilities. In order to save energy and communication bandwidth, sensors may need to collaborate in processing the fine-grained raw data collected within the network, reducing the amount of raw data sent. Data aggregation is one of the methods for such in-network processing [3][4][5][6][7]. Without privacy protection, sensitive information can easily be recovered from the data obtained by a WSN, even if the data is encrypted [8], [9]. In this paper, we create a data aggregation privacy model called UCPDA (Upgrade Cluster-based Private Data Aggregation). The model enables the sensor network to obtain accurate aggregation results while ensuring the privacy of sensor data. The remainder of this paper is organized as follows: Section II reviews directly related work; Section III presents the WSN data aggregation model; Section IV presents the data aggregation privacy model; Section V analyzes UCPDA performance; and Section VI concludes.
Direct Related Work
Wenbo He et al. built a cluster-based private data aggregation model (CPDA), which uses the algebraic properties of polynomials to calculate the desired aggregation values and ensures that no single node knows the data values of other nodes [9]. UCPDA is inspired by [9]. In UCPDA, nodes in the same cluster are divided into groups, and only nodes within the same group can exchange information. All preprocessed data in the same cluster are sent to the cluster head, which collects all the pre-processed data and then aggregates it.
Sensor Network and Data Aggregation Model
In the UCPDA model, the WSN is composed of multiple sensor nodes. Sensors are divided into three categories: base station, cluster heads and common sensor nodes. The base station has abundant energy and resources; cluster heads come next; common sensor nodes have the least. Each node can establish a communication link with the other nodes in its cluster through their different shared keys. Each cluster consists of a cluster head and a number of common sensor nodes. We define the data aggregation function as formula (1): y(t) = Σ_i d_i(t), where d_i(t) is the sensing data obtained by sensor node i at time t [9]. According to formula (1), the aggregate can be computed in a layered WSN. In UCPDA, only the sum aggregate function is considered, because other aggregate functions may be reduced to sum aggregates.
Upgrade Cluster-based Private Data Aggregation
In order to reduce traffic, UCPDA divides a cluster of sensor nodes into several in-network pre-processing groups according to the required level of privacy protection. At the lowest level of privacy protection, a pre-processing group may include 3, 4, or 5 nodes, because the cluster size modulo 3 may equal 0, 1, or 2. For the lowest-privacy scenario, each group includes 3 sensor nodes when the cluster size is divisible by 3; if the cluster has only 3 nodes, there is a single pretreatment group. As an example, a cluster of 7 nodes has two pretreatment groups: one is a 3-member group, A, B, C, holding private data a, b, c respectively; the other is a 4-member group, E, F, G, H, holding private data e, f, g, h respectively.
For the 3-member group, nodes A, B and C each compute assembled values from their private data a, b and c. For the 4-member group, each node first uses a public non-zero number as a seed, with all seeds different from each other; node E then computes its assembled value from its private data e and the seeds.
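The original masking formulas are garbled in this excerpt; CPDA masks values using the algebraic properties of polynomials with public seeds. As an illustration of the same in-group privacy goal (not the exact UCPDA computation), the sketch below uses a simpler additive-masking scheme: each node splits its private value into random shares, one per group member, so an aggregator can recover the exact group sum without learning any individual value.

```python
import random

MODULUS = 2**31 - 1  # public modulus, larger than any possible sum

def share(value, n, modulus=MODULUS):
    """Split a private value into n additive shares (mod a public modulus)."""
    shares = [random.randrange(modulus) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % modulus)
    return shares

def group_sum(private_values, modulus=MODULUS):
    """Each node distributes one share to every group member; each member then
    reports only the sum of the shares it holds. Adding the reports recovers
    the exact group sum without exposing any single node's input."""
    n = len(private_values)
    all_shares = [share(v, n, modulus) for v in private_values]
    # node j holds the j-th share from every member of the group
    reports = [sum(all_shares[i][j] for i in range(n)) % modulus for j in range(n)]
    return sum(reports) % modulus
```

Any n-1 colluding members still miss one share of every value, which is the intuition behind tying the privacy level to the group size in the next section.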
Performance Analysis
In this section, UCPDA and CPDA are compared in terms of privacy, efficiency, and accuracy.
Privacy Protecting
In UCPDA, a node can build links with other nodes using their shared keys, so the probability of privacy violation by overhearing is zero. Consequently, only collusion attacks are considered.
To prevent collusion, a group has at least 3 members, and privacy levels may be defined based on group size. The lowest privacy level corresponds to a 3-member group, the next to a 4-member group, and so on; the highest privacy level is reached when the group size equals the cluster size. By this definition, grouping is meaningful only when a cluster has more than 4 members.
The privacy protection level is determined by the smallest group in a cluster.

Figure 1. Privacy Protecting Performance.

Figure 1 compares UCPDA with CPDA in terms of privacy protection performance for a cluster of twenty nodes. UCPDA can provide different privacy protection levels as required, whereas CPDA only provides the highest privacy protection level, at a larger communication overhead. Figure 2 compares the efficiency of UCPDA and CPDA for a 20-member cluster. In UCPDA, a node only communicates with the other nodes in its group, apart from sending its pretreated data to the cluster head, whereas in CPDA every node communicates with every other node in the cluster, so CPDA's overhead is larger than UCPDA's. In Figure 2, owing to unbalanced grouping, irregularities in UCPDA's efficiency appear at privacy levels 5, 6, 7 and 8. At privacy level 5, the twenty nodes are partitioned into two groups, one with 7 members and the other with 13. Similarly, at privacy levels 6, 7 and 8, the twenty nodes are partitioned into two groups of 8 and 12 members, 9 and 11 members, and 10 and 10 members, respectively. The ratio of the overhead of the 13-member group to that of the 12-member group is larger than the ratio of the overhead of the 8-member group to that of the 7-member group. The remaining cases are similar.
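The overhead comparison can be made concrete under the assumption (ours, for illustration) that every pair of nodes in a pre-processing group exchanges one message in each direction, giving m·(m-1) messages for an m-member group:

```python
def exchange_messages(group_sizes):
    """Pairwise-exchange cost: an m-member group needs m*(m-1) messages."""
    return sum(m * (m - 1) for m in group_sizes)

# 20-node cluster: CPDA preprocesses in one 20-member group,
# UCPDA in smaller groups, e.g. the 7 + 13 split at privacy level 5.
cpda_cost = exchange_messages([20])      # 380 messages
ucpda_cost = exchange_messages([7, 13])  # 42 + 156 = 198 messages
```

Even the unbalanced 7 + 13 split roughly halves the in-cluster exchange traffic relative to CPDA, while the balanced 10 + 10 split would cost only 180 messages.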
Accuracy
Both UCPDA and CPDA yield accurate aggregate results.
Conclusions
WSNs are applied in the field of monitoring, where the data privacy of monitored objects has become an important issue. This paper has presented a data aggregation privacy protection model, UCPDA. In UCPDA, all nodes are partitioned into groups according to the required privacy level. Compared to CPDA, UCPDA has lower energy costs for the same required privacy.
Feasibility of subcutaneous antibiotics for palliative care patients
Introduction
The number of patients requiring palliative care is increasing every day due to population ageing and the increase in degenerative and chronic diseases [1]. These patients can be particularly vulnerable to infection due to immune dysfunction, especially after chemotherapy regimens in those with cancer, or as a result of multiple comorbid conditions and complex diseases [2][3][4][5]. One third of terminally ill patients develop infections during their final phase of care [4]. The percentage of patients using antibiotics in hospice care ranges from 36% to 84% [5][6]. Antibiotic therapy for the management of neutropenia and non-neutropenic haematologic diseases has been widely described and is regularly updated. However, there is little information regarding the management of antibiotic therapy in palliative care patients [4][5][6][7].
Using antibiotics in palliative care patients is complex. As palliative care focuses on relieving and preventing suffering and improving quality of life, treatment choices should be made based on symptom improvement and control. Antibiotic treatment might be considered part of a good palliative care plan in the presence of life-threatening infections, but the decision to treat can also lead to burdens due to diagnostic tests, adverse reactions to antibiotics or the use of intravenous lines [7]. Other aspects, such as patient and family wishes, the patient's overall condition and prognosis, and the potential for symptom control, must also be considered [3,[7][8].
In palliative care, the oral route of administration is advocated as the first choice for the treatment of symptoms [1], [6][7]. However, there are certain situations where this is impossible (gastric intolerance, swallowing disorders, persistent nausea and vomiting, intolerance to oral administration of opioids, malabsorption, extreme weakness, delirium, severe pain) and an alternative route is required [9]. In these situations, the intravenous, rectal, intramuscular, sublingual and transdermal routes are alternatives to oral administration, but they all have disadvantages. In the case of intravenous administration, the need for qualified personnel, difficulty of administration at home, frequent infections and other limitations such as reduced patient autonomy and high cost are the main drawbacks [10]. Rectal administration is a low-cost alternative, but only a few drugs may be given this way and none of them are antibiotics; moreover, absorption and bioavailability are variable and unpredictable [11]. The main downside of intramuscular administration is pain [12], and low muscle mass (cachexia) also restricts the usefulness of this route [13]. No antibiotics are available for sublingual administration [13,14]. Transdermal administration can also be a good alternative, but the delay (12-24 hours) before reaching steady-state plasma concentration makes it difficult to control symptoms in the first 72 hours; it also has long-lasting effects after withdrawal and high individual variability.
Between 53-70% of terminal cancer patients require an alternative route of drug administration [13,15,16]. Subcutaneous administration is an alternative in those situations where the oral, intravenous or intramuscular routes are not suitable in palliative care patients [16].
The subcutaneous route: advantages and disadvantages
The subcutaneous tissue is located below the dermis. The amount of subcutaneous tissue varies from person to person and decreases with disease progression. There are no significant barriers to absorption. Medications delivered subcutaneously easily enter the bloodstream by passing through the spaces between cells of the capillary wall, driven by a combination of perfusion, diffusion, hydrostatic pressure and osmotic pressure. There is therefore no first-pass metabolism by the liver, in contrast with the oral route [17].
The subcutaneous route of administration is widely used in palliative treatment (60% of palliative care patients will need the subcutaneous route) [16,17]. The subcutaneous route is a safe, low-cost and effective option for drug administration [17,18].
There are currently no firm recommendations on subcutaneous administration of antibiotics and few studies regarding their use in the treatment of infection in palliative care patients.
Method Search strategies and data sources
We searched various bibliographic databases: PubMed, the Cochrane Library, the Cumulative Index to Nursing and Allied Health Literature (CINAHL), EMBASE and Trip Database. Many of the studies found had methodological shortcomings such as small sample sizes, differences between comparison groups and lack of randomisation. In the end, we selected 10 articles on the use of subcutaneous antibiotics (5 in healthy volunteers and 5 in patients with infections). A descriptive review of the results of the 10 selected studies is provided below.
Use of subcutaneous antibiotics
As many antibiotics are approved for intramuscular administration, and a high percentage of intramuscular injections (85-95%) are actually administered into subcutaneous fat, the subcutaneous route can be proposed as an alternative in palliative care [19][20][21]. Most of the studies on subcutaneous administration of antibiotics are case reports, or their aim is to evaluate the pharmacokinetics of the drug administered subcutaneously, with neither efficacy nor safety determined [15]. A summary of the evidence found in the literature is provided in Tables 1-2.
Ceftriaxone
We found 5 studies on the subcutaneous administration of ceftriaxone. All of them had a small sample size, between 4 and 44 patients, and compared the pharmacokinetics of the drug administered subcutaneously vs. intravenously. In 1 study, the subcutaneous route was used in palliative patients when the patient refused intravenous administration or venous access was difficult [22]. The doses reported ranged from 0.5 g to 2 g, administered over a period of 10-12 minutes. The maximum number of days of subcutaneous treatment reported was 10.
All of the studies conclude that ceftriaxone can be administered subcutaneously to palliative care patients because of the similar plasma concentrations reported when comparing the pharmacokinetics of subcutaneous and intravenous administration. The area under the curve (AUC) was considered similar in all the studies [15,[22][23][24][25][26].
The studies report several local adverse effects, such as induration, bleeding or pain [15,[22][23]. Only 1 study reports a severe adverse effect: subcutaneous necrosis with slow healing [24].
Subcutaneous ceftriaxone administration is approved in France. The ceftriaxone monograph recommends a dose of 1 to 2 g per day dissolved in water for injection (WFI), 0.9% saline solution (SS), or 5% glucose solution (GS).

Initially, the search focused on papers on palliative care and subcutaneous drug administration. For this first search strategy, we used the following MeSH headings: injection subcutaneous, infusions subcutaneous, palliative care, terminally ill, as well as free-text words included in the article titles or abstracts. We selected the papers most relevant to our topic and reviewed their keywords and bibliography to find other articles of interest. Secondly, we performed another search for studies on subcutaneous administration of antibiotics, including those performed in volunteers and in critically ill patients. Here we included the following MeSH headings: amoxicillin, cefotaxime, ceftazidime, ceftriaxone, ciprofloxacin, clarithromycin, clindamycin, fluconazole, imipenem or cilastatin imipenem drug combination, levofloxacin, metronidazole, teicoplanin, tobramycin and anti-bacterial agents. We also performed a manual search of references of interest identified in the different studies.
Inclusion and exclusion criteria
Our search included English, Spanish and French-language literature. The search focused on studies performed in humans over the course of 12 years (2000-2012). All types of research design were considered: original articles, letters to the editor, conference papers, clinical practice guidelines, reviews and other studies on subcutaneous administration as an alternative route in palliative care.
We excluded studies on subcutaneous administration in children and studies in animals. We reviewed the bibliographies of all the publications found to look for other articles of interest.
Results
We found 55 relevant papers using the first search strategy. These included original articles, letters to the editor, case reports, drug mixture compatibility studies and pharmacokinetic studies. The data provided came from clinical trials and review articles in just a few cases.
The patients were terminally ill cancer patients, aged 17 to 84 years. Most of the studies were performed in Europe and the United States, conducted in hospitals, hospices or private homes. There were also several compatibility studies. Sample size ranged from 2 to 60 in studies on subcutaneous drug administration. Length of subcutaneous treatment was 4 to 21 days.
Most of the studies had shortcomings in their methods. Abbreviations: PK, pharmacokinetics; IV, intravenous; SC, subcutaneous; AUC, area under the curve; IM, intramuscular.
Cefepime
This antibiotic is approved for intravenous and intramuscular use [28]. A study of 10 healthy volunteers, of a mean age of 27 years, compared subcutaneous vs. intramuscular cefepime administration, and found similar plasma concentrations. Subcutaneous administration was well tolerated. Mild local side effects were reported (pain, swelling and erythema). The dose administered was 1 g diluted in 50 mL 5% GS over 30 minutes. In this study, patients were assessed for global acceptability of the technique and the mean value was "strongly agreeable" [15,20].
Tobramycin
In a cross-over study of 20 healthy volunteers, 80 mg tobramycin diluted in 50 mL 0.9% SS was administered subcutaneously over 20 minutes or intravenously over 30 minutes and similar AUCs were found [29].
Amikacin
In a case report of an 85-year-old patient with a urinary infection, 15 mg/kg/day amikacin was administered subcutaneously in combination with ampicillin. Skin necrosis was reported as an adverse effect [30].
In a comparative, non-randomised pharmacokinetic study with 5 healthy volunteers aged 20 to 45, 3 mg/kg/day amikacin was administered intravenously over 3 days, followed by 7.5 mg/kg/day intramuscular amikacin over 3 days, and finally 7.5 mg/kg/day subcutaneous amikacin over 3 days. The authors concluded that subcutaneous administration of amikacin has a longer Tmax (time to reach Cmax) than intravenous administration, and that amikacin bioavailability was 54%.
Subcutaneous administration of amikacin is approved in France [31]. The amikacin monograph recommends a dose of 15 mg/Kg/ day in patients with normal renal function. The 50mg intravenous presentation must be dissolved in 1 mL WFI before subcutaneous administration. Doses must be adjusted in patients with renal failure [32].
Ampicillin
In a study of 22 healthy volunteers, 1 g ampicillin diluted in 50 mL 0.9% SS was administered subcutaneously over 20 minutes and compared with the same solution administered intravenously over 30 minutes. A delay in the time it took to reach peak plasma concentration was reported but the AUC was similar [29].
Teicoplanin
A maintenance dose of 6 mg/Kg/day of intravenous teicoplanin was compared with the same dose administered subcutaneously [33]. The study reported a higher subcutaneous C max (the peak plasma concentration of a drug after administration) but it took longer to achieve compared with intravenous administration. The reported AUC was similar in both groups [33].
Ertapenem
In a pharmacokinetic study in 6 patients with infection, intravenous administration of 1 g ertapenem diluted in 50 mL 0.9% SS over 30 minutes was compared with subcutaneous administration of the same. Peaks were reduced with subcutaneous administration (Cmax,IV > Cmax,SC), and time to maximum concentration was delayed (Tmax,IV < Tmax,SC). However, the AUC was similar after both routes of administration (AUC(0-24h),SC / AUC(0-24h),IV = 0.99 ± 0.18), confirming complete bioavailability following the subcutaneous infusion.
Ertapenem antimicrobial activity is considered to be time dependent, so a peak reduction after subcutaneous administration may not have important consequences on efficacy. Therefore, this study suggests that subcutaneous ertapenem administration could be equivalent to the intravenous infusion [19].
Other studies on subcutaneous antibiotics
Subcutaneous administration of other antibiotics such as gentamicin, sisomicin or netilmicin has also been studied. Skin reactions have been reported after subcutaneous administration of all of these aminoglycosides [30]. Subcutaneous administration of thiamphenicol is approved in France [32].
Can the subcutaneous route be used to treat infection in palliative care patients?
The subcutaneous route is an alternative in palliative care patients when there are problems with venous access. It is important to take into account aspects such as the patient's general condition and prognosis, as well as their preferences and those of their relatives, before deciding how to treat an infection in a palliative care patient [15]. Three antibiotics have been approved for subcutaneous administration in France: ceftriaxone, amikacin and thiamphenicol [27,32,33].
The 2 major determinants of bacterial killing are antibiotic concentration and the time that the antibiotic remains on the bacterial binding sites. The area under the serum concentration curve (AUC) after a dose of antibiotic measures how high (concentration) and for how long (time) the antibiotic levels remain above the target MIC (the concentration of antibiotic necessary to inhibit bacterial growth) during any one dosing interval. Most of the pharmacokinetic studies comparing subcutaneous administration with intramuscular or intravenous administration have found that the subcutaneous route reduces Cmax; however, the AUC is similar to that of intravenous or intramuscular administration. Antibiotic effectiveness can be time-dependent or concentration-dependent. Time-dependent antibiotics, such as beta-lactams (penicillins, cephalosporins, carbapenems, monobactams) as well as macrolides and glycopeptides, are effective when their serum concentration exceeds the minimum inhibitory concentration (MIC) for the microorganism. In this case, the time that antibiotic serum concentrations remain above the MIC during the dosing interval (t>MIC) is the key to effectiveness; higher serum concentrations will not lead to higher eradication of microorganisms. Thus, the reduction of Cmax when antibiotics are administered subcutaneously may not influence effectiveness, because the AUCs are similar. Aminoglycosides (tobramycin, gentamicin, amikacin) are considered concentration-dependent antibiotics: higher concentrations mean higher effectiveness, so subcutaneous administration can reduce their effectiveness.
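The two PK/PD quantities discussed here, the AUC and the time above MIC, can be computed from sampled concentration-time data. The sketch below (illustrative; not taken from any of the reviewed studies) uses the trapezoidal rule and linear interpolation:

```python
def auc_trapezoid(times_h, conc):
    """Area under the concentration-time curve by the trapezoidal rule."""
    return sum((conc[i] + conc[i + 1]) / 2 * (times_h[i + 1] - times_h[i])
               for i in range(len(times_h) - 1))

def time_above_mic(times_h, conc, mic):
    """Hours during which concentration exceeds the MIC (linear interpolation
    across intervals where the curve crosses the MIC)."""
    total = 0.0
    for i in range(len(times_h) - 1):
        c0, c1 = conc[i], conc[i + 1]
        dt = times_h[i + 1] - times_h[i]
        if c0 > mic and c1 > mic:
            total += dt
        elif c0 > mic or c1 > mic:
            # fraction of the interval spent above the MIC
            total += dt * (max(c0, c1) - mic) / abs(c1 - c0)
    return total
```

For a time-dependent antibiotic, a route with a lower peak but similar AUC can still achieve the same t>MIC, which is the argument made in the paragraph above.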
Although our findings are interesting, there are some limitations to this review. There are few articles on subcutaneous antibiotic administration. Most of them are prospective, cross-over, short-term studies with small samples of healthy volunteers, and their objective is to determine the pharmacokinetics of the drugs. Few studies have been designed with the assessment of efficacy and safety as primary objectives. It is therefore very difficult to extrapolate these findings to larger populations.
More studies, with more robust designs, are needed to confirm the efficacy and safety of this alternative route of administration. However, research in patients near the end of life involves numerous ethical challenges: dying patients are very vulnerable, adequate informed consent may be difficult to obtain, balancing research and clinical roles is difficult, and the risks and benefits of palliative research are difficult to assess.
Conclusion
Patients requiring palliative care at the end of life may benefit from subcutaneous administration for the treatment of infection when the oral route is not possible or when venous access is difficult. Ceftriaxone, amikacin and thiamphenicol are approved in France for subcutaneous administration. Ceftriaxone, cefepime, ampicillin, ertapenem and teicoplanin are time-dependent antibiotics, so their effectiveness is not affected by the lower peak concentrations observed when a different route of administration is used; for the concentration-dependent aminoglycosides amikacin and tobramycin, the reduced peaks with subcutaneous administration should be kept in mind. Together, these 7 antibiotics can cover almost all infections caused by Gram-negative, Gram-positive, aerobic, anaerobic and extended-spectrum beta-lactamase (ESBL) microorganisms in palliative care patients. We can therefore conclude that palliative patients with infections can be treated with ceftriaxone, cefepime, ampicillin, amikacin, tobramycin, ertapenem and teicoplanin administered subcutaneously when appropriate off-label use authorisation has been obtained and a benefit assessment performed.
A Review of Prebiotics Against Salmonella in Poultry: Current and Future Potential for Microbiome Research Applications
Prebiotics are typically fermentable feed additives that can directly or indirectly support a healthy intestinal microbiota. Prebiotics have gained increasing attention in the poultry industry as wariness toward antibiotic use has grown in the face of foodborne pathogen drug resistance. Their potential as feed additives to improve growth, promote beneficial gastrointestinal microbiota, and reduce human-associated pathogens, has been well documented. However, their mechanisms remain relatively unknown. Prebiotics increasing short chain fatty acid (SCFA) production in the cecum have long since been considered a potential source for pathogen reduction. It has been previously concluded that prebiotics can improve the safety of poultry products by promoting the overall health and well-being of the bird as well as provide for an intestinal environment that is unfavorable for foodborne pathogens such as Salmonella. To better understand the precise benefit conferred by several prebiotics, “omic” technologies have been suggested and utilized. The data acquired from emerging technologies of microbiomics and metabolomics may be able to generate a more comprehensive detailed understanding of the microbiota and metabolome in the poultry gastrointestinal tract. This understanding, in turn, may allow for improved administration and optimization of prebiotics to prevent foodborne illness as well as elucidate unknown mechanisms of prebiotic actions. This review explores the use of prebiotics in poultry, their impact on gut Salmonella populations, and how utilization of next-generation technologies can elucidate the underlying mechanisms of prebiotics as feed additives.
INTRODUCTION
Salmonella can be spread through the fecal-oral route (1,2), and is a concern for pathogenic contamination of poultry meats and eggs used for human consumption. Previously this concern had been mitigated through the use of antibiotics, which also promoted animal growth (3). However, with the rise of multidrug-resistant bacteria (4-6), the food industry has been pursuing alternative control measures for pathogenic Salmonella contamination. These approaches include but are not limited to chemical-based interventions, such as organic acids and essential oils, or biological-based treatments, such as bacteriophage, probiotic, and prebiotic therapies.
The recent use of prebiotics has been well documented. The term "prebiotic" was first coined by Gibson and Roberfroid in 1995 and defined as "a nondigestible food ingredient that beneficially affects the host by selectively stimulating the growth and/or activity of one or a limited number of bacteria in the colon, and thus improves host health" (7). Gibson and Roberfroid (8) demonstrated that the intake of prebiotics could regulate specific gastrointestinal tract (GIT) microorganisms to alter the microbiome. Over the years, further findings have led to several suggested modifications of the definition such as the addition of the term "selectively fermentable" (9) or the term "nonviable" (10,11). More recently, an expert consensus from the International Scientific Association for Probiotics and Prebiotics (ISAPP) defined prebiotics as "a substrate that is selectively utilized by host microorganisms conferring a health benefit" (12).
Prebiotics have been used to influence the growth of reported beneficial bacteria in the GIT, such as Bacteroides and Bifidobacterium (13)(14)(15)(16). Van Loo et al. (17) detailed several natural sources of prebiotics including garlic, onions, and asparagus. Typically including fiber and oligosaccharides (18), prebiotics in chickens increase amylase production in the GIT and therefore improve the overall growth rate of broilers (16). They reduce colonization of Salmonella during hen molting (19). Some prebiotics have also influenced protection against Salmonella by providing binding sites for bacteria to be flushed out of the digestive tract (18). Numerous studies have also seen the reduction of Salmonella populations by increasing short chain fatty acids (SCFAs) concentrations (20)(21)(22) which can be accomplished through prebiotic administration (23,24).
Furthermore, several studies (25)(26)(27)(28)(29) investigated prebiotic effects on the GIT microbiota through 16S microbiome sequencing. By also noting changes in metabolite concentrations, or metabolomics, this approach may be able to correlate changes in the microbiome to changes in the concentrations of metabolites such as SCFAs and other, possibly unknown, metabolites that can stymie Salmonella growth. The scope of this paper is to provide an overview of the literature linking the use of prebiotics to the overall reduction in the number of foodborne Salmonella and the repression of virulence factors. This paper will not detail the other benefits of prebiotics in poultry, such as impact on growth performance or antioxidant capacity, as they are covered extensively in Dhama et al. (30,31), Yadav et al. (32), and other literature reviews. By investigating SCFA production, microbiomic and metabolomic technologies, and currently utilized prebiotics, notably oligosaccharides, this review attempts to elucidate novel avenues of research into the reduction of virulent pathogens via prebiotics, which may improve the safety of the poultry industry and improve overall public health by reducing the incidence and/or severity of poultry-acquired salmonellosis.
THE POULTRY GASTROINTESTINAL TRACT
The gastrointestinal tract of chickens is complex due to the bird's large energy requirements (33). The chicken GIT includes the crop, gizzard, duodenum, ileum, and cecum, which are microbiologically abundant, with over 900 documented bacterial species (34). Included in the upper segment of the GIT is the crop, which is used for fermentation, hydrolysis of starch to sugar, and food storage, and acts as an acid barrier with a pH of ∼4.5. The gizzard grinds food particles in a highly acidic environment (pH 2.6) (35)(36)(37)(38). While the mean retention time throughout the GIT is ∼6 h, feed can remain in the crop and gizzard for as little as 8 and 50 min, respectively (39). The crop contains numerous anaerobic bacteria attached to the epithelium, including Lactobacillus, which produce SCFAs and lactic acid (40,41). The continuous layer of Lactobacillus, enterococci, coliforms, and yeast promotes digestion of most carbohydrates, with the remainder digested in the ceca after passage through the lower GIT (37,42).
Lower in the GIT are the duodenum, ileum, and cecum. Digestive enzymes and bile from the pancreas and gallbladder are added to the duodenum to break down food further, allowing for better absorption into the bloodstream through the villi (43). This process is continued through the ileum in the lower small intestine (43). The small intestine is dominated by anaerobic bacteria (44), and contains Lactobacillus and Bifidobacterium species in high concentrations as well as Enterococcus faecium and Pediococcus spp. (35,45,46). However, despite the presence of these bacteria in the small intestine, the concentrations of bacteria in the ceca are reported to be the highest in the chicken GIT, at ∼10^11 bacteria/g (35,47,48).
The ceca are located where the small and large intestines meet, and while they serve no identifiable purpose for digestion in mammals, they are important in chickens for fermentation and overall animal health (33,35,43). Because poultry cecal microbiota can be cultured on arabinoxylan, it has been suggested that the cecum may be involved in the breakdown of grains (42). The cecum plays additional roles in water adsorption and urea recycling, although the full nutritional significance remains unclear (49,50). Despite its importance, an experiment involving ligation of the cecum showed that while nitrogen availability was disturbed by a cecectomy, the organ was not necessary for survival (51,52). From a food safety standpoint, the ceca are also of major significance because they are among the leading sites for Salmonella colonization, along with the crop (53)(54)(55).
Salmonella can be found in varying concentrations in all regions of the poultry GIT of challenged chickens (56,57). In Fanelli et al. (56), 1 day after the birds were challenged with Salmonella, the duodenum and the small intestines were examined, and 5-45% of the samples tested positive depending on the region viewed. However, cecal samples in this study were nearly 100% positive for Salmonella colonization (56). This trend continued throughout the 13-day trial. Additional studies found that, when challenged with a lower concentration, Salmonella was not recoverable from the duodenum and small intestine despite being isolated from the crop, because bacteria were often destroyed in passage through the acid lumen of the proventriculus and gizzard (58). While other studies have focused on the crop and even the gizzard as colonization sites of Salmonella, the ceca remain the most commonly investigated section of the GIT for Salmonella (39,55,58,59). This is likely because of the relatively high bacterial counts of up to 10^11 cells/g of digesta by day three post-hatch (35,60). Other reasons may include the ceca being the environment in the GIT most advantageous for Salmonella to colonize (56), and the fact that the ceca can be ruptured during processing. However, it should be noted that Hargis et al. (55) found that crops were 86-fold more likely to rupture than ceca during processing. Despite this focus on the ceca, with the potential for each organ's microbial composition to influence the next downstream, it is vital to understand the microbiota of each region of the avian GIT.
Stanley et al. (35) compiled data from several papers detailing the most prevalent microbial groups in each of the GIT regions. They found that while Lactobacillus was prominent, if not dominant, in all systems, a myriad of differences was reported, including Clostridiaceae and Enterococcus in the crop and gizzard, and that a majority of cecal bacteria were not culturable or described. However, these profiles can vary greatly, as it has been suggested that host genotype, sex, and age play an important role in determining microbial composition (61). Furthermore, a majority of the collected papers reported information using community-fingerprinting techniques such as temporal temperature gradient electrophoresis (TTGE) and terminal-restriction fragment length polymorphism (T-RFLP), as well as culture-based methods. These techniques provide useful information, such as the application of T-RFLP in Torok et al. (25), which helped identify the presence of over 600 bacterial species and 100 distinct genera in the GIT of chickens. However, each of these techniques exhibits significant issues. Community-fingerprinting techniques in general are considered only semiquantitative and are only capable of detecting taxa at abundances of >1% (61,62). Additionally, culture-dependent methods are particularly limited. For example, in the cecum, only 10-60% of bacterial strains have been cultured (63,64). Therefore, while these techniques have generated valuable information, to accurately detail the complex and minute changes to the microbiota under the effect of prebiotics, further investigation with more sensitive methodologies is needed. The changes, however, often depend on the type of prebiotic utilized.
COMMONLY USED PREBIOTICS
Prebiotic studies have focused largely on oligosaccharides such as mannanoligosaccharides (MOS), galactooligosaccharides (GOS), and fructooligosaccharides (FOS) including inulin (12,24,(65)(66)(67). Oligosaccharides are polymer chains of 3 to 10 simple sugars (Figure 1) (68). Oligosaccharides and fiber have been combined and amended with feed products to create commercially viable sources of prebiotics in the poultry industry, with a range of results. Illustrations of the modes of action of prebiotics within poultry can be found in Yadav et al. (32) and Pourabedin and Zhao (67).
Several commercial prebiotics have been studied and utilized, such as Biolex® MB40 and Leiber® ExCel (Leiber, Hafenstraße 24, Germany), which are brewer's yeast cell walls composed of MOS (27)(28)(29)69). These products were found to reduce Campylobacter concentrations and alter the microbiome, and MOS-based products are expected to reduce pathogens that utilize mannose-specific type 1 fimbriae, such as Salmonella (28,70). Furthermore, Lee et al. (71) evaluated the effect of these products against Salmonella in commercially raised broilers, and while a lower prevalence was noted, only 10 samples were utilized, and a challenge study was not performed. As another example, the commercialized yeast-fermentate product XPC (Diamond V, Cedar Rapids, IA) has reduced Salmonella in chickens and increased butyrate in the GIT (27,29,(72)(73)(74). Furthermore, during a Salmonella challenge experiment, the addition of XPC, which is comprised of 25% fiber, to chicken feed decreased the expression of the virulence factor hilA, which is a regulator and promoter within a pathogenicity island (SPI-1) (72,74). These findings imply that XPC may reduce Salmonella virulence and invasion.
While these effects are detectable, synergistic effects can also be created by combining probiotics and prebiotics to create synbiotics. Probiotic products such as All-Lac® have been used in conjunction with Bio-MOS® to alter the microbiome, whereas Fasttrack® (Fasttrack, Conklin, Kansas City MO) and PoultryStar® (PoultryStar, BIOMIN GmbH, Herzogenburg, Austria) contain FOS and have been shown to reduce Salmonella and improve feed conversion efficiency (65,(75)(76)(77). These products, along with numerous others, have been found to improve poultry GIT health, increase animal weight, and inhibit Salmonella and Campylobacter. As a consequence, because of the range of available prebiotic products, methodologies of application, and the yield of numerous and sometimes inconsistent results (24,78,79), it is vital to understand these prebiotics better. Moreover, it is essential to detail their currently elucidated or suggested mechanisms to further refine ways to improve poultry health and production practices. To capture the effects of the breadth of prebiotics available, several types of prebiotics and their impact on Salmonella in poultry will be discussed in this section.
Mannanoligosaccharides (Figure 1A) are found in the cell wall of numerous fungal species, including brewer's yeast (Saccharomyces cerevisiae) and Saccharomyces boulardii, as well as certain plants (66,67). Comprised of mannose oligomers linked via β-1,4 glycosidic bonds, MOS have been demonstrated to suppress enteric pathogens and enhance the poultry immune system (80). Broiler chickens do not possess enzymes to break down MOS; as such, it is suggested that bacteria in the lower GIT, such as the ceca, are responsible for their digestion (67). One particular advantage of MOS as a prebiotic is its stability as a pellet during steaming, which allows it to be easily added to feed (66). Studies have shown that Salmonella possessing type 1 fimbriae can be sensitive to the presence of MOS, which can disrupt attachment and adhesion to the intestinal lining by encouraging attachment to the mannose in the lumen (69,81). Mannanoligosaccharides have also been reported to improve overall gut health by increasing villi length and providing an adjuvant-like effect by acting as a microbial antigen (66,84,85). One study in particular exhibited a reduction in the cecal Salmonella population by day 10 in challenged chicks fed a diet containing 0.40% MOS (86). Stanley et al. (87) also demonstrated a one- to three-log reduction of cecal Salmonella counts in 21-day-old chicks when supplemented with 0.05% MOS and MgSO4. A meta-analysis, which was designed to increase power by combining results from multiple studies, was performed by Hooge (66), indicating that MOS addition to feed improved body weight, feed conversion ratios, and survivability. This meta-analysis listed seven selection criteria, including date of publication and age of bird, and consisted of 29 pen trials from separate studies that were analyzed using a paired t-test.
However, some discrepancies were noted in the ability of MOS to promote beneficial microorganisms (80), and the administered amount of the prebiotic was not standardized across studies.
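The paired analysis used in the Hooge (66) meta-analysis can be made concrete with a short sketch. The pen-trial body weights below are hypothetical (not data from the review), and the t statistic is computed by hand purely to illustrate how matched control/treatment pairs are compared:

```python
import math

def paired_t_test(control, treatment):
    """Paired t statistic for matched pen trials (two-tailed test)."""
    assert len(control) == len(treatment)
    diffs = [t - c for c, t in zip(control, treatment)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    # Sample variance of the pairwise differences (n - 1 denominator).
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    t_stat = mean_d / math.sqrt(var_d / n)
    return t_stat, n - 1  # t statistic and degrees of freedom

# Hypothetical final body weights (kg) from five matched pen trials.
control_bw = [2.41, 2.38, 2.45, 2.40, 2.43]
mos_bw     = [2.48, 2.44, 2.51, 2.47, 2.49]
t, df = paired_t_test(control_bw, mos_bw)
print(f"t = {t:.2f} with {df} degrees of freedom")
```

In practice the t statistic would be compared against a t distribution with n - 1 degrees of freedom (e.g. via scipy.stats.ttest_rel) to obtain a p-value; the pairing removes between-trial variation, which is what "increases power" in the meta-analysis.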
Fructooligosaccharides (Figure 1C) are naturally occurring, typically of plant origin, contain β-(2,1) linkages, and can be food ingredients, functional foods, and prebiotics (8,88). Due to the β-(2,1)-linkages, enzymatic degradation is difficult in the upper GIT, leading to primary breakdown occurring in the ceca (8,24,89). Fructooligosaccharides support the growth of Lactobacillus and Bifidobacterium, resulting in an increase in SCFAs and lactate, an enhancement of the immune system, and the reduction of Salmonella colonization (23,24,90,91). The elucidated mechanism of action for many of these benefits is that FOS is fermented by Lactobacillus and Bifidobacterium which increases SCFAs and lactate in the cecum resulting in lower Salmonella colonization (23,24). The ability to ferment FOS is present in most strains of Lactobacillus and Bifidobacterium (24,92,93). However, only 8 of 55 strains tested by Rossi et al. (94) were capable of using inulin, which is a long chain FOS derivative, as the sole carbon source.
Furthermore, it has been suggested that adverse consequences might exist with the implementation of FOS in poultry feed. Ten Bruggencate et al. (95) demonstrated in rats that a decrease in Salmonella resistance occurred due to an increase in intestinal permeability. Additionally, SCFAs may lead to an enhanced expression of Salmonella virulence genes despite reductions in colonization (20,96). However, inulin-amended diets have yielded middling results, with Rehman et al. (93) demonstrating that inulin supplementation did not significantly impact the microbial community of the chicken cecum and Ramnani et al. (97) showing no impact on SCFA production in human diets supplemented with inulin. The effectiveness of FOS and inulin is dependent on a number of factors, including the composition of the basal diet, degree of FOS polymerization, the presence of Bifidobacteria strains, host animal characteristics, and even host stress factors (91,98). FOS-amended diets in poultry studies have yielded inconclusive results; however, it has been demonstrated that FOS, when supplemented with probiotics, can produce consistently significant reductions in Salmonella (24,79). This potential synergism has led to its implementation in products such as PoultryStar™ that directly impact aspects of the GIT (76,99).

Galactooligosaccharides (Figure 1B) can be naturally found in human and cow milk, and consist of β-(1,6) and β-(1,4) linkages that avoid digestion in the upper GIT (100-103). Commercially, GOS can be prepared by hydrolyzing lactose from cow's milk, and commercial products often contain lactose and a myriad of GOS oligomers (104)(105)(106). For instance, Bimuno (Clasado Ltd) is composed of varying concentrations of lactose and di-, tri-, tetra-, and pentose oligomers of GOS (104,106,107). Bimuno reduced S. Typhimurium adhesion and invasion in vitro and in mouse ileal gut loops, but not when GOS was removed from the Bimuno mixture (107).
Despite these positive effects, no significant differences in Salmonella concentrations were found when poultry were provided feed amended with 1% GOS, although significant alterations to the cecal microbiome were observed (108).
Despite this contrast, and although GOS has not been as well studied in poultry as FOS and MOS (67), several publications have suggested some potential for GOS as a prebiotic in poultry. A bifidogenic effect has been observed, with increased counts of Bifidobacterium in the feces of birds fed 3 g of GOS per 25 kg of feed for 40 days (100). The addition of GOS to feed has also been shown to increase the Lactobacillus population in cecal contents (109), and when compared to xylooligosaccharides (XOS), FOS, and MOS, GOS significantly improved L. reuteri growth on minimal media (110). Besides promoting the growth of Bifidobacterium and Lactobacillus, GOS has demonstrated other potentially beneficial effects such as reducing heat stress in the jejunum, but not the ileum (111). GOS has been demonstrated to significantly alter the poultry transcriptome when injected in ovo compared to the addition of inulin and Lactococcus lactis (112), and also to improve cell-mediated immunity when in low concentrations (0.1%) (109).
Additionally, GOS has been utilized as part of a synbiotic in some studies. Synbiotics are defined as a combination of probiotics and prebiotics (113). When Bifidobacterium was added to poultry feed along with GOS, this synbiotic affected total anaerobic microbial populations in feces, increasing them from 9.71 to 10.26 log colony forming units per gram (CFU/g) (100). This addition also increased Lactobacillus and Bifidobacterium fecal counts by 0.53 and 1.32 log units, respectively (100). When injected in ovo, commercialized GOS and Lactococcus lactis elevated the body weight of broilers at the end of the rearing period (102,113). These data differed from Biggs et al. (114), which used only the prebiotic, and from Jung et al. (100) and Abiuso et al. (115), which found no change in body weight when GOS was administered in feed. A cursory examination suggests this variation may be due to differences in the basal diet and genetic variation of the chickens, but more in-depth studies must be performed to ascertain the reason.
Other prebiotics have also been investigated to varying degrees. The implementation of 2 g/kg of XOS increased Lactobacillus and acetate in the cecum and, after a 5-week treatment, significantly reduced cecal colonization and spleen translocation of S. Enteritidis (92,116). Approximately a one-log reduction of S. Enteritidis in the cecum was found by Pourabedin et al. (117) when XOS was implemented, although this was lower than the reduction observed with MOS (1.6 logs). Additionally, it was found that isomaltooligosaccharides (IMO) improved growth of Lactobacillus in vitro, exhibited a bifidogenic effect, and inhibited Salmonella in vitro (110,118,119). Thitaram et al. (120) found that diets supplemented with 1% IMO could reduce Salmonella by two logs and enhance growth during the first 3 weeks, as well as increase butyrate concentrations in the jejunum (121).
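The log reductions quoted throughout this section (e.g. ~1 log for XOS, ~2 logs for 1% IMO) are simply differences of log10-transformed CFU counts. A minimal sketch with hypothetical counts, included only to make the arithmetic explicit:

```python
import math

def log10_reduction(control_cfu_per_g, treated_cfu_per_g):
    """Log10 reduction of a pathogen relative to untreated controls."""
    return math.log10(control_cfu_per_g) - math.log10(treated_cfu_per_g)

# Hypothetical cecal Salmonella counts (CFU/g digesta), chosen to mirror
# the magnitudes reported in the text.
control = 1e6   # untreated birds
xos     = 1e5   # ~1-log reduction, as reported for XOS
imo     = 1e4   # ~2-log reduction, as reported for 1% IMO
print(log10_reduction(control, xos))  # ~1.0
print(log10_reduction(control, imo))  # ~2.0
```

Working on the log scale is standard for plate counts because bacterial loads span many orders of magnitude and plating error is roughly multiplicative.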
The effects of dietary fiber have also been investigated, and fiber has been suggested to possess prebiotic properties in poultry (10,122). Fiber, depending on the derivative, source, and concentration, can accelerate feed passage and can alter the weight of the organs of the poultry GIT in a way that is indicative of improved GIT functioning (122)(123)(124)(125). Organic acids, such as SCFAs, are a by-product of anaerobic fermentation of dietary fiber, suggesting the possibility of inhibiting Salmonella growth in the GIT (126). As a consequence, there is some discussion of whether fiber should be considered a prebiotic (10). In Japan, while the term prebiotic is not defined, fiber, along with oligosaccharides, is considered among "foods to modify the gastrointestinal conditions" and can be considered among "foods with specific health uses" (10,127). Dietary fiber does meet the definition of a prebiotic purported in Gibson et al. (12). However, Roberfroid (128) suggests the need for several additional criteria, such as resistance to gastrointestinal absorption, fermentation by intestinal microbiota, and selective stimulation of growth or activity of beneficial bacteria. Under this definition, neither fiber nor inulin matches the criteria for being a prebiotic, despite having some prebiotic effects (46,128). As such, regulatory agencies such as the FDA and the European Food Safety Authority (EFSA) do not currently consider fiber to be a prebiotic (10,129).
Regardless of their defined role from a regulatory consideration, there is an apparent variance in the effects these molecules have on the chicken GIT. Due to the complexity of some of these molecules such as fiber, and their effects, to elucidate their mechanisms on Salmonella reduction, the changes in the gut microbiota must be observed. To capture these alterations, microbiomic technologies can be employed.
MICROBIOMICS
With the advent of whole genome and 16S rRNA genomic sequencing, researchers have been able to more accurately quantify microbial population shifts and host responses to the addition of prebiotics (25). By sequencing portions of the highly conserved 16S rRNA gene, such as the V1-V3 or the V4 region, and comparing it to databases, such as the Greengenes database, accurate identification of the microbiome can be determined efficiently and at a relatively lower cost (130,131).
It should be noted that the rapid advancement of DNA sequencing technologies is continuously allowing for higher throughput at a lower cost (132,133), and this section will attempt to provide information that is as recent as possible. Currently, Illumina-based microbiome sequencing can provide Operational Taxonomic Unit (OTU) detection at very low abundance by sequencing short DNA strands of up to 300 bp. With the Illumina MiSeq benchtop sequencer (Illumina, San Diego, CA, USA), a three-day sequencing run can return 7.5 Gb from 15 million 300-base paired-end reads, yielding bulk data for small-scale projects (132). This efficiency is only increasing as technology allows for faster returns of larger datasets. Large-scale projects studying numerous samples can also use the Illumina HiSeq, which allows for parallel sequencing at a comparably lower cost (132). The Illumina HiSeq returns 1,500 Gb from 5 billion 150-base paired-end reads but is typically only considered for production-scale laboratory studies (132). Additionally, the Ion Torrent PGM system (Thermo Fisher Scientific, Waltham, MA, USA), which operates by detecting hydrogen ions released during DNA synthesis, is rapid and easily scalable (134)(135)(136). To analyze this ever-expanding capacity for bulk genomic data, bioinformatics programs are employed, such as Quantitative Insights Into Microbial Ecology (QIIME) and mothur (131,137). Despite several differences, such as the programming language utilized, both programs have been shown to compile genomic data and evaluate species richness and evenness with little statistical variation (131,(138)(139)(140)(141). Using these bioinformatic programs, data can be efficiently processed and changes in the GIT microbiome can be elucidated.
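The alpha-diversity summaries that pipelines such as QIIME and mothur report can be illustrated with a minimal sketch. The OTU table below is hypothetical, and the hand-rolled Shannon index is only one of the standard diversity metrics such pipelines compute:

```python
import math

def relative_abundance(counts):
    """Convert raw OTU counts to relative abundances summing to 1."""
    total = sum(counts.values())
    return {otu: n / total for otu, n in counts.items()}

def shannon_index(counts):
    """Shannon diversity H = -sum(p * ln p), a common alpha-diversity metric."""
    rel = relative_abundance(counts)
    return -sum(p * math.log(p) for p in rel.values() if p > 0)

# Hypothetical genus-level OTU counts from one cecal 16S rRNA sample.
sample = {"Lactobacillus": 600, "Faecalibacterium": 250,
          "Bacteroides": 100, "Salmonella": 50}
print(round(shannon_index(sample), 3))
```

Comparing such indices between treatment groups (e.g. FOS-fed vs. control birds) is how statements like "FOS increased species diversity" are typically quantified; real pipelines additionally rarefy or normalize counts before computing diversity.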
Investigative research into prebiotics greatly benefits from sensitive high-throughput technology that can quantitatively measure differences between testing conditions. Park et al. (26) utilized Illumina-based technology and the QIIME pipeline to assess changes in the cecal microbiota when subjected to the yeast-based prebiotics Biolex® MB40 and Leiber® ExCel. They found significant changes in concentrations of Campylobacteraceae, Faecalibacterium, and, on the whole, in the phyla Firmicutes and Proteobacteria (26). These data were supported by Rastall and Gibson (142) and Park et al. (28), which also found an increase in Faecalibacterium OTUs during prebiotic treatment and suggested this increase helped facilitate a healthy microbiome, as an increase in Faecalibacterium has been linked to health benefits in poultry. Additional investigations into prebiotics found that MOS implementation can significantly alter the bacterial community phylogenetically (143,144). Park et al. (28) also reported that FOS increased species diversity in pasture flock chickens, demonstrated the prominence of Firmicutes across all trials, and showed that Bacteroidetes decreased in birds fed diets amended with FOS and GOS. This study also investigated the use of fiber and found that it increased the presence of the butyrate-producing Fusobacterium (28).
However, these changes only represent broad-stroke differences in previously identified major taxa of importance. The aforementioned studies, as well as studies such as Pan (145), have generated not only general information about major taxa shifts but also seemingly negligible differences in the abundance and presence or absence of previously undetailed bacterial strains. While it is important to report changes in previously identified taxa of importance, Illumina sequencing allows for investigation into more nuanced changes or differences found in previously undescribed taxa. For instance, in Park et al. (26), several bacteria that could only be classified to the order Bacteroidales were present in chickens fed Biolex® MB40, but were not noted in the control group or in birds fed Leiber® ExCel. These unspecified species may play a potential role in the overall health of the GIT and may have previously gone undetected by culture and community-fingerprinting techniques. Some of these nuanced differences can be attributed to variation in individual chicken microbiomes, but, taken together, these data may yield vast and potentially vital information for understanding changes in the avian GIT incurred by prebiotics.
Currently, through analysis of clustered data, it appears the predominant driver of the poultry microbiota composition is host age (28). This deterministic variable was independent of treatments with feeds amended with 1 kg of FOS or plum fibers per ton and 2 kg of GOS per ton (28). While Original XPC™ was able to reduce Salmonella cecal populations in Park et al. (27), the microbiota was impacted more by the age of the bird even in the presence of a coccidiosis vaccine (27,29). These findings agree with previous assertions regarding the age of the poultry GIT, as it is reported that at birth the GIT is colonized by aerobic organisms followed by anaerobic microbial domination (146). Despite the strong influence of age and other uncontrollable variables such as gender (61), data still indicate that the microbiome can be shifted due to feed amendments. Therefore, because prebiotics can still be utilized to shift the microbial composition of poultry GIT, it is possible to generate environments that are unfavorable for Salmonella colonization. This can be accomplished by increasing populations of "healthy" bacteria, preventing space for Salmonella colonization as well as increasing SCFA production (67). To understand how these environments can be chemically altered, microbiome technologies can be employed in conjunction with investigative metabolomics technologies.
METABOLOMICS
Metabolomics is the qualitative and quantitative identification of all metabolites in a biological system such as the GIT. Metabolites are the final products of cellular processes and can be quantified with a number of instruments such as nuclear magnetic resonance (NMR) spectroscopy and mass spectrometry (MS) (147,148). Due to its high selectivity, NMR is widely accepted as the primary choice for metabolite elucidation. However, MS is comparatively more sensitive, allowing for detection down to femtomolar (10⁻¹⁵ M) concentrations. Because of this sensitivity, MS analysis is more readily utilized for mixed samples such as cecal and fecal contents (147,149,150). Mass spectrometry can also be coupled with chromatography to elucidate the macrocontents of complex mixtures (151). Gas chromatography (GC) coupled with MS has allowed for the analyses of both volatile and non-volatile compounds (152). Using GC-MS, Rubinelli et al. (153) investigated the effects of rice bran on Salmonella in cecal cultures in vitro and detected 578 metabolites. Of these, 367 were unknown, and the change in metabolite concentration was causally linked to the reduction of Salmonella. Liquid chromatography has also been used to identify thermolabile molecules in the form of high-pressure liquid chromatography (HPLC), which demonstrated that FOS, when fed to layers, could reduce cholesterol in eggs (154).
Metagenomic outputs in Sergeant et al. (155) indicated over 200 enzymes that can degrade non-starch polysaccharides in cecal contents, some of which are involved in pathways that produce SCFAs and are vital to the mechanistic understanding of modifying the environment. Unfortunately, one significant drawback to this methodology is the current inability to incorporate genomic information, i.e., to provide definitive linkages between genotypes and the metabolome (147). Furthermore, the dynamic range of current MS technologies' resolving power is ~10⁶, which is far below the estimated concentration range of cellular metabolites (147). However, with advances in both high-throughput microbiome sequencing and mass spectrometry, it may be possible to derive causal relationships between the presence of phylogenetically related species and concentrations of metabolites.
CONCLUSIONS
The potential for prebiotics to alter the GIT of broiler chickens has been demonstrated with previous-generation technologies such as DGGE, T-RFLP, and conventional plating techniques (35). However, despite the success of altering the microbiome, the precise mechanisms and changes, such as the exact impact of SCFAs on the cecal microbiota, were historically undetermined due to the incomplete analysis offered by the technologies available at the time (156). Furthermore, with a range of variables such as age, type of bird, and genotype, the underlying mechanisms affecting the GIT seemed unlikely to be elucidated. However, with the rising use and affordability of "omic" technologies such as metagenomics and metabolomics, new investigative strategies can be employed. Through the use of bioinformatics pipeline applications on the bulk deep-sequencing data produced by these technologies, there is potential to produce a complete image of the GIT affected by prebiotics. This image may provide predictive power and allow for the understanding and creation, through prebiotics, of an environment that controls for and inhibits Salmonella colonization and growth. Moreover, while Salmonella is not the only pathogen of concern in the poultry industry, with the potential for virulence gene repression, it is likely prebiotics will continue to play a role in the control of this pathogen. With the ability to utilize next-generation technologies and more fully understand the complexity of the microbiome of the poultry GIT, impacts of prebiotics on pathogen control will continue to be elucidated, investigated, and utilized in food safety.
AUTHOR CONTRIBUTIONS
AM, SF, and SR made substantial, direct, and intellectual contributions to the work and approved it for publication. HP and DM were involved in the editing process and approved it for publication.
ACKNOWLEDGMENTS
AM is supported by a Distinguished Doctoral Fellowship and support from the Department of Food Science at the University of Arkansas. Diamond V is acknowledged for its support and assistance.
"year": 2018,
"sha1": "09a7b05341aa2ac6cb3ab523f3d83cd2e0bd8d43",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fvets.2018.00191/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "09a7b05341aa2ac6cb3ab523f3d83cd2e0bd8d43",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Low and high serum IgG associates with respiratory infections in a young and working age population
Summary

Background: We investigated health consequences and genetic properties associated with serum IgG concentration in a young and working age general population.

Methods: Northern Finland Birth Cohort 1966 (NFBC1966, n = 12,231) health data have been collected from birth to 52 years of age. Relationships between life-long health events, medications, chronic conditions, lifestyle, and serum IgG concentration measured at age 46 years (n = 5430) were analysed. Regulatory mechanisms of serum IgG concentration were considered.

Findings: Smoking and genetic variation (FCGR2B and TNFRSF13B) were the most important determinants of serum IgG concentration. Laboratory findings suggestive of common variable immunodeficiency (CVID) were 10-fold higher compared to previous reports (73.7 per 100,000 vs 0.6–6.9 per 100,000). Low IgG was associated with antibiotic use (relative risk 1.285, 95% CI 1.001–1.648; p = 0.049) and sinus surgery (relative risk 2.257, 95% CI 1.163–4.379; p = 0.016). High serum IgG was associated with at least one pneumonia episode (relative risk 1.737, 95% CI 1.032–2.922; p = 0.038) and with total number of pneumonia episodes (relative risk 2.167, 95% CI 1.443–3.254; p < 0.001).

Interpretation: CVID-like laboratory findings are surprisingly common in our unselected study population. Any deviation of serum IgG from normal values can be harmful; both low and high serum IgG may indicate immunological insufficiency. Critical evaluation of clinical presentation must accompany immunological laboratory parameters.

Funding: Oulu University Hospital VTR, CSL Behring, Foundation for Pediatric Research.
Introduction
Prevalence, long-term persistence, and prognosis of low serum immunoglobulin levels in the general population are not known. Common variable immunodeficiency (CVID) patients with low serum immunoglobulin G (IgG), A (IgA) and/or M (IgM) concentrations commonly suffer from recurrent pneumonia and benefit from IgG replacement therapy. [1][2][3][4][5] However, such patients may for years remain fully asymptomatic after discovery. 6 Patients with milder forms of hypogammaglobulinemia may also suffer from respiratory infections. 7,8 Among older individuals, even high serum IgG has been associated with risk of pneumonia-related mortality and recurrent pneumonia. 9 At an unselected population level, however, the significance of serum IgG concentration for respiratory infection burden among the young and working age population is incompletely understood.
Prospective and lifelong follow-up of the Northern Finland Birth Cohort 1966 (NFBC1966) has provided high-quality population-level health information. 10 We recently investigated the secondary risk factors and conditions related to pneumonia in the NFBC 1966 birth cohort with 52 years of follow-up. 11 In the present study, we compare the clinical and behavioural parameters, occurrence of pneumonia, and complicated upper respiratory infection burden with serum immunoglobulin concentrations among the cohort participants. We also aim to explore the relationship between genome-wide polymorphisms and the regulation of serum IgG concentration in a genome-wide association study (GWAS). Finally, we aim to define the role of serum IgG in pneumonia and complicated upper respiratory infection burden in the young and working age population.
Northern Finland Birth Cohort 1966
We used lifelong data from the NFBC1966 to evaluate the clinical presentations. The original cohort size and the follow-up have previously been explained in detail. 10 The NFBC1966 covers the entire population born during one year in the northern provinces of Finland: all individuals with an expected date of birth during the year 1966 (12,055 mothers), comprising 12,231 individuals (12,058 live born; 96.3% of all births during 1966 in the area) (www.oulu.fi/nfbc).
Ethics
The study was originally approved by the ethical committee of the Northern Ostrobothnia hospital district (94/2011, 12/2003). Permission to use nationwide register data was sought from the institutions administering the registers. Written informed consent was obtained from all participants to use the collected cohort data and their registry data for scientific purposes. The home page of the NFBC1966 program includes a full description of the study (https://www.oulu.fi/en/university/faculties-and-units/faculty-medicine/northern-finland-birth-cohorts-and-arctic-biobank/research-program-health-and-well-being).

Research in context

Evidence before this study

The role of defective adaptive immunity and low serum immunoglobulin (IgG) concentration is well established in common variable immunodeficiency (CVID) patients suffering from recurrent pneumonia and respiratory tract complications. It has also been shown that patients with milder forms of hypogammaglobulinemia may suffer from respiratory infections. Although these patients benefit from IgG replacement therapy, they may remain fully asymptomatic for years. Genetic causes of abnormal B cell maturation or low IgG in CVID are only partially understood. Elevated serum IgG in older individuals may also indicate risk of pneumonia-related mortality and recurrent pneumonia, although the mechanisms are incompletely described. Currently it is thought that a delay in recognition of CVID and hypogammaglobulinemia can cause significant morbidity and mortality; early diagnosis and consideration of IgG replacement therapy is believed to be beneficial. Screening protocols to support early identification of those suffering from B cell deficiency have been suggested. However, understanding of the significance of low serum IgG concentration found in asymptomatic individuals is incomplete. Respiratory infection burden associated with serum immunoglobulin concentration in the general population is poorly understood. CVID is thought to be rare (0.6–6.9 cases per 100,000), although population-level prevalence is not well described. Current supplies of IgG replacement products obtained from blood donations are limited; there is an obvious shortage of immunoglobulins and therefore a necessity of wise use of replacement therapy.

Added value of this study

We demonstrated that at population level CVID-like laboratory findings were approximately 10-fold higher when compared to previous reports based on clinical diagnoses. Not only low but even high serum IgG concentrations were associated with respiratory infection burden. Genetic properties and smoking are involved in regulation of serum IgG concentration.

Implications of all the available evidence

Availability of IgG replacement products is currently limited, thus treatment of fully asymptomatic individuals and individuals with normal vaccine responses does not appear prudent. Such high frequencies of CVID-like laboratory findings should be considered when screening protocols of CVID and hypogammaglobulinemia are created. Subtly low IgG levels among smokers strongly suggest cessation of smoking as the first-line intervention.

Fig. 1 summarises the follow-up and the data collection of the NFBC study. 10 Our study population consists of those whose serum immunoglobulins were measured (n = 5430) with available clinical or questionnaire data in the 46-year follow-up, together with consents to use their data in combination with the national register data. Additional data for chronic disease diagnoses, medications and selected pneumonia risk factors were obtained from national Finnish registry databases as listed previously. 11 The study participants were analysed for numerous details related to their health and behaviour. Causes of death among the deceased study participants have been reported. 11 Data collection, analysis and manuscript preparation agree with the STROBE checklist.
Airway infection definitions
Pneumonia episodes that required hospitalisation, visits and procedures at an otorhinolaryngologist's clinic, and diagnoses of associated complicated upper airway infections were obtained from the Care Register for Health Care (CRHC), previously named the Finnish Hospital Discharge Register (FHDR), maintained by the Finnish Institute for Health and Welfare, as previously described in detail. 11 In case of multiple diagnoses of pneumonia, only episodes at least 90 days apart were counted. Microbiological data were not available.

Fig. 1. All study participants were subject to careful analysis of their lifestyle and health properties at the age of 46 years. Health data from 1971 to 2018 were obtained from national registers. Information on numbers of study participants is shown. Females, employed cohort members, those with high social class, those who were married, and those who have children and higher education participated more actively in the follow-up. 10 Causes of death among NFBC 1966 participants have been reported. 11

The questionnaire at age 46 years provided self-reported data on life-time burden of pneumonia episodes treated at hospital or at home (Supplementary material 1). In addition, data on complicated upper airway infections, such as recurrent otitis and sinus operations, as well as chronic respiratory symptoms (prolonged productive cough, chronic bronchitis, and allergic rhinitis), were retrieved from the questionnaire. Supplementary material 1 includes the full description of questions regarding respiratory infections and their symptoms.
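The 90-day rule for counting distinct pneumonia episodes can be sketched as follows (an illustrative Python sketch; the function name and example dates are ours, and measuring the gap from the diagnosis that opened the previous episode is our assumption):

```python
from datetime import date


def count_episodes(diagnosis_dates, min_gap_days=90):
    """Count distinct pneumonia episodes: a diagnosis starts a new
    episode only if it falls at least `min_gap_days` after the
    diagnosis that started the previous episode."""
    episodes = 0
    episode_start = None
    for d in sorted(diagnosis_dates):
        if episode_start is None or (d - episode_start).days >= min_gap_days:
            episodes += 1
            episode_start = d
    return episodes


# Hypothetical register extract: the January and February diagnoses
# merge into a single episode.
dates = [date(2001, 1, 10), date(2001, 2, 1), date(2001, 6, 1)]
print(count_episodes(dates))  # → 2
```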
Autoimmune conditions, asthma, and chronic respiratory symptoms

Diagnoses of diseases with potential immunological origins were obtained from national registers based on the ICD-8, ICD-9 and ICD-10 codes. Conditions with at least ten cases were considered and included rheumatoid arthritis, psoriasis, coeliac disease, sarcoidosis, vasculitis, purpura, and multiple sclerosis. The diabetes variable was created by combining the data of hospital registers from the CRHC with medicine purchase and reimbursement documentation from the Social Insurance Institution. 11 Those suffering from asthma were identified based on reimbursement of medical expenses from the Social Insurance Institution; detailed criteria for asthma diagnosis and a physician's medical certificate are required for reimbursement eligibility in Finland. Chronic respiratory symptoms were analysed based on the questionnaire at age 46 years (Supplementary material 1).
Antibiotic consumption
Since 1993, antibiotic consumption data have been collected from medicine purchases by the Social Insurance Institution. A physician's prescription is required for all antibiotic purchases. Information on the antibiotic classes included in this study is listed in Supplemental Table S2.
Genome-wide and phenome-wide association studies
A genome-wide association study (GWAS) was conducted to detect genetic variation associated with serum IgG concentration in 3591 of the studied 5430 individuals. Genotyping was performed with the Illumina Infinium 370cnvDuo array. Close relatives were excluded (pi-hat < 0.2) based on identity-by-descent calculations performed with PLINK. 12 Principal components (PCs) were calculated with PLINK 12 to allow accounting for population substructure. Imputation of the genotype data was done with the HRC imputation pipeline. Before analyses, the IgG concentrations were inverse rank normalised, then adjusted for sex and the first ten PCs to account for population substructure, and inverse rank-based normalisation was used to transform the resulting residuals to a normal distribution. Single-nucleotide polymorphisms (SNPs) with imputation info score < 0.8 or minor allele frequency (MAF) < 0.01 were excluded, as well as those violating the Hardy–Weinberg equilibrium (p < 0.00001). GWAS was performed under the additive model with SNPtest v. 2.5.4. 13 Regional association plots were drawn with LocusZoom. 14 A phenome-wide association study (PheWAS) was performed using PhenoScanner V2 15 ; in these SNP lookups, p < 1 × 10⁻⁵ was considered as evidence of association.
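The rank-based inverse normal transformation and the SNP-level filters described above can be sketched as follows (a minimal Python illustration using one common variant of the transformation, (rank − 0.5)/n; the study additionally adjusted for sex and ten principal components and ran the association tests in SNPtest, which is not reproduced here):

```python
from statistics import NormalDist


def inverse_rank_normalise(values, offset=0.5):
    """Rank-based inverse normal transformation: each value is replaced
    by the standard normal quantile of its (rank - offset)/n position.
    (offset = 0.5 is one common convention; ties keep input order.)"""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    nd = NormalDist()
    out = [0.0] * n
    for rank, i in enumerate(order, start=1):
        out[i] = nd.inv_cdf((rank - offset) / n)
    return out


def passes_qc(maf, info, hwe_p):
    """SNP-level filters from the text: imputation info >= 0.8,
    MAF >= 0.01, and Hardy-Weinberg p >= 1e-5."""
    return maf >= 0.01 and info >= 0.8 and hwe_p >= 1e-5


z = inverse_rank_normalise([11.2, 6.5, 15.8, 9.9])
print([round(v, 2) for v in z])  # → [0.32, -1.15, 1.15, -0.32]
```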
Other health data
Several health and behavioural parameters, including body mass index (BMI), waist circumference, and alcohol and tobacco consumption, were collected as described. 11 Self-reported daily alcohol doses exceeding 20 g in women and 30 g in men were considered excessive. 16 Physical activity was collected from the questionnaire data and calculated as metabolic equivalent of task (MET) scores in hours per week. 17
Statistical analysis
In this study, the role of previously identified pneumonia risk factors among the NFBC 1966 study population was analysed against serum IgG concentrations at 46 years in 5430 subjects. 11 In addition to these previously identified risk factors, the role of physical activity was considered. 17 To test the association between pneumonia risk factors (Table 1, Supplemental Table S1) as well as infection burden (Supplemental Table S3), the participants were divided into categories by serum IgG concentration (low, IgG ≤ 6.8 g/L; normal, IgG 6.9–15.5 g/L; high, IgG ≥ 15.6 g/L) and by sex. Cross-tabulation with Pearson's chi-square test or Fisher's exact test was used as indicated in the Table footnotes. We used the Kruskal–Wallis test to compare physical activity measures between IgG categories (Table 1). These IgG concentration categories, based on ±2 standard deviations (±2SD), were chosen according to the distribution of measured IgG levels in the study cohort. Risk factors with a count of less than ten were not evaluated.
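The ±2SD categorisation and the Pearson chi-square statistic on the resulting cross-tabulation can be sketched as follows (illustrative Python; the hypothetical counts are not study data, and a complete test would compare the statistic against the chi-square distribution):

```python
def igg_category(igg, low=6.8, high=15.6):
    """Classify serum IgG (g/L) using the cohort's mean +/- 2SD cut-offs."""
    if igg <= low:
        return "low"
    if igg >= high:
        return "high"
    return "normal"


def chi_square_statistic(table):
    """Pearson chi-square statistic for a contingency table given as a
    list of rows; expected counts come from the row/column margins."""
    row_tot = [sum(r) for r in table]
    col_tot = [sum(c) for c in zip(*table)]
    grand = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / grand
            stat += (obs - exp) ** 2 / exp
    return stat


# Hypothetical 2x2 cross-tabulation (rows: groups, columns: outcome).
print(round(chi_square_statistic([[20, 10], [10, 20]]), 2))  # → 6.67
```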
To understand the overall respiratory disease burden associated with serum IgG, we composed a combination variable of upper and lower respiratory tract infections, comprising at least one pneumonia episode, chronic sinus infection or sinus surgery. Pearson's chi-square test and the Kruskal–Wallis test were used to analyse this combination variable.

Table 1 footnotes: (d) Self-reported data on smoking divided into "never", "former" and "current" categories. (e) Excessive alcohol consumption is defined as self-reported daily consumption of ≥30 g for males and ≥20 g for females. Data are presented as n (%), unless otherwise stated.

Data in Supplemental Tables S2 and S3 were analysed with cross-tabulation and Pearson's chi-square significance test. The means of serum IgG concentrations were compared between the three smoking categories with analysis of variance (ANOVA), and with independent t-tests between current smokers with or without a history of pneumonia as well as between non-smokers with or without a history of pneumonia (Fig. 4). We used Poisson regression models to evaluate the unadjusted and adjusted relative risk (RR, 95% CI) of future pneumonia and number of sinus surgeries. Because of the over-dispersion assumptions for the Poisson models, the unadjusted and adjusted relative risk of antibiotic consumption was analysed with negative binomial models. Numbers of pneumonia episodes, antibiotic courses and sinus surgeries were used as dependent variables. To compute the unadjusted and adjusted odds ratios (OR, 95% CI) in the binary logistic regression models, at least one pneumonia episode was the dependent variable. In all models, the categorised IgG concentration was an independent variable, where the associations with outcomes by low and high concentrations were compared with those within the normal range. The criteria to select the adjustment variables, i.e.
potential confounders (sex, smoking, education, asthma, autoimmune disease, chronic liver disease, haematological cancer, physical activity) to the models were based on the significance level of the test in cross tabulations, the previously identified association (p < 0.05) with pneumonia 11 and the sufficiency of the number of samples per category (Fig. 3).
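The study fitted adjusted Poisson, negative binomial and logistic models; as a simplified stdlib-only analogue, unadjusted RR and OR with log-scale Wald confidence intervals can be computed from a 2×2 table (illustrative Python; the counts below are hypothetical, not study data):

```python
from math import exp, sqrt


def rr_or_with_ci(a, b, c, d, z=1.96):
    """Unadjusted relative risk (RR) and odds ratio (OR) with ~95% CIs
    from a 2x2 table: the exposed group has `a` events and `b`
    non-events, the unexposed group `c` events and `d` non-events."""
    rr = (a / (a + b)) / (c / (c + d))
    se_log_rr = sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
    or_ = (a * d) / (b * c)
    se_log_or = sqrt(1/a + 1/b + 1/c + 1/d)

    def ci(est, se):
        # Wald interval on the log scale, back-transformed.
        return est * exp(-z * se), est * exp(z * se)

    return {"RR": (rr, *ci(rr, se_log_rr)), "OR": (or_, *ci(or_, se_log_or))}


# Hypothetical counts: 10/100 low-IgG vs 5/100 normal-IgG participants
# with at least one pneumonia episode.
res = rr_or_with_ci(10, 90, 5, 95)
print(round(res["RR"][0], 2), round(res["OR"][0], 2))  # → 2.0 2.11
```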
The incidence of upper and lower respiratory infections, as well as of sinus operations, per 10,000 was calculated using the formula ((10,000 × n)/N), where N is the number of subjects in each specific IgG category and n is the number of subjects with a first infection episode. For antibiotic consumption, the number of antibiotic courses (n) was divided by the number of subjects in a specific IgG category (N) using the formula (n/N). Serum IgG subgroups were formed for each 1 g/L increment in IgG concentration between 6 and 18 g/L. Those with serum IgG concentration under 6 g/L or over 18 g/L formed the lowest and the highest IgG subgroups, respectively (Fig. 3).
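The incidence and subgrouping formulas above translate directly into code (illustrative Python; the example counts are hypothetical, and the assignment of exactly 18.0 g/L to the top subgroup is our assumption):

```python
def incidence_per_10000(n_first_episodes, n_subjects):
    """The text's formula ((10,000 x n) / N): first-episode incidence
    per 10,000 subjects within one IgG category."""
    return 10000 * n_first_episodes / n_subjects


def antibiotic_courses_per_subject(n_courses, n_subjects):
    """The text's formula (n / N) for antibiotic consumption."""
    return n_courses / n_subjects


def igg_subgroup(igg):
    """1 g/L subgroups between 6 and 18 g/L; values below 6 g/L and
    from 18 g/L upward form the open-ended end subgroups."""
    if igg < 6:
        return "<6"
    if igg >= 18:
        return ">=18"
    return f"{int(igg)}-{int(igg) + 1}"


# Hypothetical: 7 of 57 low-IgG participants with a first pneumonia episode.
print(round(incidence_per_10000(7, 57)))  # → 1228
```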
The p-values of <0.05 were considered statistically significant. The statistical analyses were performed using IBM SPSS Statistics for Windows, Version 28 (IBM Corp., Armonk, NY, USA).
Role of funders
This study was partly supported by Oulu University Hospital VTR, CSL-Behring and Foundation for Pediatric Research.
Serum immunoglobulin concentrations
Mean serum IgG concentration of the whole study population (n = 5430) at age 46 years was 11.20 g/L (SD 2.2 g/L) (Fig. 2A). Two SD below and above the mean were 6.84 g/L (−2SD) and 15.56 g/L (+2SD), respectively. Fifty-seven participants had serum IgG of 6.8 g/L or lower, and in 162 cases serum IgG was above 15.6 g/L. IgG subclasses were measured for those with serum IgG lower than 5.0 g/L or higher than 20.0 g/L; in all cases, subclass findings were consistent with an even distribution.
Parameters associated with low or high serum IgG concentration
The participants' characteristics associated with low, normal, or high serum IgG were considered. Table 1 shows the associations of IgG classes with social background, health, and lifestyle variables as well as with chronic conditions. Diagnosis of any autoimmune condition among females was associated with serum IgG concentration (p = 0.029). Supplemental Table S1 shows that serum IgG level was also associated with rheumatoid arthritis, although the number of cases was low. Current smoking was associated with low serum IgG both in males (p < 0.001) and females (p < 0.001) (Table 1). Serum IgG concentrations among current smokers and non-smokers were 10.3 g/L (−2SD 5.9 g/L; +2SD 14.6 g/L) and 11.5 g/L (−2SD 7.3 g/L; +2SD 15.7 g/L), respectively (p < 0.001) (Fig. 2). Mean serum IgG concentration in former smokers (11.2 g/L; −2SD 6.9 g/L; +2SD 15.6 g/L) did not differ from non-smokers. The other studied parameters, including obesity, alcohol consumption, cardiovascular disease, diabetes, and malignancy, were not associated with serum IgG concentrations.
Additional potential aetiologies for secondary hypogammaglobulinemia among those with low serum IgG concentrations were considered according to previously published criteria. 3,18 In this population-based cohort without subjects of advanced age, low serum IgG was not associated with malignancies or with use of corticosteroids or other immunosuppressants. In summary, apart from smoking, no secondary causes explaining hypogammaglobulinemia were found, suggesting additional hereditary factors.
Laboratory findings suggestive of common variable immunodeficiency (CVID)
We analysed the immunoglobulin profiles of study subjects for consistency with CVID criteria according to the European Society for Immunodeficiencies (ESID). 19 In summary, a total of seven cases with both reduced serum IgG (≤6.7 g/L) and reduced serum IgA (<0.8 g/L) or serum IgM (<0.4 g/L) concentrations were found (Fig. 4B). None of them had received a diagnosis of immunodeficiency. Only two of them had suffered a single pneumonia episode (cases 2 and 3 in Fig. 4B). Two cases (5 and 6) were current smokers, which potentially aggravated their immunoglobulin findings. However, the immunoglobulin profiles of cases 1–4 in Fig. 4B suggest a potentially CVID-like condition, with a very high population prevalence (73.7 cases/100,000).
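The ESID-style laboratory screen and the prevalence calculation can be sketched as follows (illustrative Python using the cut-offs quoted above; with the 4 qualifying non-smoking cases among 5430 participants this reproduces the reported 73.7 per 100,000):

```python
def cvid_like(igg, iga, igm, igg_cut=6.7, iga_cut=0.8, igm_cut=0.4):
    """Laboratory screen from the text: reduced serum IgG together with
    reduced serum IgA or reduced serum IgM (all concentrations in g/L)."""
    return igg <= igg_cut and (iga < iga_cut or igm < igm_cut)


def prevalence_per_100000(cases, population):
    """Simple prevalence expressed per 100,000 individuals."""
    return 100000 * cases / population


# Four non-smoking cases among the 5430 participants with measured
# immunoglobulins give the prevalence reported in the text.
print(round(prevalence_per_100000(4, 5430), 1))  # → 73.7
```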
Serum IgG concentrations and pneumonia
Those with low, but also with high serum IgG concentrations had experienced pneumonia episode(s) more frequently than their peers (Fig. 3), resulting in over 1000/10,000 pneumonia episodes among those with low or high serum IgG concentration. Multivariate analysis confirmed that also participants with high serum IgG (≥15.6 g/L) had, based on hospital discharge register data, frequently been hospitalised for pneumonia and experienced higher numbers of pneumonia episodes compared to those with normal serum concentration (6.9-15.5 g/L) (Fig. 3E).
Smoking, serum IgG and pneumonia
Serum IgG was lower (mean 10.5±2.4 g/L) among smokers with a history of at least one pneumonia episode when compared to smokers who had not experienced pneumonia (mean 10.9±2.2 g/L, p = 0.02) (Fig. 4A). Non-smokers with (11.8±2.7 g/L) or without (11.5±2.0 g/L, p = 0.137) a history of pneumonia episodes had a similar serum IgG concentration. Associations with continuous serum IgG concentration and smoking status are illustrated with a line diagram in Fig. 4A. Fig. 3 also demonstrates the associations between serum IgG concentrations and sinus surgery due to chronic sinusitis. Multivariate analysis confirmed an association between low serum IgG concentrations and sinus surgery (Fig. 3E). Results also unsurprisingly demonstrated that those with low serum IgG concentration had higher antibiotic use compared with those with normal serum IgG levels (Fig. 3E).
Combined data of upper and lower respiratory infections
A combination parameter of at least one pneumonia episode, chronic sinus infection or surgical sinus operation was also analysed. We found that participants with any of these lower or upper respiratory infections had evidence of deviating serum IgG. A high proportion of those with low (20.6%) or high (16.0%) serum IgG concentration had suffered from any respiratory infection complication when compared to the normal serum IgG group (10.9%, p < 0.007).
Genome-wide association study
In the GWAS conducted to detect genetic variation associated with serum IgG concentration, two associated loci were detected (Fig. 2E–G): FCGR2B and TNFRSF13B. The most significant SNPs (lead SNPs) in these loci were rs7554873 (upstream of FCGR2B; MAF = 0.07, beta = −0.48, p = 2.3 × 10⁻²³) and rs4273077 (intronic within TNFRSF13B; MAF = 0.09, beta = 0.25, p = 1.6 × 10⁻⁹). Both loci include biologically relevant genes that are involved in immunity: FCGR2B encodes the low affinity immunoglobulin gamma Fc region receptor II-b (aka CD32b) 20 and TNFRSF13B encodes tumour necrosis factor receptor superfamily member 13B (aka TACI). 21 The FCGR2B lead SNP rs7554873 has previously been associated with haematocrit, red blood cell count and haemoglobin concentration, 22 and with some HDL-related cholesterol measures. 23 Interestingly, FCGR2B-rs7554873 is also associated with mRNA levels of, e.g., FCGR2B, FCGR2C and HSPA7, 24 and with blood protein levels, including those of FCGR2B, FCGR2A and the X isoform of amelogenin. 25,26 We detected that the TNFRSF13B lead SNP rs4273077 is associated, e.g., with serum total protein, 27 risk of multiple myeloma, 28 and monogalactosylation of IgG, 29 as well as blood levels of certain proteins, such as tumour necrosis factor receptor superfamily member 17 and HLA class II histocompatibility antigen, DM alpha chain. 25 Overall, the IgG-associated SNPs are characterised by having associations across multiple immunity-related phenotypes.
Discussion
The role of severe hypogammaglobulinemia in monogenic CVID is well established 30 and milder hypogammaglobulinemia is a known risk factor for infections. 7,8 In addition, there is some evidence suggestive of unfavourable properties associated with high serum IgG concentration among older individuals. 9 Our lifelong follow-up study on the health of a young and working age birth cohort population and their serum IgG concentrations suggests novel insights into the significance of adaptive immunity. 10,11 Highly intriguingly, not only low but also high serum IgG concentrations in our cohort are associated with unfavourable outcomes. It seems possible that any deviation of serum IgG concentration from normal values may be an indication of an unfavourable immunological property. Importantly, a significant role for serum IgG concentration in our study was found with several self-reported and register-based parameters including pneumonia burden, upper respiratory tract health and antibiotic use. Ameratunga et al. followed cohorts of hypogammaglobulinemia patients and concluded that patients with significant hypogammaglobulinemia should receive IgG substitution regardless of their symptoms. 6 In our study, the high prevalence of hypogammaglobulinemia cases with laboratory findings suggestive of a CVID-like condition (73.7 cases per 100,000) questions the feasibility of such an approach when compared to previous reports of already exceptionally high CVID prevalence from Finland (6.9 per 100,000) or elsewhere (0.6–3.8 per 100,000). 31 In our study population, none of the CVID-like cases had received an immunodeficiency diagnosis or immunoglobulin treatment. This high frequency must be considered especially when selecting criteria for screening of B cell deficiencies. [32][33][34] Our findings highlight the importance of careful consideration of clinical history and disease spectrum when evaluating immunological laboratory parameters in clinical settings. 3,35 We considered the known associated secondary factors for IgG concentrations at population level 10,18 ; low serum IgG concentration was most obviously associated with smoking in our study cohort. Mean IgG concentration was lower among current smokers compared to non-smokers, consistent with previous reports. 36,37 Smokers who had experienced at least one pneumonia episode had lower serum IgG concentration compared to non-smokers or smokers without a history of pneumonia. Our findings support the view that increased pneumonia risk among smokers is at least partly explained by immunological mechanisms. It seems possible that low serum IgG may also at least partially explain the previously observed high antibiotic prescription rate among tobacco users. 38 Serum IgG concentration among former smokers, however, was comparable to non-smokers, suggesting that the adverse effects of tobacco components on serum IgG are reversible. Although autoimmunity was associated with high serum IgG concentration among females, infection burden in our female cohort was low. We did not find statistically significant associations between infections, autoimmunity, and serum IgG.

Fig. 2. … (B), former smokers (n = 1460) (C), and current smokers (n = 1020) (D) are shown. Mean serum IgG was 10.3 g/L (−2SD 5.9 g/L; +2SD 14.6 g/L) and 11.5 g/L (−2SD 7.3 g/L; +2SD 15.7 g/L) among smokers and non-smokers, respectively (p < 0.001). Genome-wide association study of serum immunoglobulin G levels (E–G). A Manhattan plot summarising the results of the GWAS of IgG levels (n = 3591) is shown (E). Regional association plots at the two associated loci are shown in panels F and G. Chromosomal positions are shown on the x axis and −log₁₀(p-values) on the y axis, and each dot is a single SNP. The red dashed line indicates the level of genome-wide significance (p < 5 × 10⁻⁸). Genomic positions refer to human genome build hg19. Linkage disequilibrium values refer to the 1000 Genomes European population.

Fig. 4. Distribution of serum IgG concentrations among participants divided based on history of pneumonia (at least one episode or no episodes) and smoking status (current) (A). Serum IgG was low among those who smoke with a history of at least one pneumonia episode (mean 10.5 ± 2.4 g/L) when compared to smokers who had no pneumonia (mean 10.9 ± 2.2 g/L, p = 0.02). Non-smokers with (11.8 ± 2.7 g/L) or without (11.5 ± 2.0 g/L, p = 0.137) a history of pneumonia had a similar serum IgG concentration. Cases with serum immunoglobulin concentrations (IgG, IgM, IgA, g/L) suggestive of common variable immunodeficiency (CVID) according to European Society for Immunodeficiencies (ESID) criteria (B). Those with low serum IgG (−2SD, <6.7 g/L) and low serum IgM (<0.4 g/L) or low serum IgA (<0.8 g/L) were included. Lifelong history of hospital treatment for pneumonia is indicated. Self-reported data on respiratory tract symptoms, other symptoms and information on smoking are included. Current smokers (cases 5 and 6) were excluded from the calculation of CVID prevalence.
B cell maturation and IgG production are genetically regulated. 39 In our birth cohort, GWAS analysis found a role for FCGR2B and TNFRSF13B in the regulation of serum IgG concentration. Similar TNFRSF13B associations have previously been found in Chinese 40 and Japanese 41 populations. In addition, both TNFRSF13B and FCGR2B were associated with serum IgG in an Icelandic study. 42 Our study confirms that FCGR2B and TNFRSF13B are associated with serum IgG even in the genetically distinct and isolated population of Northern Finland. Interestingly, variants in TNFRSF13B are common among CVID patients. These may increase the risk of CVID in combination as polygenic risk factors, through dominant negative effects or haploinsufficiency. 21,[43][44][45] Interestingly, the TNFRSF13B lead SNP rs4273077 is associated in our study with monogalactosylation of IgG; these biological events play an important role in the interaction of IgG with FcγRs. 29,46 Although the role of FCGR2B in CVID is not obvious, this sole inhibitory Fcγ receptor may have a role in the regulation of IgG production and the development of autoimmunity. 47,48 Since smoking in our study was the only secondary factor associated with low serum IgG, it is tempting to speculate that FCGR2B and TNFRSF13B may also be implicated in population-level respiratory infection burden. The role of these genes is further supported by our PheWAS analysis, in which multiple immunological associations were observed.
Limitations of our study include the possibility that those with poor immunity and a high number of infections may have been lost from follow-up. Since the number of deaths caused by infections or immunological causes in the NFBC 1966 cohort is low, our study cohort may well represent the attributes of the general population. 11 At age 46 years, females, employed cohort members, and those of high social class in particular participated actively in our NFBC 1966 study. 10 Active participants were also more likely to be married, to have children and to have higher education. Selected social parameters (education, alcohol consumption), however, were not associated with serum IgG concentration. Importantly, it must be recognised that the potential selection bias should not interfere with our results on associations between IgG concentration and infection susceptibility. It is also important to note that for the GWAS analyses we had a restricted sample size.
In conclusion, our study reveals that a significant population-level infection burden is associated with low or high serum IgG concentrations. While previously documented genetic associations with serum IgG concentrations were confirmed, smoking clearly has an impact on serum IgG concentrations. This, and the high frequency of CVID-like laboratory findings, must be recognized when a low serum IgG concentration is encountered. While our current report provides a historical view of 52 years of infections in a young and working-age population, our study still lacks prognostic data on serum IgG and long-term survival at the population level. Further follow-up of future infectious events in the NFBC 1966 will provide invaluable understanding of the population-level significance of serum IgG and adaptive immunity.
PH, PP, MRJ, TH: verification of the underlying data. MKK: GWAS study and manuscript preparation. JK, SV, MRJS, ES, AH: data analysis and manuscript preparation. All authors have contributed to the study design and manuscript preparation, and they have approved the manuscript.
Data sharing statement
All data are available upon reasonable request. Instructions for material request portal can be found at Northern Finland Birth Cohort study home page (Faculty of Medicine | University of Oulu).
Declaration of interests
PH: received scientific conference sponsorship from Octapharma and Takeda.
TH: received scientific conference sponsorship from CSL Behring. | 2023-07-15T15:19:03.850Z | 2023-07-13T00:00:00.000 | {
"year": 2023,
"sha1": "b4b97d44c63133cc108d34132c03d0054a0537e5",
"oa_license": "CCBYNCND",
"oa_url": "http://www.thelancet.com/article/S2352396423002773/pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "edbe2f1bc0f3157c8afe6ce6e4bb7e0b045f88c7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
210877737 | pes2o/s2orc | v3-fos-license | Local Electronic Structure in AlN Studied by Single-Crystal 27Al and 14N NMR and DFT Calculations
Both the chemical shift and quadrupole coupling tensors for 14N and 27Al in the wurtzite structure of aluminum nitride have been determined to high precision by single-crystal NMR spectroscopy. A homoepitaxially grown AlN single crystal with known morphology was used, which allowed for optical alignment of the crystal on the goniometer axis. From the analysis of the rotation patterns of 14N (I=1) and 27Al (I=5/2), the quadrupolar coupling constants were determined to χ(14N)=(8.19±0.02) kHz, and χ(27Al)=(1.914±0.001) MHz. The chemical shift parameters obtained from the data fit were δiso=−(292.6±0.6) ppm and δΔ=−(1.9±1.1) ppm for 14N, and (after correcting for the second-order quadrupolar shift) δiso=(113.6±0.3) ppm and δΔ=(12.7±0.6) ppm for 27Al. DFT calculations of the NMR parameters for non-optimized crystal geometries of AlN generally did not match the experimental values, whereas optimized geometries came close for 27Al with a mean χcalc=(1.791±0.003) MHz, but not for 14N with a mean χcalc=−(19.5±3.3) kHz.
Introduction
Aluminum nitride, AlN, is industrially used as a substrate for semiconductor devices such as ultraviolet LEDs, and is also the preferred starting material for the synthesis of chemically inert lightweight ceramics with excellent mechanical properties, such as SiAlONs [1,2]. Ceramic materials are often amorphous or consist of crystalline grains which are embedded in a glassy matrix, and hence characterization of such materials, as well as detection and identification of impurities, is not always straightforward. Nuclear magnetic resonance (NMR) spectroscopy has proven to be a powerful analytical technique to analyze ceramic structures, because of its ability to selectively probe the local surrounding of the observed nuclides [3][4][5]. For characterization of a multi-component system, it is crucial to know the exact NMR-interaction parameters of the detected nuclei in the various components, in order to correctly assign and distinguish the NMR signals arising from them. The 'gold standard' for determining such NMR-interaction parameters to high precision is single-crystal NMR spectroscopy, which is applied here to AlN.
Figure 1. (a) Single crystal of aluminum nitride, AlN, with the synthesis described in Reference [8]. The crystallographic c axis and the ab plane are indicated by arrows. (b) Wurtzite structure of AlN, according to Reference [9], viewed down the crystallographic [11-20] direction. The aluminum atoms (blue-grey) and the nitrogen atoms (yellow), both located at Wyckoff position 2b, are tetrahedrally coordinated by each other, with one Al-N bond directed parallel to the crystallographic c axis. (c) Individual, tetrahedrally coordinated aluminum and nitrogen atom in the crystal structure of AlN, in which the three equal, shorter bonds Al/N-I/II/III with 1.8891(8) Å and the longer bond Al/N-IV with 1.9029(16) Å along the three-fold rotation axis are highlighted. Drawing generated with the VESTA program [10].
Single-Crystal 14 N and 27 Al NMR
In the solid state, the NMR response of spin I = 1/2 is governed by the chemical shift, and by dipolar (direct) couplings between spins [11]. The dipolar couplings between the nuclear spins in the AlN lattice result in homogeneous line broadening and will not be quantitatively evaluated here. Both 14 N and 27 Al have a spin I > 1/2, and therefore the quadrupolar coupling between the non-symmetric charge distribution of the nucleus and its electronic surroundings also needs to be considered [12]. For a spin I in an external magnetic field, 2I NMR transitions exist, which are classified according to their magnetic quantum number m. With a particular transition |m⟩ → |m + 1⟩ designated by the parameter k = m + 1/2 [13], the resonance frequency ν_{m,m+1} of this transition may be described by the following general notation:

ν_{m,m+1}(k) = ν_0 + ν_CS + ν^(1)_{m,m+1}(k) + ν^(2)_{m,m+1}(k²)    (1)

For the two transitions of 14 N with I = 1, the values for k are ±1/2. For the five transitions of 27 Al with I = 5/2, the values are k = 0 for the central transition, and k = ±1, ±2 for the satellite transitions. In Equation (1), ν_0 is the Larmor frequency, ν_CS the contribution of the chemical shift (CS), and ν^(1)_{m,m+1}(k) and ν^(2)_{m,m+1}(k²) are the effects of the quadrupolar interaction described by perturbation theory to first and second order, respectively. Magnitude and orientation dependency of the quadrupole interaction may be gauged by the quadrupole coupling tensor Q. Similar to the electric field gradient (EFG) tensor V, to which it is related by Q = (eQ/h)V, this second-rank tensor is symmetric and traceless, i.e., Q_ij = Q_ji and Q_xx + Q_yy + Q_zz = 0. Generally, for NMR spectroscopy of single crystals, it is useful to define three distinct coordinate systems, i.e., the laboratory frame, where the z axis is defined by the orientation of the external magnetic field, the crystal lattice (CRY) frame, and the principal axis system (PAS).
In the wurtzite structure of AlN, nitrogen and aluminum are both situated on a three-fold rotation axis parallel to the crystallographic c axis, and therefore the CRY and the PAS frames for 14 N and 27 Al are identical. In their PAS frame, symmetric tensors take diagonal form. This has the consequence that the tensors cannot change when the two formula units are generated by the symmetry elements of Wyckoff position 2b. Therefore, the two 14 N and 27 Al atoms in the AlN unit cell are practically pairwise magnetically equivalent, even though they do not fulfil the strict equivalence criterion of being connected by either inversion or translation. The Q tensor for both nuclides is hence uniaxial (with asymmetry η_Q = (Q_11 − Q_22)/Q_33 = 0), and solely defined by the quadrupolar coupling constant χ = C_q = Q_33:

Q^PAS = diag(−χ/2, −χ/2, χ)    (2)

This tensor is conveniently determined from the separations ('splittings') of the symmetric doublet k = ±0.5 for 14 N, and of the satellite transitions (ST's) with k = ±1, ±2 for 27 Al, since these are not affected by the chemical shift and the second-order quadrupolar interaction. Thus, the difference ∆ν(k) of the resonance frequencies (where we have dropped the m, m + 1 subscripts used in Equation (1) for brevity) is given by Equation (3), and the contribution of the quadrupolar interaction to first order for η = 0 is given by Equation (4) [12]. In Equation (4), the dependence of ν^(1)(k) on the relative orientation of the Q tensor to the external magnetic field is expressed by the Euler angle β, with β being the angle between the eigenvector with the largest eigenvalue, i.e., Q_33 = χ, and the magnetic field vector.
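The tensor symmetry and the first-order splitting behaviour described above can be checked numerically. The sketch below is a minimal illustration, assuming the standard textbook first-order convention ν_q = 3χ/(2I(2I − 1)) for an axially symmetric EFG (the paper's own Equations (3) and (4) may use a different but equivalent form); the function names are ours.

```python
import numpy as np

def uniaxial_q_tensor(chi):
    """Uniaxial (eta_Q = 0), traceless quadrupole coupling tensor in its PAS."""
    return np.diag([-chi / 2, -chi / 2, chi])

def satellite_splitting(k, beta_deg, chi, I):
    """First-order splitting |nu(+k) - nu(-k)| of the satellite pair k for an
    axially symmetric EFG; beta is the angle between Q_33 and b_0.
    Assumes the textbook convention nu_q = 3*chi / (2*I*(2*I - 1))."""
    nu_q = 3 * chi / (2 * I * (2 * I - 1))
    beta = np.radians(beta_deg)
    return abs(k) * nu_q * abs(3 * np.cos(beta) ** 2 - 1)

chi_al = 1.914e6                       # 27Al quadrupolar coupling constant in Hz
Q = uniaxial_q_tensor(chi_al)
print(np.trace(Q))                     # traceless by construction → 0.0
# ST(5/2) splitting (k = ±2) with the c axis parallel to b_0 (beta = 0):
print(satellite_splitting(2, 0.0, chi_al, I=5 / 2))   # → 1148400.0
```

With χ(27Al) = 1.914 MHz, this gives an ST(5/2) splitting of 0.6χ ≈ 1.15 MHz at β = 0, collapsing to 0.3χ at β = 90° under this convention.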
The contribution of the chemical shift ν_CS to the resonance frequency is gauged by the chemical shift tensor δ. Taking into account the same symmetry arguments as for the Q tensor above, the chemical shift (CS) tensor for 14 N and 27 Al in AlN is given by:

δ^PAS = diag(δ_11, δ_11, δ_33)    (5)

The weighted trace of δ determines the isotropic chemical shift δ_iso = (1/3)(δ_11 + δ_22 + δ_33) and, similar to the Q tensor, the asymmetry parameter for the CS tensor is η_CS = (δ_22 − δ_11)/∆δ = 0. Here, we generally order the tensor components according to the convention |δ_33 − δ_iso| ≥ |δ_11 − δ_iso| ≥ |δ_22 − δ_iso|, and make use of the reduced anisotropy ∆δ = δ_33 − δ_iso [14].
To determine the CS tensor of quadrupolar nuclei with half-integer spins, such as 27 Al (I = 5/2), it is customary to trace the orientation dependency of the central transition (CT), i.e., the k = 0 transition [15]. In cases where the CT signal cannot be resolved [16], the variation of the center of the satellite transitions (and for spin I = 1, the center of the doublet with k = ±0.5 in all cases) may be traced instead (Equation (6)). For 14 N in AlN, the quadrupolar interaction to second order is negligible, and the CS tensor δ may directly be determined from the doublet centers. The CT of 27 Al in AlN is, however, affected by the quadrupolar interaction to second order, and this contribution has to be subtracted from the CT line position before δ can be determined. This second-order contribution can be written as Equation (7) [17]. After subtracting ν^(2) from the observed ν, the change of the CT resonance frequency from the Larmor frequency is solely due to the chemical shift. The line position depends on the relative orientation of the magnetic field vector b_0 to the tensor δ^CRY in the crystal frame, which may be compactly expressed by the product of Equation (8) [18]. The determination of the actual quadrupole coupling tensors Q_N, Q_Al and the chemical shift tensors δ_N, δ_Al for 14 N and 27 Al in aluminum nitride, using the above formalism, is described in the following.
27 Al Quadrupole Coupling Tensor
A single crystal of aluminum nitride with approximate dimensions of 5 × 5 × 4 mm was used for the single-crystal NMR experiments. Since the crystal was grown by a homoepitaxial growth process [8], it is possible to assign the crystal faces to crystallographic planes, as indicated in Figure 1a. It was therefore possible to fix the crystal in a specific orientation by gluing it with its (10-10) face onto the goniometer axis, which itself is perpendicular to the external magnetic field b_0. The crystal was then rotated until the [000-1] direction was parallel to b_0. Both orienting procedures involve small misalignments, which can, however, be quantified by the data analysis, as described below. Representative 27 Al NMR spectra are shown in Figure 2a, with the full rotation pattern over 180° shown in Figure 2b, which was obtained by rotating the crystal counterclockwise in steps of 15° using the goniometer gear. The satellite pairs for k = ±2, in the following denoted as ST(5/2), and k = ±1, in the following denoted as ST(3/2), are symmetrically positioned around the central transition. All 27 Al resonance lines are fairly broad, with a full width at half-maximum fwhm ≈ 9 kHz, caused by hetero- and homonuclear dipolar interactions between aluminum and nitrogen atoms in the structure [19].
The experimentally determined satellite splittings of the ST(5/2) and ST(3/2) doublets in kHz are plotted over the rotation angle ϕ in Figure 3a. The rotation patterns in both Figures 2b and 3a are mirrored at a position very close to 90°, with the mirror defining the rotation angle for which b_0 is situated in the crystallographic ab plane. The deviation ϕ_∆ of the mirror from 90° quantifies the original misalignment of the [000-1] direction to b_0. From the way the crystal is glued on the goniometer axis, we know that the rotation axis must be in the crystallographic ab plane. Also, the above considerations of the effects of crystal symmetry on the tensor structure imply that the eigenvector with the largest eigenvalue (Q_33 = χ) must point along the three-fold rotation axis, i.e., along the crystallographic c axis, which we attempted to align along b_0 for the starting point of our rotation pattern. For this situation, the angle β in Equation (4) can be replaced by β → ϕ − ϕ_∆, and the magnitude of the satellite splittings (Equation (3)) can be expressed by Equation (9). To determine the quadrupole coupling tensor Q_Al of 27 Al, the satellite splittings were simultaneously fitted according to Equation (9). The full Q_Al tensor, with the eigenvalues and corresponding eigenvectors in the PAS frame (Equation (2)), is summarized in Table 1. The quadrupolar asymmetry parameter η_Q = 0, and the orientation of the eigenvectors are a consequence of the crystal symmetry, with q_33 aligned exactly along the c axis and q_11, q_22 placed in the ab plane.
27 Al Chemical Shift Tensor
To determine the chemical shift tensor δ_Al of 27 Al, the contribution of the second-order quadrupolar interaction must be subtracted from the central transition (k = 0) line position. In Figure 3b, the 27 Al CT is plotted over ϕ, and the data points clearly show the presence of the quadrupolar-induced shift, which, according to Equation (7), contains harmonic terms depending on both cos⁴(β) and cos²(β). Using the results obtained from evaluating the splittings (χ = 1.914 MHz and ϕ_∆ = 0.65°), this second-order quadrupole shift can be calculated for each crystal orientation according to Equation (7) with β = ϕ − ϕ_∆, see red points in Figure 3b. After subtracting the quadrupole contribution from the experimental points, the remaining variation in CT line position (Figure 3b, purple) is solely caused by the chemical shift tensor, which can be determined from it. Due to the cylindrical symmetry of the tensor and the fact that it does not transform between its PAS and CRY frame (see Equation (5)), the exact orientation of the rotation axis in the crystallographic ab plane of AlN is indeterminate. For simplicity, the rotation axis can be assumed to be parallel to the b axis, and the orientation of the magnetic field vector in the CRY frame for each rotation angle ϕ can be expressed by Equation (10). Inserting this (and Equation (5)) into Equation (8) yields the expression necessary for fitting the data in Figure 3b (Equation (11)). For this fit, ϕ_∆ was kept fixed at the value derived from fitting Q_Al, and the components of the chemical shift tensor of 27 Al determined thereby are P = (107.2 ± 0.3) ppm and R = (126.3 ± 0.3) ppm, with the full tensor listed in Table 1.
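The isotropic shift and reduced anisotropy follow from the fit parameters P and R by simple arithmetic, if P is identified with the perpendicular tensor component (δ_11 = δ_22) and R with the parallel one (δ_33) — an identification inferred here from the reported numbers, not stated explicitly. A quick consistency check:

```python
def iso_and_aniso(P, R):
    """Isotropic shift and reduced anisotropy of a uniaxial CS tensor,
    assuming P is the perpendicular component (delta_11 = delta_22)
    and R the parallel component (delta_33)."""
    d_iso = (2 * P + R) / 3      # weighted trace
    d_aniso = R - d_iso          # reduced anisotropy, delta_33 - delta_iso
    return d_iso, d_aniso

d_iso, d_aniso = iso_and_aniso(107.2, 126.3)   # 27Al fit parameters in ppm
print(round(d_iso, 1), round(d_aniso, 1))      # → 113.6 12.7
```

The rounded values reproduce the reported δ_iso = 113.6 ppm and ∆δ = 12.7 ppm.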
The isotropic chemical shift of δ_iso = (113.6 ± 0.3) ppm is in good agreement with a previously reported value [4], which was determined from a polycrystalline sample of AlN under magic-angle spinning (MAS), and which, after correcting for the second-order quadrupole shift (from the reported line position of 113.3 ppm on a 600 MHz spectrometer [4], the correction of ν^(2)_ai = −(3/500)(χ²/ν_0) ≈ −0.9 ppm needs to be subtracted), comes out to δ_iso = 114.2 ppm. The chemical shift asymmetry parameter η_CS = 0 and the orientation of the chemical shift eigenvectors follow the same symmetry restrictions as for the quadrupole coupling tensor described above.
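The −0.9 ppm correction quoted above can be reproduced from the stated formula; the 27Al Larmor frequency on a 600 MHz spectrometer is scaled here from the 104.263 MHz observed on the 400 MHz instrument used in this work:

```python
def second_order_ct_shift_ppm(chi_hz, nu0_hz):
    """Angular-independent second-order quadrupolar shift of the central
    transition, nu_ai^(2) = -(3/500) * chi^2 / nu0, expressed in ppm."""
    shift_hz = -(3 / 500) * chi_hz ** 2 / nu0_hz
    return shift_hz / nu0_hz * 1e6

# 27Al Larmor frequency on a 600 MHz spectrometer, scaled from 104.263 MHz
# at 400 MHz (proportional to the field strength):
nu0_600 = 104.263e6 * 600 / 400
print(round(second_order_ct_shift_ppm(1.914e6, nu0_600), 1))   # → -0.9
```

Subtracting this −0.9 ppm correction from the literature line position of 113.3 ppm indeed gives the quoted 114.2 ppm.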
14 N Quadrupole Coupling Tensor
For the determination of the quadrupole coupling tensor Q_N and the chemical shift tensor δ_N of 14 N in aluminum nitride, the same AlN crystal (Figure 1a) and goniometer axis as for 27 Al was used. Since a change of the solenoid coil was necessary to go from the resonance frequency of 27 Al to that of 14 N, the offset angle ϕ_∆ is slightly different and needs to be determined from the data fit again. Representative 14 N NMR spectra are depicted in Figure 4a and, at first glance, appear to show much broader lines than the 27 Al spectra. In fact, with fwhm ≈ 3 kHz, the resonance lines are only about one third as broad as those of 27 Al, since the gyromagnetic ratio of 14 N is 3.5 times smaller than that of aluminum, which scales down the homonuclear contribution of the dipolar coupling. The impression of broad lines for 14 N arises chiefly because the shifts of its k = ±0.5 resonances caused by the quadrupolar interaction (≈ 300 ppm) are much smaller than those of 27 Al (≈ 8000 ppm), since these shifts scale with the quadrupolar moment of the nucleus, which is 20.44 mb for 14 N, but 146.6 mb for 27 Al [20]. The broad resonance lines of the 14 N spectra, combined with the relatively poor signal-to-noise ratio (due to the long relaxation time of T_1 = 1080 s [22]), make it difficult to precisely derive the line positions from the spectra. Therefore, all 14 N NMR spectra were deconvoluted, assuming combined Lorentz-Gauss functions (so-called Voigt profiles), to reliably obtain the line positions.
The splittings of the thus deconvoluted 14 N doublets are plotted over the rotation angle ϕ in Figure 5a. The quadrupole coupling tensor was determined by a fit of these splittings according to Equation (9) with ∆k = 1, giving the quadrupolar coupling constant χ = (8.19 ± 0.02) kHz and an offset angle of ϕ_∆ = −(0.74 ± 0.13)°. The full quadrupole coupling tensor, with the eigenvalues and corresponding eigenvectors in the PAS frame (Equation (2)), is summarized in Table 2. The quadrupolar asymmetry parameter η_Q = 0, and the orientation of the eigenvectors are identical to the Q tensor of 27 Al. So far, only an upper limit of the quadrupolar coupling constant of 14 N in AlN was available in the literature, namely χ < 10 kHz, determined from a polycrystalline powder sample [20].
14 N Chemical Shift Tensor
The chemical shift tensor of 14 N can be calculated from the evolution of the center of the doublet with k = ±0.5 over the rotation angle, as plotted in Figure 5b. Fitting the data in Figure 5b according to Equation (11), with the offset angle kept fixed at the value derived from the quadrupole coupling tensor fit (ϕ_∆ = −0.74°), gives P = −(291.6 ± 0.7) ppm and R = −(294.5 ± 0.6) ppm, with the full tensor listed in Table 2. The data in Figure 5b exhibit quite some scatter; however, it has to be kept in mind that for tracing the anisotropy of the 14 N chemical shift in aluminum nitride, we are attempting to extract variations of the order of ≈ 90 Hz from resonance lines with fwhm ≈ 3 kHz. Despite the scatter, about two thirds of all data points fall onto the CS tensor fit function within the error margins of ±1.2 ppm. The resulting isotropic chemical shift δ_iso = −(292.6 ± 0.6) ppm is in good agreement with the previously reported value of δ_iso = 64.7 ppm [4], determined from a polycrystalline powder sample under MAS and referenced to an aqueous (NH4)2SO4 solution, with the 'NH4+' solution resonance shifted −355 ppm relative to the 'NO3−' solution used here [23]. Similar to the quadrupole coupling tensor, the asymmetry of the CS tensor with η_CS = 0, as well as the eigenvector orientation, follow the symmetry restrictions of the crystal lattice.
Table 2. Quadrupole coupling tensor Q_N (left) and chemical shift tensor δ_N (right) of 14 N in the wurtzite structure of AlN, as determined from single-crystal NMR experiments. The orientations of the corresponding eigenvectors are listed in spherical coordinates (θ, ϕ) in the hexagonal abc crystal frame CRY. The errors of the experimental values reflect those delivered by the fitting routine.
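The same weighted-trace arithmetic reproduces the 14N values, and the re-referencing of the literature MAS shift can be checked as well (the identification of P and R with the perpendicular and parallel components is again inferred from the reported numbers):

```python
def iso_and_aniso(P, R):
    # weighted trace and reduced anisotropy (delta_33 - delta_iso), with P
    # taken as the perpendicular and R as the parallel tensor component
    d_iso = (2 * P + R) / 3
    return d_iso, R - d_iso

d_iso, d_aniso = iso_and_aniso(-291.6, -294.5)   # 14N fit parameters in ppm
print(round(d_iso, 1), round(d_aniso, 1))        # → -292.6 -1.9

# re-referencing the literature MAS value from the NH4+ to the NO3- scale:
lit_referenced = 64.7 - 355.0
print(round(lit_referenced, 1))                  # → -290.3
```

The re-referenced literature value of −290.3 ppm lies within a few ppm of the single-crystal result of −292.6 ppm, consistent with the stated "good agreement".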
14 N and 27 Al DFT Calculations
It has become customary within the solid-state NMR community to augment experimental results by comparing them to predictions derived from calculations using density functional theory (DFT) methods employing periodic plane waves [24]. To check how the quadrupolar coupling constants for 27 Al and 14 N derived from our precise single-crystal results compare to DFT predictions, we have performed such calculations for aluminum nitride using the CASTEP code; see Section 4.3 for computational details. Table 3 shows the quadrupolar coupling constants χ_calc determined by DFT calculations using the coordinates from X-ray diffraction data reported in the inorganic crystal structure database (ICSD) for a selection of different database entries. The variation of these entries concerns mostly the unit cell dimensions (see also below about geometry optimization), which is reflected in the varying unit cell volumes V_cell listed in the table. On the left of Table 3, the calculation results are given from directly using the ICSD coordinates, the so-called single-point energy (SPE) mode. We note that for this calculation mode, the DFT algorithm returns χ_calc values with a wide scatter, mirrored by standard deviations of 37% for 27 Al and 73% for 14 N. Whereas a single structure might accidentally give numbers for χ_calc that are practically identical to the experiment, as structure ICSD 34475 does here for AlN, a more systematic exploration would demand taking the arithmetic mean of the eight different structures. These mean values, however, deviate substantially from the experimental results. It is well documented in the literature that in order to obtain good agreement between DFT and experimental results, a geometry optimization (GO) of the crystal structure is usually necessary [31][32][33]. This was also done for AlN, taking the coordinates of the previously used ICSD database entries as a starting point.
It should be noted that for AlN, only the unit cell parameters a, b, c may be geometry optimized, since both aluminum and nitrogen atoms are situated on a crystallographic special position, Wyckoff position 2b. As may be seen from the entries on the right in Table 3, the χ_calc values after energy optimization are practically independent of the starting point, with a mean of χ_calc(27 Al) = 1.7913 MHz and χ_calc(14 N) = −19.5 kHz. This leads to small standard deviations (0.1% for 27 Al and 17% for 14 N), which seem to imply a high accuracy of the DFT results. However, the small standard deviations of the GO calculations reflect only a high precision of the computational algorithm. The accuracy of calculation results is defined by comparison to the experiment [34], and is therefore quite low, since both experimental values (especially that of 14 N) lie outside the standard deviation of the high-precision χ_calc values.
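The precision-versus-accuracy argument can be quantified: with the quoted geometry-optimized mean and its 0.1% relative standard deviation, the experimental χ(27Al) lies roughly 70 calculated standard deviations away. A minimal sketch (function name ours):

```python
def sigma_distance(mean_calc, rel_std, experimental):
    """How many calculated standard deviations separate the DFT mean from
    the experimental value (a simple precision-vs-accuracy check)."""
    sigma = rel_std * mean_calc
    return abs(experimental - mean_calc) / sigma

# geometry-optimized 27Al result: mean 1.7913 MHz with 0.1% relative std;
# single-crystal experiment: 1.914 MHz
print(round(sigma_distance(1.7913e6, 0.001, 1.914e6)))   # → 68
```

A deviation of ~68σ makes it explicit that the tight scatter of the GO calculations says nothing about their agreement with experiment.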
Aluminum Nitride
The single crystal of aluminum nitride shown in Figure 1a was grown at IKZ, using physical vapor transport of bulk AlN in a TaC crucible with radio frequency induction heating. Further details may be found in Reference [8].
Solid-State NMR Spectroscopy
Single-crystal NMR spectra were acquired on a BRUKER Avance-III 400 spectrometer at MPI-FKF Stuttgart, at Larmor frequencies of ν_0(27 Al) = 104.263 MHz and ν_0(14 N) = 28.905 MHz, using a goniometer probe with a 6 mm solenoid coil, built by NMR Service GmbH (Erfurt, Germany). The 27 Al spectra were recorded with single-pulse acquisition, four scans and a relaxation delay of 20 s. For the 14 N spectra, a spin-echo sequence [35] was employed to minimize baseline roll, and the spectra were recorded with 16 scans and a relaxation delay of 300 s. All spectra were referenced to a dilute Al(NO3)3 solution at 0 ppm. The fit of the rotation patterns and the deconvolution of the 14 N spectra were performed with the program IGOR PRO 7 from WaveMetrics Inc., which delivers excellent non-linear fitting performance.
DFT Calculations
All calculations were run with the CASTEP density functional theory (DFT) code [36] integrated within the BIOVIA Materials Studio 2017 suite, using the GIPAW algorithm [37]. The computations use the generalized gradient approximation (GGA) with the Perdew-Burke-Ernzerhof (PBE) functional [38], with the core-valence interactions described by ultra-soft pseudopotentials [37]. Integrations over the Brillouin zone were done using a Monkhorst-Pack grid [39] of 16 × 16 × 8, with a reciprocal spacing of at least 0.025 Å⁻¹. The convergence of the calculated NMR parameters was tested with respect to both the size of the Monkhorst-Pack k-grid and the basis set cut-off energy, with the cut-off energy being 1500 eV. Also, the possible contribution of pairwise dispersion interactions was checked by using the Tkatchenko-Scheffler method [40] as implemented in CASTEP, but no improvements were observed. The calculation results reported here therefore do not include dispersion interactions.
Geometry optimization (GO) calculations were performed using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm [41], with the same functional, k-grid spacings and cut-off energies as in the single-point energy (SPE) calculations. Convergence tolerance parameters for geometry optimization were as follows: maximum energy 2.0 × 10⁻⁵ eV/atom, maximum force 0.001 eV/Å, maximum stress 0.01 GPa/atom, and maximum displacement in a step 0.002 Å. Crystallographic data used in the calculations were taken from the literature listed in Table 3.
Conclusions
In this work, both the chemical shift and quadrupole coupling tensors for 27 Al and 14 N in aluminum nitride have been determined to high precision by single-crystal NMR spectroscopy. To this end, a homoepitaxially grown AlN single crystal with known morphology was used, which allowed the rotation axis to be determined by optical alignment. Because of the high symmetry of wurtzite-type AlN, one full rotation pattern was sufficient to determine the NMR-interaction tensors in the crystal frame. The three-fold rotation axis on which both atom types are located enforces colinearity of the tensor eigenvectors with the crystallographic coordinate system, which simplifies data analysis. A simultaneous fit of the ST(3/2) and ST(5/2) splittings of 27 Al gave the quadrupolar coupling constant χ(27 Al) = (1.914 ± 0.001) MHz, and fitting the 14 N doublet splitting resulted in χ(14 N) = (8.19 ± 0.02) kHz. To extract the chemical shift tensor for 27 Al, the evolution of the central transition over the crystal rotation was tracked, and the contribution of the second-order quadrupolar shift was subtracted according to the previously determined quadrupolar coupling tensor. A fit over the thus corrected central transition positions resulted in an isotropic chemical shift of δ_iso = (113.6 ± 0.3) ppm and a reduced anisotropy of ∆δ = (12.7 ± 0.6) ppm. Due to the small quadrupolar moment of 14 N, its second-order quadrupolar shift in AlN is negligible, and the chemical shift tensor was directly fitted from the evolution of the 14 N doublet centers over the rotation angle. The resulting isotropic chemical shift is δ_iso = −(292.6 ± 0.6) ppm and the reduced anisotropy is ∆δ = −(1.9 ± 1.1) ppm.
For comparison, the quadrupolar coupling parameters of 14 N and 27 Al were also calculated using the CASTEP DFT code for a variety of previously reported X-ray structures. For both calculation strategies, i.e., single-point energy (SPE, where the coordinates are directly taken from XRD), and structures which were geometry optimized (GO) by the DFT code, agreement with the experimental values was relatively poor, leaving room for further improvement of these computational methods. | 2020-01-23T16:43:09.987Z | 2020-01-22T00:00:00.000 | {
"year": 2020,
"sha1": "bea4dba6876c42ddbc3aea37fc9149f4d25b1964",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1420-3049/25/3/469/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9693866becb21236503857d0b704989b304d3cf2",
"s2fieldsofstudy": [
"Materials Science",
"Physics",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science"
]
} |
245370305 | pes2o/s2orc | v3-fos-license | Roles of the macrophages in colon homeostasis
The colon is primarily responsible for absorbing fluids. It contains a large number of microorganisms including fungi, which are enriched in its distal segment. The colonic mucosa must therefore tightly regulate fluid influx to control absorption of fungal metabolites, which can be toxic to epithelial cells and lead to barrier dysfunction. How this is achieved remains unknown. Here, we describe a mechanism by which the innate immune system allows rapid quality-check of absorbed fluids to avoid intoxication of colonocytes. This mechanism relies on a population of distal colon macrophages that are equipped with “balloon-like” protrusions (BLPs) inserted in the epithelium, which sample absorbed fluids. In the absence of macrophages or BLPs, epithelial cells keep absorbing fluids containing fungal products, leading to their death and subsequent loss of epithelial barrier integrity. These results reveal an unexpected and essential role of macrophages in the maintenance of colon-microbiota interactions in homeostasis. Résumé (translated from French). One of the main functions of the colon is to host the largest proportion of microorganisms in the human body, as well as to absorb the fluids produced by digestion. The colonic mucosa must therefore constantly cope with the arrival of potentially dangerous products. How does the peripheral immune system of the colon monitor the absorbed fluids? Macrophages have been shown to be major players in the intestinal immune system. We propose that macrophages associated with the epithelial mucosa contribute to maintaining the functions of the proximal and distal regions of the colon. We observed that macrophages in the distal regions possess "balloon-like protrusions", or BLPs, which contact epithelial cells.
Our working hypothesis is that macrophage BLPs serve as sensors that evaluate the absorbed fluids and control the level of absorption by the intestinal epithelium, in order to prevent potentially dangerous fungal metabolites from reaching the circulation. Aleksandra S. Chikina et al. ISSN (electronic): 1768-3238. https://comptes-rendus.academie-sciences.fr/biologies/
Introduction
The gut is a unique environment that is continuously exposed to food antigens but also to a rich community of micro-organisms. Its function, essential to the body, relies on the efficient absorption of nutrients and electrolytes, which is one reason why the intestinal barrier is relatively thin. However, this can become problematic for the host if toxic substances from the lumen reach the bloodstream [1]. A disruption of the intestinal barrier can therefore lead to pathological situations, ranging from nutrient deprivation [2], to inflammatory bowel disease [3], or sepsis [4], to multi-organ failure [1].
Depending on the region of the intestine, the epithelial barrier faces different challenges. In the colon, in particular, the epithelium is exposed to extreme osmotic pressure, which can reach 10 atm and is required to absorb water and solidify the faeces [5]; at the same time, the distal colon hosts enriched microbial communities, consisting of bacteria, fungi, archaea and viruses [6]. Fungi, which are particularly abundant in the distal colon [7], can produce metabolites that trigger apoptosis of intestinal epithelial cells [8], thereby potentially compromising the integrity of the intestinal barrier. This is one of the reasons why the intestinal barrier is the main target of many fungal toxins, such as gliotoxin, aflatoxin [9], trichothecenes or candidalysin.
The colonic mucosa therefore tightly regulates fluid absorption while limiting the entry of toxic fungal metabolites into the epithelial cells or even into the bloodstream. Currently, the regulatory mechanisms responsible for this balance remain unknown.
Under homeostatic conditions, macrophages are the most abundant immune population in the gut. They mainly differentiate from monocytes in response to local signals. Intestinal macrophages reside in the lamina propria, or in the muscular part of the intestinal wall where they participate in various biological processes such as the degradation of microorganisms [10], the silent elimination of apoptotic bodies [11], tissue repair [12] and gastrointestinal motility [13]. In addition, intestinal macrophages limit inflammation [14], facilitating food tolerance [15]. Their function is closely linked to the presence of microbiota [16] and explains why they are found in greater numbers in the colon than in the small intestine [17].
Interestingly, in the colon, macrophages are found in close association with epithelial cells [18], and their mislocalization is involved in the loss of intestinal barrier integrity, which is observed in inflammatory bowel diseases such as ulcerative colitis and Crohn disease [19]. Macrophages are therefore ideally positioned to orchestrate the interactions between epithelial cells and the microbiota, and thus maintain colonic homeostasis.
The main question of our work was to identify the role of colonic macrophages in the integrity and function of the intestinal epithelial barrier in homeostasis [20].
In summary, we have described a mechanism by which the innate immune system allows rapid and effective control of the quality of absorbed fluids to avoid colonocyte intoxication [20]. This mechanism relies on a population of macrophages in the distal colon equipped with "balloon-like" protrusions (BLPs) inserted into the epithelium to sample absorbed fluids. In the absence of macrophages or BLPs, epithelial cells continue to take up fluids containing fungal products, resulting in their death and subsequent loss of epithelial barrier integrity.
These results [20] revealed for the first time an unexpected and essential role for macrophages in maintaining interactions between the colonic epithelial barrier and the microbiota, playing a key role in the maintenance of intestinal homeostasis.
Macrophages control epithelial cell survival and barrier integrity in the distal colon
The intestinal immune system is highly compartmentalized, with different cell populations distributed in a gradient along the intestine. This is particularly true for macrophages (Mφs), which are present in greater numbers in the colon compared to the small intestine. Mφs in the colon contribute to the maintenance of the epithelial integrity and their absence is correlated with the development of ulcerative colitis and Crohn disease [21]. However, the mechanism(s) by which Mφs perform this homeostatic function in vivo remain unknown. To address this question, we depleted Mφs using the CD64 DTR mouse model [22] and then assessed the status of the epithelium ( Figure 1A, [20]). We analyzed the proximal and distal colon as these two areas are known to have many differences in terms of physiology and microbial composition [23]. The absence of colonic Mφs in our model was verified by flow cytometry and imaging in both colonic segments. Unexpectedly, we found that Mφ depletion led to massive apoptosis of epithelial cells in the distal but not proximal colon ( Figure 1B,C, [20]). Our results therefore suggest that Mφs promote epithelial cell survival particularly in the distal colon.
To assess the impact of epithelial cell death on intestinal permeability, we perfused CD64 WT and CD64 DTR mice intra-rectally with a hypotonic solution containing a small fluorescent molecule, hydrazide, and then measured its appearance in the blood. We found that hydrazide was more abundant in the blood of Mφ-depleted mice compared to control animals ( Figure 1D, [20]), indicating a loss of barrier integrity in these animals. Overall, these results show that Mφs are required for epithelial cell survival and intestinal barrier integrity in the distal colon.
Distal colonic macrophages insert "balloon-like" protrusions between epithelial cells
These results prompted us to investigate the tissue distribution of Mφs and/or their physical interaction with epithelial cells in the proximal and distal colon. To address this question, we performed immunostaining of the proximal and distal colon isolated from CD11c-Cre/R26mTmG mice [24]. In this mouse model, all cells that express CD11c throughout their differentiation, which include intestinal Mφs (and dendritic cells), switch from membrane-Tomato to membrane-GFP expression, allowing better visualization of these cells in the tissue. Using these animals, we observed a population of phagocytes in the distal colon that physically interacted with epithelial cells through balloon-like membrane protrusions ( Figure 1E,F, [20]), hereafter referred to as "balloon-like protrusions" or BLPs. The cell bodies of these cells were located around the opening of the crypts. Immunostaining showed that BLP + cells were bona fide intestinal Mφs (CD11b + MHCII + F4/80 + CX3CR1 + CD64 + CD103 − CX3CR1 − GFP + ) and, accordingly, they were lost in toxin-injected CD64 DTR mice [25]. Strikingly, a reduced number of BLPs was observed in the proximal colon ( Figure 1E,G), although the crypts of the proximal and distal colon contained a similar number of Mφs (determined by flow cytometry and imaging, Figure 1H, [20]). We conclude that the distal colon is enriched in Mφs equipped with balloon-like membrane protrusions inserted at the base of the epithelial cells.
Intestinal fungi increase BLP formation by macrophages
These results led us to evaluate the impact of the microbiota on BLP + Mφs [20]. The colon harbors the largest amount of microorganisms in the body, including bacteria and fungi [6]. To assess the potential effect of bacteria and fungi on BLPs formation, we targeted each population with a cocktail of broad-spectrum antibiotics or antifungal agents, respectively. No effect of antibiotics on BLPs formation was observed (Figure 2A, [20]). In contrast, two antifungal agents, fluconazole and amphotericin B, significantly decreased the number of protrusions (Figure 2A,B, [20]) without changing the total number of Mφs, as shown by flow cytometry. We have previously shown that these drugs efficiently reduce the amount of intestinal fungi in mice [26] but by different mechanisms [27]. Of note, the combination of fluconazole or amphotericin B with antibiotics showed no additive effect ( Figure 2C, [20]), suggesting that these two antifungal agents do not act indirectly on BLPs, i.e. by stimulating the expansion of colonic bacteria. These results therefore suggest that fungi residing in the distal colon may promote BLP formation in subepithelial Mφs.
To directly assess the role of fungi in the induction of BLPs, we colonized germ-free mice with the altered Schaedler flora (ASF), a well-defined community of eight bacterial species [28], or with the intestinal fungal pathobiont Candida albicans. Notably, fungal colonization induced strong BLPs formation in the distal colon of germ-free mice that were otherwise devoid of these structures, whereas bacterial colonization did not have this effect ( Figure 2D, [20]). No change in the number of subepithelial macrophages was observed. While colonization of germ-free mice with the pathobiont Candida albicans led to epithelial cell death, the bacteria had no effect ( Figure 2E, [20]). Overall, these results support a role for the mycobiota in BLPs formation in the distal colon, where these microorganisms are particularly abundant. Nevertheless, although we obtained no evidence for the involvement of bacteria in the formation of BLPs, we cannot exclude that specific bacterial species that are not present in the ASF and are resistant to the antibiotics used may also influence these protrusions.
Intestinal fungi are responsible for epithelial cell death in the distal colon of macrophage-depleted mice
So far, our results suggest that in the distal colon there is a population of CD11c high Mφs that form BLPs in response to local fungi. On the other hand, we found that Mφ depletion was associated with massive epithelial cell apoptosis in the distal colon but not in the proximal colon ( Figure 1, [20]). We therefore hypothesized that, by forming BLPs, Mφs might protect epithelial cells from fungus-induced death. Indeed, fungi have been shown to produce many products deleterious to the host, including toxins and metabolites [8].
To test this hypothesis, we pre-treated CD64 DTR mice with antifungal agents prior to Mφs depletion and assessed the impact of such treatment on epithelial cell survival. Strikingly, we found that antifungal treatment rescued epithelial cells from death in animals without Mφs ( Figure 2F, [20]). The number of apoptotic colonocytes in mice treated with fluconazole or amphotericin B before Mφs depletion was comparable to those observed in mice containing Mφ. In contrast, epithelial cell death was not rescued when Mφ-depleted mice were treated with antibiotics (Figure 2F, [20]). We conclude that fungi are most likely responsible for the epithelial cell death observed in the distal colon of Mφ-depleted mice. These results further suggest that Mφs protect epithelial cells from fungus-induced cell death.
BLP + macrophages sample fluids absorbed by epithelial cells
How do Mφs detect fungi in the distal colon? Structural characterization of BLPs showed that they were filled with epithelial cell membranes and enriched in endo-lysosomal compartments. We therefore hypothesized that BLPs could sample fungal metabolites/toxins indirectly, through the fluids absorbed by the epithelial cells. Indeed, the epithelial cells of the distal colon have specific mechanisms to optimize water absorption and facilitate stool dehydration [29]. Such mechanisms could allow BLP + Mφs to sample the environment in the absence of direct contact with the local microbiota. To assess whether BLP + Mφs respond to fluid uptake by epithelial cells, we infused CD11c-Cre/R26mTmG mice intra-rectally with a hypotonic solution. We found that such treatment increased the number of BLPs as quickly as 10 min after infusion ( Figure 3A, adapted from [20]), which returned to steady state within 30 min. This increase was also observed when fluid uptake was stimulated by injection of aldosterone, the corticosteroid hormone that increases sodium and thus water absorption specifically in the distal colon. In contrast, when laxative treatment (bisacodyl) inhibited water absorption, we observed a significant decrease in the number of BLPs ( Figure 3B, adapted from [20]). Notably, this decrease was abrogated when the mice were pre-treated with indomethacin, which inhibits the action of bisacodyl [30]. Strikingly, monitoring of the fluid absorbed by epithelial cells using Alexa-633-labelled hydrazide (a low molecular weight compound of 0.5-1.5 kDa) showed significant accumulation of the dye within the BLPs as quickly as 5 min after infusion ( Figure 3C, adapted from [20]). These results show that fluid absorbed through the distal colonic epithelium stimulates BLPs formation in the associated Mφs, in which this fluid accumulates.
Most importantly, we found that BLP formation during fluid uptake was increased when fungi were present while the efficiency of epithelial fluid uptake was unchanged, supporting our hypothesis that BLPs sample fluids for the presence of fungal products.
Mφs protect epithelial cells from poisoning by fungal toxins
To directly test this hypothesis, we searched for a molecule to use as a generic fungal metabolite, toxic to epithelial cells when concentrated. We turned our attention to gliotoxin. Indeed, this fungal metabolite has been shown to induce apoptosis of epithelial cells [8] and can be produced by both pathobionts and food spoilage fungi such as Penicillium chrysogenum [31], which is abundant in the murine intestinal tract [26]. It should be noted that gliotoxin has also been reported to be produced by Candida spp., including Candida albicans, although there are conflicting studies on this [32,33]. To determine whether Mφs can detect fungal metabolites in fluids taken up by epithelial cells, we infused fungus-depleted C57BL/6J mice with a hypotonic solution containing or lacking gliotoxin ( Figure 3D, adapted from [20]). We found that the gliotoxin-containing solution stimulated BLPs formation as early as 5 min after infusion ( Figure 3E, adapted from [20]). While the epithelium continued to absorb the hypotonic solution without gliotoxin, it stopped absorbing the gliotoxin-containing solution 20 min after infusion ( Figure 3F, adapted from [20]). Similar results were obtained using two other fungal toxins: candidalysin from the pathobiont Candida albicans and T-2 toxin from the commensal Fusarium sporotrichioides [26] ( Figure 3G, adapted from [20]). These data show that epithelial cells detect and stop absorbing fluids poisoned by fungal toxins.
To determine whether this is an intrinsic ability of epithelial cells or relies on the presence of BLP + Mφs, we performed a similar experiment with CD64 WT or CD64 DTR mice injected with diphtheria toxin (DT) and perfused with the hypotonic solution containing gliotoxin ( Figure 3H, adapted from [20]). Remarkably, we found that while DT-injected CD64 WT mice stopped absorbing the gliotoxin-containing solution, Mφ-depleted CD64 DTR mice continued to absorb it ( Figure 3I, adapted from [20]). To assess whether this uncontrolled uptake of gliotoxin had long-term effects on epithelial homeostasis, we perfused CD64 WT and CD64 DTR mice with the hypotonic gliotoxin-containing solution and sacrificed them 6 h later. In these experiments, epithelial cells of the distal colon underwent massive apoptosis in the absence of Mφs, confirming that these phagocytes protect epithelial cells against poisoning by fungal toxins ( Figure 3J, adapted from [20]). We conclude that in the distal colon, Mφs confer on epithelial cells the ability to recognize toxic fluids and stop absorption, maintaining epithelial integrity and local homeostasis.
Discussion
The intestinal barrier separates the intestinal lumen from the internal environment. It acts as a selectively permeable filter that allows the absorption of nutrients, electrolytes and water, which can then reach the systemic circulation. On the other hand, as the intestinal lumen also contains many toxic substances produced by the microbiota, absorption must be tightly regulated to avoid intoxication and host disease. Indeed, dysregulation of intestinal barrier permeability is a major cause of sepsis-related mortality in critically ill patients and in inflammatory bowel disease. While the mechanisms of intestinal permeability regulation have been extensively studied in the small intestine, little is known about the colon, whose main physiological function is to absorb fluids but which also contains a very high load of microorganisms.
We have described a mechanism where a particular population of subepithelial macrophages, which rapidly control the quality of absorbed fluids, maintains the integrity of the intestinal barrier.
To do this, the Mφs use BLPs inserted at the base of the epithelium, which sample the fluids absorbed by colonocytes. If the fluids are overloaded with fungal metabolites/toxins, the Mφs instruct the epithelial cells to stop the uptake, preventing epithelial cell poisoning and death. This could, for example, occur through the secretion of prostaglandin E2 (PGE2) by Mφs, which decreases the apical localization of aquaporins in epithelial cells in vitro [30]. In the absence of Mφs or BLPs, epithelial cells absorb fluids irrespective of their fungal toxin/metabolite load and undergo apoptosis, thus compromising barrier integrity.
These results suggest that in homeostasis, intestinal barrier permeability is differentially regulated depending on the local physiological roles of the specific gut segment and its microbial content. Mφs thus emerge as key players in orchestrating such regulation. In the small intestine, Mφs form transient transepithelial dendrites visible by live imaging. In response to microbial signals, these projections extend between epithelial cells and reach the intestinal lumen where they capture bacteria or food antigens. They express tight junction proteins to form adhesions with epithelial cells when crossing the barrier, thus maintaining the integrity of the epithelium. They therefore appear to be different from the colonic BLPs described here. Although BLPs also penetrate the basement membrane and occupy the intercellular space of the epithelium, they do not directly contact the colonic lumen. These differences are consistent with the distinct properties of the epithelia in the small and large intestine. While paracellular permeability is high in the small intestine epithelium, reflecting its physiological role in nutrient absorption and the establishment of food tolerance, the colonic epithelium has limited paracellular permeability. This helps the colonic epithelium to withstand the local mechanical stresses imposed by high osmotic pressure and stool solidification, preventing loss of barrier integrity. Mφs could therefore have developed an alternative sampling strategy in the context of such a tightly sealed colonic epithelium. They form BLPs, which sample absorbed fluids through or between epithelial cells, rather than directly engulfing the contents of the colonic lumen, which contains a large number of microorganisms.
Although the paracellular pathway allows sampling of absorbed fluid across the epithelium, a process necessary for stool formation, the transcellular pathway could detect transient barrier leakage events resulting from mechanical stress by stretching and shearing. These results suggest that the sampling mechanisms of the peripheral immune system are adapted to both local signals and physiological functions of the intestinal segment. Unexpectedly, we found that BLP + Mφs in the distal colon respond to fungal products. Similar results were observed with two different antifungal agents, fluconazole and amphotericin B, which target distinct fungal species by different mechanisms [34]. These results are consistent with fungi being particularly enriched in this intestinal segment [7]. We obtained no evidence that bacterial compounds stimulate BLPs formation. However, we cannot exclude the possibility that bacteria insensitive to the antibiotics used or absent from the ASF flora may still stimulate BLP formation. How do subepithelial BLP + Mφs detect fungal products?
The distal colon contains a thick layer of mucus that physically separates the microbiota from the epithelial cells; the inner part of the mucus is sterile [35]. Therefore, in homeostasis, there is no contact between the subepithelial Mφs and the colonic lumen, and Mφs use BLPs to sample the fluids taken up by the epithelial cells, which carry the full spectrum of fungal metabolites, providing a complete picture of the local composition of the mycobiota. Whether BLP + Mφs directly detect fungal products absorbed by the epithelium or detect stress compounds released from poisoned epithelial cells requires further investigation. This is in striking contrast to what happens in response to barrier disruption or in the presence of invasive fungal species: in this case, Mφs physically come into contact with fungi and use Dectin-1 to mount effective anti-fungal immune responses [7,36]. Similar results were obtained using three different fungal toxins: gliotoxin, which is produced by both pathobionts and commensals, candidalysin, from the pathobiont Candida albicans, and T-2 toxin, from the commensal Fusarium sporotrichioides. In all cases, the number of BLPs increased upon toxin inoculation in fungus-free mice, indicating that BLP + Mφs recognise all three fungal metabolites. These results suggest that BLPs respond not only to fungal compounds produced by pathogenic species, but to a wide variety of fungal metabolites. The BLPs response may therefore be critical not only for detecting potentially dangerous fungal species, but also for detecting the overgrowth of commensal fungi, whose metabolites could compromise the survival of epithelial cells if they are too abundant. Consistently, we found that commensal fungi are indeed responsible for the apoptosis of epithelial cells in Mφ-depleted mice. Defining the precise nature of the commensal fungi involved will require further investigation.
An interesting candidate is the commensal species Fusarium sporotrichioides, as it is targeted by both fluconazole and amphotericin B and produces T-2 toxin. How BLP + Mφs resist the toxins remains an open question; it could be envisaged that BLPs keep the toxins away from the cell body of these cells. In conclusion, we describe here a previously unknown homeostatic function of CD11c high subepithelial Mφs in the distal colon: they help the epithelium to maintain its integrity in an environment subjected to high physical and chemical stresses resulting from osmotic pressure, faecal solidification and a high microbial load. How BLP + Mφs instruct epithelial cells to take up or not take up fluids in homeostasis, and whether alterations in these mechanisms lead to pathologies such as inflammatory bowel disease and cancer, shall next be addressed.
Conflicts of interest
Authors have no conflict of interest to declare. | 2021-12-22T16:23:46.678Z | 2021-12-20T00:00:00.000 | {
"year": 2021,
"sha1": "a53ae7dc0e44ca73277e50ea3469fd88766c66df",
"oa_license": null,
"oa_url": "https://comptes-rendus.academie-sciences.fr/biologies/item/10.5802/crbiol.67.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "ebc53651e11ea3af635257c8ce18650179fb1263",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
246832640 | pes2o/s2orc | v3-fos-license | COVID-19, Vaccines, and Thrombotic Events: A Narrative Review
The coronavirus disease 2019 (COVID-19), a deadly pandemic that has affected millions of people worldwide, is associated with cardiovascular complications, including venous and arterial thromboembolic events. Viral spike proteins, in fact, may promote the release of prothrombotic and inflammatory mediators. Vaccines, coding for the spike protein, are the primary means for preventing COVID-19. However, some unexpected thrombotic events at unusual sites, most frequently located in the cerebral venous sinus but also splanchnic, with associated thrombocytopenia, have emerged in subjects who received adenovirus-based vaccines, especially in fertile women. This clinical entity was soon recognized as a new syndrome, named vaccine-induced immune thrombotic thrombocytopenia, probably caused by cross-reacting anti-platelet factor-4 antibodies activating platelets. For this reason, the regulatory agencies of various countries restricted the use of adenovirus-based vaccines to some age groups. The prevailing opinion of most experts, however, is that the risk of developing COVID-19, including thrombotic complications, clearly outweighs this potential risk. This point-of-view aims at providing a narrative review of epidemiological issues, clinical data, and pathogenetic hypotheses of thrombosis linked to both COVID-19 and its vaccines, helping medical practitioners to offer up-to-date and evidence-based counseling to their often-alarmed patients with acute or chronic cardiovascular thrombotic events.
Introduction
Since the beginning of the SARS-CoV-2 pandemic and the consequent coronavirus disease 2019 (COVID-19), as of 6 February 2022, more than 394 million cases and over 5.7 million deaths have been documented in the world [1], with devastating socio-economic, physical, and psychological consequences for communities. The pathophysiology of SARS-CoV-2 infection displays the predominance of hyperinflammation and immune dysregulation (cytokine release syndrome) in inducing multiorgan damage, predisposing patients to thrombotic and thromboembolic events due to endothelial cell activation and injury, platelet activation, and hypercoagulability [2]. Vaccines are the primary modality to prevent the disease from spreading. In 2020, an international race for developing vaccines against SARS-CoV-2 started [3]. However, like COVID-19, even the vaccines employed for its prevention have been associated with unexpected thrombotic events.
This paper aims to explore the complex relationships between COVID-19, its vaccines, and thrombotic diseases. Because of the low level of available evidence, and the continuous evolution of knowledge in this field, this is an interim document, based only on expert opinion consensus.
COVID-19 and Thrombosis
There are numerous relationships between cardiovascular disease and COVID-19. The presence of previous cardiovascular diseases is associated with a higher frequency of adverse outcomes in COVID-19, proportionally to the severity, extent, or symptoms of coronary lesions [4,5]. Conversely, among COVID-19 hospitalized patients, and in severe cases in general, a wide range of acute heart diseases, such as arrhythmias, fulminant myocarditis, acute heart failure, cardiogenic shock, pulmonary embolism (PE), or acute coronary syndromes (ACS), was commonly found [6][7][8][9][10][11][12][13][14]. Since March 2020, thrombo-embolic events have been increasingly described in the literature, with an incidence reaching 14% of hospitalized patients in surveillance wards and between 17% and 50% of patients in intensive care units [15]. Among 533 hospitalized patients with thrombotic events, an acute myocardial infarction (AMI) was present in more than half of the cases [16]. It is not surprising that COVID-19 can increase the ACS risk, as oxygen starvation, resulting from respiratory distress, and increased oxygen demands, occurring in response to infections, may cause a mismatch between oxygen supply and demands [17]. Local inflammation and hemodynamic changes may also increase the risk of the rupture of an atherosclerotic plaque [18,19]. An ST elevation MI (STEMI) may be the first clinical manifestation of COVID-19, but about a third of these patients do not present obstructive coronary artery disease [12,20,21] or angiographic signs of plaque rupture [20,22]. This finding highlights the potential role of endothelial dysfunction and hypercoagulation status [23][24][25]. 
Infection, hemodynamic stress of an acute critical pathology, inflammation (up to the typical hyper-reactive immune response that manifests itself with the cytokine storm) and fever, in fact, may favor a prothrombotic state, also interfering with the ability to dissolve thrombi, and may cause early or late instability and ruptures of coronary plaques and thrombosis [16,26,27]. High levels of IL-6, IL-1B, and IL-8, in fact, have been associated with plaque instability and increased thrombotic risk. Furthermore, IL-6 is involved in the stimulation of matrix-degrading enzymes such as matrix metalloproteinases, and may contribute to ACS development [28]. A STEMI could also be attributable to microthrombi formation [29].
Studies to date suggest that the underlying pathophysiology of COVID-19-associated cardiac injury may be multi-factorial, as it can derive from both systemic perturbations (hyper-inflammation and thrombophilia) and potential direct cardiotoxic effects of SARS-CoV-2 due to disruption of the renin-angiotensin system, microangiopathy via endothelial cell/pericyte involvement (akin to parvovirus), or cardiomyocyte damage [40].
The rise in the incidence of thrombosis in large and small vessels can be explained by the presence of multiple factors, namely, the stasis of flow due to prolonged bed immobilization, vessel wall damage secondary to the loss of the normal thromboprotective state of the endothelium (due to inflammation and irritation caused by central venous catheters), hypercoagulable state caused by sepsis and endothelial activation due to the virus itself, thrombophilic inflammation responsible for the increase of von Willebrand factor and factor VIII, and neutrophil/platelet activation [41,42]. Venous thromboembolism is further favored by the hemodynamic effects of prolonged mechanical ventilation [38].
The activation of coagulation during systemic inflammation caused by different infectious agents is very complex and can occur through different mechanisms, involving polyphosphates derived from platelets activated by microorganisms, mast cells and factor XII, the complement system, and components of neutrophil extracellular traps (NETs), net-like meshes whose purpose is to trap viruses [43]. However, COVID-19-induced coagulopathy is different from that induced by sepsis, leading to extensive micro- and macro-vascular thrombosis and organ failure [44,45]. In addition, the type and rate of thrombosis can vary according to the cause of pneumonia: community-acquired pneumonia is more frequently complicated by arterial thrombosis, while venous and arterial thromboses occur with equal incidence in SARS-CoV-2 infection [46].
It has been hypothesized that SARS-CoV-2 infection induces an immuno-thrombosis, in which neutrophils and activated monocytes interact with platelets and the coagulation cascade [38,47]. The main triggers of the signaling pathways that produce inflammatory cytokines are the toll-like receptors, which recognize viral nucleic acids, and the ACE-2 receptors, which the virus uses to infect cells. The coagulation alterations are mainly mediated by the activation of platelets [48]. The procoagulant effect of hypoxia should also be considered. A summary of hypothesized thrombotic mechanisms after COVID-19 is shown in Figure 1. A prothrombotic state can also be found in long COVID-19, due to the residual persistence of inflammatory and procoagulant blood markers [49]. The most frequently reported coagulation abnormality, especially in the most severe patients, is the elevation of D-dimer, but there are also increases in fibrinogen and its degradation products, PAI-1, and von Willebrand factor, as well as low levels of antithrombin III and antiphospholipid antibodies, known for their thrombophilic effect [29,39,45,50-52].
A simultaneous occurrence of CSVT and immune thrombocytopenic purpura has also been reported [53]. In COVID-19, thrombocytopenia is frequent and is associated with a worse prognosis. A meta-analysis showed that the most severe cases of COVID-19 present a significant decrease in platelet counts (down to about 10,000) [46]. The pathogenesis of thrombocytopenia is probably related to the overactivation of platelets by complement through the generation of procoagulant microparticles and the insertion of C5b-9 in lytic quantities on platelets, in the absence of complement regulators [38]. Activated platelets also express a functionally active tissue factor (TF) that can trigger the coagulation cascade [54]. The resulting thrombosis leads to platelet consumption.
Anti-COVID-19 Vaccines
The natural history of COVID-19 can only be changed with the extensive use of vaccination. Favorable results from rigorous randomized, controlled phase III trials have been published for the Pfizer-BioNTech [56], Moderna [57], AstraZeneca/Oxford [58], and Johnson & Johnson/Janssen Cilag [59] vaccines, as well as the Russian Gam-COVID-Vac and the Novavax vaccine [60,61]; Table 1 shows the main COVID-19 vaccines. The Pfizer-BioNTech and Moderna vaccines are based on messenger RNA (mRNA). AstraZeneca/Oxford uses a modified chimpanzee adenovirus to contain the gene for spike glycoprotein (S) production; Janssen Cilag uses the modified human serotype 26 adenovirus vector in a single administration, which encodes the complete S sequence by stimulating both neutralizing anti-S antibodies and other functional anti-S antibodies, as well as direct cellular immune responses. Sputnik uses two different adenoviruses for the two doses of vaccine, and Novavax has produced a protein-based vaccine containing tiny particles obtained from a recombinant version of protein S. Vaccines derived from chemically inactivated cultured viruses are produced by Sinopharm and Sinovac and are available in China [61].
To date, 240 candidate vaccines have been registered by the WHO, 63 of which are in the clinical evaluation phase, 177 in the preclinical phase, and 111 authorized for use in at least one country [61]. As of 26 December 2021, a total of 8,948,475,404 vaccine doses have been administered [1].
Medical practitioners are still required to make a special effort to promote the vaccination of patients with cardiovascular diseases. However, reports of thrombotic events in conjunction with some vaccines have caused much concern and even panic among the population and the medical community [61,62]. These serious, albeit rare, vaccination-related side effects, in our opinion, require further reflection, in anticipation of having to recommend vaccination even to patients recovering from recent arterial and venous thrombotic episodes not related to COVID-19.
Anti-COVID-19 Vaccines and Thrombosis
In the initial phase III clinical trials [56][57][58][59], no major safety warnings, including thrombosis, were reported, apart from rare cases of anaphylaxis. In addition, a systematic review of the safety of vaccines in pivotal trials indicates that they are safe and without serious adverse events [63]. However, it is not surprising that new reports of adverse events emerge as more people are vaccinated and follow-ups become more extensive [64][65][66]. In March 2021, three descriptions of a new syndrome, characterized by thrombosis in unusual locations (CSVT, splenic vein thrombosis (SVT), thrombosis of the portal, mesenteric, or hepatic veins) and thrombocytopenia 4-28 days after the first dose of the AstraZeneca/Oxford vaccine, were published [67][68][69] (11 patients in Germany and Austria, 23 in the United Kingdom, and 5 in Norway). These subjects were typically healthy or clinically stable, but about 40% of patients died, either from cerebral ischemia or from superimposed hemorrhage. Most were women under the age of 50.
These reports were followed by many other articles on various events after administration of the AstraZeneca/Oxford vaccine, including DVT, PE, or acute arterial thrombosis at various levels, cerebral arterial thromboembolism, and thrombotic microangiopathy.
Although adverse effects were observed more frequently in females younger than 60 years [123], the European Medicines Agency (EMA) does not consider age and gender as significant risk factors, as the scarcity of data precludes robust estimates. Conversely, known risk factors are the use of estrogen-containing drugs and pregnancy [124].
A comparison of the frequency of serious adverse events (SAEs) demonstrated a lower frequency of thrombocytopenia and SAEs in young adults and a higher frequency in older Ad26.COV2.S recipients. After vaccination with the AstraZeneca/Oxford vaccine, 7 cases of disseminated intravascular coagulation (DIC) were also observed among around 20 million subjects in the UK and Europe [140], and a link with the vaccine was considered possible.
A second vaccine that has been associated with the appearance of thrombosis is the Janssen Cilag one [135,[141][142][143][144][145][146], so it has been hypothesized that viral vectors could play a role. We do not have data on the thrombotic risk of another adenoviral vaccine, the Sputnik V [61].
The lack of data on the total number of patients who received a particular vaccine and, in particular, of reliable denominators stratified by age and sex does not allow a direct comparison between the different vaccines [164]. In the U.S., reporting rates for thrombosis with thrombocytopenia were 3.83 per million vaccine doses for Ad26.COV2.S and 0.00855 per million vaccine doses for mRNA-based COVID-19 vaccines [165]. Recent meta-analyses suggest that approximately half of patients with thrombosis and thrombocytopenia syndrome present with CVST [166], that vaccines against SARS-CoV-2 are not associated with an increased risk of thromboembolism, hemorrhage, or thromboembolism-/hemorrhage-related death [167], and that the prevalence of thrombotic thrombocytopenia following ChAdOx1-S was 0.73 per 100,000 [168]. These data also suggest the importance of finding further mediators of this aberrant immune response beyond the adenoviral sequences or other components of the AstraZeneca and Janssen Cilag vaccines [139].
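As a purely arithmetic illustration (not a statistical comparison, given the denominator caveats just noted), the fold-difference between the two U.S. reporting rates can be computed directly:

```python
# Fold-difference between the U.S. reporting rates for thrombosis with
# thrombocytopenia quoted above (per million vaccine doses). This is raw
# arithmetic on passive-surveillance figures, not a controlled comparison.
ad26_rate = 3.83      # Ad26.COV2.S
mrna_rate = 0.00855   # mRNA-based vaccines

fold_difference = ad26_rate / mrna_rate
print(f"~{fold_difference:.0f}x higher reporting rate for Ad26.COV2.S")  # ~448x
```

Such a ratio should be read with caution, since reporting rates depend on how often each event is recognized and reported, not only on its true incidence.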
Regarding myocardial infarction, there are reports after AstraZeneca/Oxford [174] and, in a 96-year-old woman, after Moderna [175]. It is plausible that the stress of receiving the vaccine, as well as the reported adverse events (injection site pain, asthenia, nausea and vomiting, fever), may trigger an increased oxygen demand in the presence of an unknown coronary atherosclerotic burden. However, such a patient would likely have had a poor prognosis in the case of SARS-CoV-2 infection. In a study of 126,661,070 vaccinated subjects, the incidence of heart attack increased with age (very rare in children, rare in women aged 30 to 54 years, uncommon in men and women aged 55 to 84 years, and common in those over eighty-five) [176].
Another point is the incidence of bleeding events. Out of more than 30 million vaccinated, the UK Medicines and Healthcare Products Regulatory Agency (MHRA) reported 267 hemorrhages (including 6 fatal) with AstraZeneca/Oxford and 220 (9 fatal) with BioNTech/Pfizer, and in the VAERS database, out of more than 110 million vaccinated, 439 hemorrhagic events were reported with the BioNTech/Pfizer and Moderna vaccines [61].
Prognostic, Preventive, and Therapeutic Aspects
In a systematic review of the outcomes of patients with thromboembolic events following the AstraZeneca vaccine, 39 out of 146 patients died [177]. A recent systematic review and post hoc analysis of 25 studies with a total of 69 patients investigated prognostic predictors in vaccine-associated thrombosis. Platelet nadir (p < 0.001), arterial or venous thrombi (χ² = 41.911, p = 0.05), and chronic medical conditions (χ² = 25.507, p = 0.041) were statistically associated with death. The ROC curve analysis identified D-dimer (AUC = 0.646) and platelet nadir (AUC = 0.604) as predictors of death [178]. Additionally, in a multicenter British cohort study, CSVT was more severe in the context of thrombocytopenia [179]. In an international registry of consecutive patients with CVST within 28 days of SARS-CoV-2 vaccination from 81 hospitals in 19 countries [180], fibrinogen levels, age, platelet count, and the presence of intracranial hemorrhage (ICH) were significantly associated with mortality, and the FAPIC score comprising these risk factors could predict mortality [181]. In a prospective cohort study in the United Kingdom, the odds of death increased by a factor of 2.7 (95% CI 1.4 to 5.2) among patients with CSVT, by a factor of 1.7 (95% CI, 1.3 to 2.3) for every 50% decrease in the baseline platelet count, by a factor of 1.2 (95% CI, 1.0 to 1.3) for every increase of 10,000 units in the baseline D-dimer level, and by a factor of 1.7 (95% CI, 1.1 to 2.5) for every 50% decrease in the baseline fibrinogen level; the observed mortality was 73% among patients with platelet counts below 30,000 per cubic millimeter and ICH [182]. In a recent systematic review on thrombosis with thrombocytopenia after adenoviral vaccines, the mortality rate was 36.2%; patients with suspected TTS, venous thrombosis, CVST, pulmonary embolism, or intraneural complications, patients not managed with non-heparin anticoagulants or i.v. immunoglobulins, those receiving platelet transfusions, and those requiring intensive care unit admission, mechanical ventilation, or neurosurgery were more likely to die than to recover [183].
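As a rough arithmetic sketch of how the UK cohort's multiplicative odds factors scale, the factors can be combined for a hypothetical patient. This is our illustration only: multiplying univariable odds factors assumes independence, which the study does not claim, and this is not a validated prognostic score.

```python
# Hypothetical illustration: combining the multiplicative odds factors for
# death reported in the UK cohort [182]. Assumes (unrealistically) that the
# factors act independently; this is not a validated prognostic score.
odds_csvt = 2.7              # patient has CSVT
odds_platelet_halving = 1.7  # per 50% drop in baseline platelet count
odds_per_10k_ddimer = 1.2    # per 10,000-unit rise in baseline D-dimer

# Hypothetical patient: CSVT, platelets halved, D-dimer up by 20,000 units.
combined = odds_csvt * odds_platelet_halving * odds_per_10k_ddimer ** 2
print(f"combined odds multiplier: ~{combined:.1f}")  # ~6.6
```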
Regarding prophylaxis with antithrombotic drugs, there is no scientific evidence that aspirin or low molecular weight heparin reduce the risk of thrombotic events in subjects undergoing vaccination against COVID-19 with adenoviral vaccines, whereas they carry a greater, well-quantifiable, and relevant risk of serious adverse events such as hemorrhage [184]. Obviously, these drugs may be continued in patients already being treated with them.
Patients with thrombocytopenia after vaccination respond favorably to immunotherapy with intravenous steroids and immunoglobulins [169], whose possible benefits include blocking Fcγ receptor IIa (FcRγIIA), neutralizing anti-platelet factor-4 (PF4) antibodies by anti-idiotype antibodies, facilitating the catabolism of anti-PF4 antibodies, and modulating the immune cell compartment, including B cells that produce anti-PF4 [185].
Coagulopathies, including thromboses, thrombocytopenia, and other related side effects, are likely correlated to an interplay of the two components of the vaccine, i.e., the spike antigen and the adenoviral vector, with the innate and adaptive immune systems, which under certain circumstances can mimic a limited form of the COVID-19 pathological picture [207,208]. Circulating platelets serve as a reservoir of immunomodulatory molecules [201]. However, no significant activation of fibrinogen-driven coagulation, plasma thrombin generation, or clinically meaningful platelet aggregation after AstraZeneca/Oxford or BNT162b2 vaccination was observed [209].
The mechanisms behind this are currently a subject of active research and include the following: (1) the production of PF4-polyanion autoantibodies; (2) adenoviral vector entry in megacaryocytes and the subsequent expression of spike proteins on platelet surfaces; (3) direct platelet and endothelial cell binding and activation by the adenoviral vector; (4) activation of endothelial and inflammatory cells by the PF4-polyanion autoantibodies; (5) the presence of an inflammatory co-signal; (6) the abundance of circulating soluble spike protein variants following vaccination [210].
The first important pathophysiological mechanism has been hypothesized by the Greifswald Working Group led by Andreas Greinacher. The constellation of thrombosis and thrombocytopenia has led to the hypothesis of a condition similar to heparin-induced thrombocytopenia (HIT), in which specific IgG antibodies recognize the multimolecular complex between cationic PF4 and polyanionic heparin as foreign, causing multicellular activation and, in particular, the activation of monocytes and platelets through their FcRγIIA receptor, with a release of procoagulant metalloproteinases, as well as direct activation of the endothelium by antibody complexes, leading to increased thrombogenicity with the release of P- and E-selectin, von Willebrand factor, interleukin 6, and thrombin, with consequent thrombocytopenia from platelet consumption and severe thrombogenicity [186-188,211]. A definite diagnosis of HIT requires the demonstration of anti-PF4-heparin antibodies. A similar mechanism could be implicated in the antiviral response to SARS-CoV-2, considering that anti-PF4-heparin antibodies have been detected in patients with COVID-19. Moreover, Terpos et al. [212] detected non-platelet-activating anti-PF4 antibodies in 67% of vaccinated individuals following the first dose of the AstraZeneca/Oxford vaccine. Vaccination can, therefore, probably induce the formation of antibodies against platelet antigens as part of the inflammatory reaction and immune stimulation; the adenoviral epitopes used in vaccines also have a strong affinity for PF4, mimicking the effect of heparin. This allows PF4 tetramers to cluster and form immune complexes through electrostatic interaction, which, in turn, causes massive FcγRIIa (also known as CD32a)-dependent platelet activation [213], increased TF expression, and subsequent thrombin generation [190,191,204,214], regardless of the presence of heparin.
However, it is unclear whether PF4 is a mere bystander within an immune complex that activates platelets, or directly contributes to the formation of the thrombus. The delay in the production of these autoantibodies would explain the appearance of adverse reactions 4-14 days after vaccination [61]. Since none of the patients had been exposed to heparin, the name autoimmune HIT was proposed for the syndrome [67]. Other proposed definitions are vaccine-induced prothrombotic immune thrombocytopenia (VIPIT) and vaccine-induced immune thrombotic thrombocytopenia (VITT), which is the most widely used. VIPIT, occurring after vaccination, may resolve into an asymptomatic state or may progress to severe clinical complications, as in VITT [215]. Several international scientific societies have recently published recommendations on the diagnosis and management of VITT [64,216-221]. Suspected VITT should prompt testing for anti-PF4 antibodies [222].
Second, it has been hypothesized that even an accentuated immune response, a mechanism that mimics the effect of active COVID-19, could represent a thrombotic trigger [164]. The infection induces the activation of neutrophils and monocytes, with the release of leukocyte DNA, which interacts with platelets and the coagulation cascade, leading to intravascular formation of thrombi in large and small vessels [192,193]. As mentioned before, during viral infections, and particularly in the case of SARS-CoV-2, one of the adaptive responses of the innate immune system (not selective but immediate) is the production of NETs by neutrophils via IL-1β/NLRP3 inflammasome activation [223,224]. Although NETs are useful and effective, numerous studies have shown their association with thrombosis [202]. The S protein expressed by vaccines also activates the complement system and can induce a cellular and humoral immune cascade against the virus, favoring thrombosis [194]. Disproportionate inflammation can also increase endothelial adhesion and the release of TF, the true trigger of thrombin generation, a key enzyme in coagulation [187]. Although most of the reports on VITT have focused on the role of platelets, it is likely that VITT pathogenic antibodies bind and activate other cells that express FcγRIIa, notably leucocytes and endothelial cells. The association between thrombocytopenia and often multiple thrombotic complications with a rapidly worsening clinical course is known to occur in other syndromes on an autoimmune basis, such as antiphospholipid syndrome, already demonstrated in COVID-19 [196,206], or thrombotic thrombocytopenic purpura.
According to a further hypothesis [191,225], an accidental injection of the vaccine into a vein, even in small quantities, or multiple exudations over time, can culminate in high levels of adenoviruses in the blood, which, although not replicating, can infect permissive cells such as epithelial or endothelial ones and fibroblasts, which can process large amounts of S glycoproteins, leading to high levels of antigens against them. This is not possible in the case of mRNA vaccines, since lipid nanoparticles cannot survive in the enzymatically hostile plasma environment and are rapidly eliminated by the reticulo-endothelial system.
Genetic vaccines could instead directly infect platelets and megakaryocytes, causing mRNA translation and intracellular synthesis of S proteins, which would trigger an autoimmune response against these elements, causing reticulo-endothelial phagocytosis and direct lysis by CD8 T cells. When a vaccinated cell dies or is destroyed by the immune system, in addition, its debris can release a large amount of whole or fragmented S proteins into the blood. In a subject with previous SARS-CoV-2 infection or with cross-reactive antibodies to common coronaviruses, a large volume of immune complexes can form shortly after vaccination with adenovirus-based vaccines, but also with mRNA ones [190]. The IgG against these immune complexes can be glycosylated in an aberrant way (e.g., afucosylated), as is the case in the most severe cases of COVID-19.
Finally, it is known that SARS-CoV-2 uses ACE-2 as a Trojan horse to invade target cells. Vaccines have the potential to interact with ACE-2, promoting its internalization and degradation, a phenomenon also observed in platelets, in which subunit 1 of protein S, but not subunit 2, binds to ACE-2, inducing a dose-dependent facilitation of aggregation and release of adenosine triphosphate [192]. The loss of ACE-2 receptor activity from the outer side of the cell membrane, mediated by the interaction between ACE-2 and spike proteins, results in less angiotensin inactivation, which increases thrombotic risk [189,194].
A distinctive feature of the SARS-CoV-2 spike protein is its ability to efficiently fuse cells, thus producing syncytia found in COVID-19 patients; this ability may enable spike to cause COVID-19 complications as well as side effects of COVID-19 vaccines [226].
Finally, in a recent study, both adenoviral and mRNA vaccines enhanced inflammation and platelet activation, though adenoviral vaccination induced a more pronounced increase in several inflammatory and platelet activation markers compared to mRNA vaccination, and post-vaccination thrombin generation was higher following adenoviral vaccination compared to mRNA vaccination [227]. Additionally, no difference in either the PF4 antibody level or the proportion of individuals with positive PF4 antibodies was observed between the vaccine groups [227].
All, or many, of these conditions should be present to trigger platelet activation and thrombosis, which explains the rarity of these cases [199,200]. The different pathophysiological hypotheses are illustrated in Figure 2. It is not clear why this immunogenic thrombosis occurs in the cerebral or splanchnic vessels, that is, whether it is correlated with the localization of the antigen or with the vascular response. The presence of specific polyanionic antigens in the mentioned vascular sites could be a possible explanation. Venous drainage of microbiota-rich areas in the nose and intestines, which can trigger local endovascular immunity with the engagement of autoantibodies directed towards PF4-microbiota complexes, could play an additional role [228].
Finally, a myocardial infarction could also be provoked by vaccine-induced allergic vasospasm, as in Kounis syndrome [229,230]. An mRNA COVID-19 vaccine-related anaphylactoid reaction with coronary thrombosis has been described [231].
Regulatory Aspects
On 15 March 2021, due to the cited reports of thrombosis and some suspicious deaths, several European health institutions suspended the use of the AstraZeneca/Oxford vaccine in their national territories [64,232,233]. On 18 March, in the UK, the MHRA stated that the evidence does not suggest that thrombosis is caused by the AstraZeneca/Oxford vaccine, while the EMA concluded that a causal link with the vaccine was possible [35]. The EMA has also decided to include information on thrombotic risk in package leaflets, warning patients and doctors to be vigilant about the potential appearance of symptoms. The EMA Pharmacovigilance Risk Assessment Committee (PRAC), after the necessary checks, readmitted the use of the AstraZeneca/Oxford vaccine on 19 March [125,232]. On 27 March, the EMA, the COVID-19 subcommittee of the WHO Global Advisory Committee on Vaccine Safety (GACVS), and the MHRA reviewed the risk of thrombosis after vaccination with AstraZeneca/Oxford, again agreeing that the benefits outweigh the risks. On 7 April 2021, incorporating information from the EMA and MHRA (which advised offering young people under 30 an alternative to AstraZeneca/Oxford if available), the GACVS published an interim statement that the causal relationship between the vaccine and the onset of thrombi and platelet disease is plausible but not confirmed, and in the UK, an age restriction on the AstraZeneca/Oxford vaccine for people under 30 years of age was introduced. On the same date, the Italian Ministry of Health recommended the preferential use of the AstraZeneca/Oxford vaccine in people over 60 years of age, considering the low risk of thromboembolic adverse reactions in the face of the high mortality from COVID-19 in an age group in which the vaccine is certainly effective in reducing the risk of serious disease, hospitalization, and death related to COVID-19 [132].
In addition, it has been stated that it is not possible to make recommendations regarding the identification of specific risk factors, and that no preventive treatments for the aforementioned thrombotic episodes can be identified. On 7 May, in the UK, a restriction of the AstraZeneca/Oxford vaccine was introduced for people under 40 years of age. On 22 July 2021, the Advisory Committee on Immunization Practices reviewed updated benefit-risk analyses after Janssen Cilag and mRNA COVID-19 vaccination and concluded that the benefits outweigh the risks for rare, serious adverse events after COVID-19 vaccination [234]. Finally, in December 2021, the U.S. Advisory Committee on Immunization Practices voted unanimously (15 to zero) to recommend the preferential use of mRNA COVID-19 vaccines over the Janssen COVID-19 vaccine for the prevention of COVID-19 for all persons aged ≥18 years [235].
Final Considerations
Vaccines against SARS-CoV-2 have been used for a short time, and knowledge about their clinical manifestations is constantly evolving [34,236]. Data on their long-term effects, interactions with other vaccines, use in immunocompromised subjects, and those with comorbidities (e.g., hematological, autoimmune or inflammatory disorders) are lacking [237]. Therefore, careful surveillance and long-term follow-up studies are needed [136]. In this regard, it is important to remind all cardiologists, but also hematologists, and in some cases, internists and vascular surgeons, to report all suspected adverse reactions associated with the use of COVID-19 vaccines, in accordance with the respective national reporting system.
Unfortunately, part of the population still hesitates to recognize the dangers associated with SARS-CoV-2, comparing them to past influenza epidemics and ignoring the fact that mortality continues to rise in the world despite strict hygiene and lockdown measures. The phenomenon of denial is important and has been influenced by reports of side effects, especially thrombotic ones. Vaccine hesitancy is a complex phenomenon driven by individuals' perceptions of the safety and efficacy of the vaccines [238].
It should be remembered that the incidence of severe thrombotic events appears low (1/100,000-1/1,000,000 vaccinated subjects) [130,161,162,204,239,240]. In a real-world evidence-based study, which retrospectively analyzed a cohort of 771,805 vaccination events across 266,094 patients in the Mayo Clinic Health System between 2017 and 2021, CVST was rare and not significantly associated with COVID-19 vaccination [241]. However, clinical trials that tested the effectiveness of vaccines included only SARS-CoV-2-negative subjects. The possibility cannot be excluded that the vaccination of an increasing number of subjects may cause an unexpected thrombotic and inflammatory reaction in subjects predisposed by a previous infection [189].
Pharmacovigilance reports, however, contain administrative and uncontrolled data that are undoubtedly useful, but they cannot and should not support hypotheses about causal relationships. Early signals of rare side effects during pharmacovigilance that can lead to severe outcomes should not be set aside solely based on statistical prevalence, but require extensive scientific studies and clinical correlation to rule out a potential causal link [196].
When making decisions about the use of drugs, it is important to consider the natural history of pathologies, based on pre-pandemic incidence rates in the general population. In the general population, regardless of any vaccination, the annual incidences of venous thrombosis and cerebral thrombosis are, respectively, 1.2/1000 and 1.2/100,000 [61]. In the Danish National Patient Registry [242], cases of venous thromboembolism (DVT, PE, hepatic vein thrombosis, mesenteric or portal vein thrombosis, renal vein or vena cava thrombosis, thrombophlebitis migrans, intracranial vein thrombosis) were identified in all adults between 2010 and 2018, yielding an incidence of venous thromboembolism of 1.76 per 1000 patients per year (0.95/1000 between 18 and 64 years). In the 5 million Danish inhabitants (corresponding to the number of subjects who had received AstraZeneca/Oxford vaccines in Europe as of 10 March), the incidence of venous thromboembolism corresponds to 169 cases per week in all adults (91 from 18 to 64 years). In contrast, only 30 cases of thromboembolic events have been reported after AstraZeneca/Oxford vaccination, which, therefore, does not appear to increase the incidence rate of venous thromboembolism compared to the natural one. However, these data cannot rule out the possibility that some venous thrombotic events after AstraZeneca/Oxford are caused by the vaccine, as they occurred after a short interval of time.
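The expected background case counts quoted from the Danish registry can be checked with simple arithmetic:

```python
# Verifying the background case counts quoted from the Danish registry:
# incidence rates applied to 5 million inhabitants, expressed per week.
population = 5_000_000
rate_all_adults = 1.76 / 1000   # VTE events per person-year, all adults
rate_18_64 = 0.95 / 1000        # VTE events per person-year, ages 18-64

weekly_all = population * rate_all_adults / 52
weekly_18_64 = population * rate_18_64 / 52
print(round(weekly_all), round(weekly_18_64))  # -> 169 91
```

Both figures match the per-week counts stated in the text.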
The risks and benefits of current vaccines must be compared with the real possibility of contracting the disease and developing long-term complications and sequelae based on the available clinical evidence and avoiding unjustified bias [243,244].
In fact, thromboembolic complications of COVID-19 are much more frequent (6 to 28% of cases) [131,232,239,245-247]. In a study by Taquet et al. [247], the incidence of CSVT in the two weeks after a COVID-19 diagnosis (42.8 per million people, 95% CI 28.5-64.2) was significantly higher than in a matched cohort of people who received an mRNA vaccine (RR = 6.33, 95% CI 1.87-21.40, p = 0.00014) or in patients with influenza (RR = 2.67, 95% CI 1.04-6.81, p = 0.031), and the incidence of peripheral thrombosis after COVID-19 diagnosis (392.3 per million people, 95% CI 342.8-448.9) was significantly higher than in a matched cohort of people who received an mRNA vaccine (RR = 4.46, 95% CI 3.12-6.37, p < 0.0001) and in patients with influenza (RR = 1.43, 95% CI 1.10-1.88, p = 0.0094). Comparing data from the US Centers for Disease Control and Prevention, the Nationwide Inpatient Sample, and the Society of Vascular and Interventional Neurology COVID-19 registry, Bikdeli et al. [239] highlighted that the incidence of CSVT was 0.9/million in vaccinated people, 2.4/million in the general population, and 207.1/million in COVID-19 patients.
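To put the per-million CSVT incidences reported by Bikdeli et al. [239] on a common scale, the fold-differences follow directly from the quoted figures:

```python
# Fold-differences between the CSVT incidences quoted above
# (cases per million people).
vaccinated, general_population, covid_patients = 0.9, 2.4, 207.1

print(f"COVID-19 vs vaccinated: ~{covid_patients / vaccinated:.0f}x")          # ~230x
print(f"COVID-19 vs general population: ~{covid_patients / general_population:.0f}x")  # ~86x
```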
Risks versus benefits varied significantly between age groups and transmission levels. Across different scenarios, the benefits of adenoviral vaccination in preventing death from COVID-19 in people 55 years and older exceeded the risks. In young adults, the risks were of at least a similar magnitude as the benefits [248]. For example, for every million doses of the Janssen Cilag vaccine administered to women aged 18 to 48 years, 297 hospital admissions, 56 admissions to intensive care, and 6 deaths related to COVID-19 are avoided, compared to 7 cases of thrombosis [246]. Under a high transmission rate, deaths prevented by the AstraZeneca/Oxford vaccine far exceed deaths from VITT (by 8 to >4500 times depending on age). The probability of dying from COVID-19-related atypical severe blood clots was 58-126 times higher (depending on age and sex) than that of dying from VITT [249]. Excess deaths due to the interruption of the AstraZeneca vaccination campaign in France and Italy far exceed those due to thrombosis, even in worst-case scenarios of the frequency and gravity of the vaccine side effects [250]. For AstraZeneca/Oxford vaccination itself, a recent Italian study showed that the benefits outweigh the risks as early as the age of 30 [251].
Authorities, the media, and the population should also be reminded that thrombotic risks are accepted in the modern lifestyle; millions of women use contraceptives that increase their thrombotic risk by 3 to 5 times, and the absolute risk of having a venous thrombosis after an air trip lasting more than 4 h is 1/4600, much higher (50-100 times) than that of having a CSVT after vaccination [61,252]. Clearly, mortality in VITT differs from that of classic DVT, reaching 40% [61].
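A back-of-envelope reading of the figures just quoted: if a flight of more than 4 h carries a venous thrombosis risk of 1/4600, and that risk is 50-100 times the post-vaccination CSVT risk, the implied CSVT risk is:

```python
# Implied post-vaccination CSVT risk, derived from the quoted flight risk
# (1/4600) being 50-100 times higher than the CSVT risk.
flight_risk = 1 / 4600
csvt_low = flight_risk / 100   # lower bound of implied CSVT risk
csvt_high = flight_risk / 50   # upper bound of implied CSVT risk

print(f"implied CSVT risk: 1/{1/csvt_low:,.0f} to 1/{1/csvt_high:,.0f}")
```

The implied range of roughly 1 in 230,000 to 1 in 460,000 is consistent with the 1/100,000-1/1,000,000 incidence of severe thrombotic events cited earlier.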
The ESC Patient Forum published information on the COVID-19 vaccine for heart patients on 12 April, reiterating the importance of vaccination for these patients, given their high risk of complications from COVID-19 [253]. Trials of COVID-19 vaccines have included patients with heart disease without demonstrating serious effects; no interactions between vaccines and cardiological drugs have been reported, and there is no evidence to suggest that vaccines are contraindicated in heart disease. Furthermore, because the thrombosis is immune-mediated, patients with a history of thrombosis and/or known thrombophilia do not have an increased risk after AstraZeneca vaccination [64]. In fact, it is estimated that about 5000-6000 subjects per 100,000 vaccinated are carriers of these coagulative abnormalities [254], which clearly contrasts with the extreme rarity of the most serious thrombotic complications observed. COVID-19 vaccines seem safe for patients with previous CSVT [255]. There is also no evidence that thrombosis at typical sites (lower limbs, pulmonary embolism) is more common after adenoviral vaccines than in the general population stratified by age. Therefore, there are no grounds to contraindicate vaccination in patients recovering from a recent thrombotic event or, in particular, from an AMI, but it is appropriate to give preference to mRNA vaccines in the younger age classes, particularly in women, in accordance with the indications of the regulatory authorities.
There are no reliable data on the risk related to the booster dose. Under the hypothesis of vaccine-induced hyperactivity of coagulation, it is reasonable to expect that the first administrations have already produced the so-called "depletion of susceptibles", a sort of selection of the subjects who, for unknown reasons, are more exposed to the action of these hypothetical prothrombotic mechanisms, so that any adverse manifestations would be even rarer following the second dose [256]. Under the hypothesis of autoantibody production, re-exposure to the vaccine could instead lead to important clinical manifestations in some subjects who had already mounted an abnormal immune response at the first dose, even if it was clinically silent [67]. As of 12 May, 15 cases of atypical thrombosis with thrombocytopenia have been reported by the English MHRA for about 9 million second doses of AstraZeneca/Oxford administered, which would seem to correspond to a weaker signal than that found for the first doses and is, in any case, definable as very rare, supporting the "depletion of susceptibles" hypothesis.
Conclusions
The rapid availability of vaccines, effective in limiting transmission and severe forms of the disease, has emerged as the only solution for controlling the SARS-CoV-2 pandemic. Careful surveillance and long-term follow-up studies on vaccines are needed. Unfortunately, part of the population still hesitates to recognize the dangers associated with SARS-CoV-2. Healthcare professionals remain the most appropriate advisers regarding vaccination decisions and must be supported to provide reliable and credible information. Table 3 suggests, for example, what to avoid in case of vaccination [61]. It should be remembered that the incidence of severe thrombotic events appears low. The risks and benefits of current vaccines must be compared with the real possibility of contracting the disease and developing long-term complications. Authorities, media, and the population should also be reminded that thrombotic risks are accepted in the modern lifestyle. All scientific societies emphasize the value of continuing vaccination programs to protect patients from severe forms of COVID-19 and to slow the circulation of the virus and its variants [257]. Vaccine hesitancy risks reversing progress in infectious disease control. Abstention is not an option, as it results in a failure to assist a large population that remains in danger. Action, with increased vigilance, is the best solution in our public health mission [61]. Table 3. What to avoid in case of COVID-19 vaccination (from [61], modified).
• Systematic premedication with low molecular weight heparin, direct oral anticoagulants, or aspirin

Author Contributions: G. and F.C. Each author has made substantial contributions to the conception or design of the work; or the acquisition, analysis, or interpretation of data; or has drafted the work or substantively revised it. All authors have approved the submitted version (and the version substantially edited by the journal staff, which involves the author's contribution to the study) and agree to be personally accountable for their own contributions and for ensuring that questions related to the accuracy or integrity of any part of the work, even ones in which the author was not personally involved, are appropriately investigated, resolved, and documented in the literature. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable. | 2022-02-16T16:26:44.037Z | 2022-02-01T00:00:00.000 | {
"year": 2022,
"sha1": "e68ca74e0713dac3757bf00ec24baad4717753c2",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0383/11/4/948/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d90ecd3ccfd1973efa2bd2db5b5ea6d2d9e316d4",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
15867997 | pes2o/s2orc | v3-fos-license | A PCR-mutagenesis strategy for rapid detection of mutations in codon 634 of the ret proto-oncogene related to MEN 2A.
BACKGROUND
Multiple endocrine neoplasia type 2A (MEN 2A) is a dominantly inherited cancer syndrome. Missense mutations in the codon encoding cysteine 634 of the ret proto-oncogene have been found in 85% of MEN 2A families. The main tumour type always present in MEN 2A is medullary thyroid carcinoma (MTC). Only 25% of all MTCs are hereditary, and generally they are identified by a careful family history. However, some familial MTCs are not easily detected by this means, and underdiagnosis of MEN 2A is suspected.
METHODS
DNA samples from MEN 2A patients were amplified by PCR. The products were incubated with the restriction enzyme BstAPI or BglI. The samples were loaded on a non-denaturing 10% polyacrylamide gel and run at 120 V for 40 min. The gels were stained with 10 µg/ml ethidium bromide, and the bands were visualized under a UV lamp.
RESULTS
We developed a PCR-mutagenic method to check the integrity of the three bases of the cysteine 634 codon.
CONCLUSION
The method can be used to detect inherited mutations in MTC patients without a clear family history. The method is relatively simple to use as a routine test in these patients to decrease the underdiagnosis of MEN 2A. In addition, the assay can be used to screen affected families with any mutation in cysteine 634.
Background
Multiple endocrine neoplasia type 2A (MEN 2A) is a dominantly inherited cancer syndrome, which involves the triad of medullary thyroid cancer (MTC), pheochromocytoma, and hyperparathyroidism. It is inherited as an autosomal dominant trait. Missense mutations in the codon encoding cysteine 634 of the ret proto-oncogene have been found in 85% of the reported families [1]. Other mutations in the same cysteine-rich domain of the protein can also cause MEN 2A. All these mutations lead to the replacement of a cysteine by an alternate amino acid.
The ret gene is expressed in neural crest-derived tissues [2]. It encodes a receptor with tyrosine kinase activity that is essential for the normal development of the kidneys and the enteric nervous system [3]. This protein is considered a dependence receptor because it provokes proliferation in the presence of its ligand, and cell death in its absence [4,5]. The mutations associated with MEN 2A cause ligand-independent constitutive activation of the tyrosine kinase receptor through the formation of disulfide-bonded homodimers [6][7][8][9]. The main tumour type always present in MEN 2A is MTC, accompanied in 50% of cases by pheochromocytoma and in 15-30% by parathyroid hyperplasia. MTC arises from the proliferation of the parafollicular or C cells of the thyroid. About 25% of all MTCs are hereditary; the remainder are sporadic. At present, hereditary MTC is identified mainly by a careful analysis of the family history. However, about 15-20% of patients with familial MTC do not have a clear hereditary history, a pheochromocytoma, or any other condition that would clearly indicate the familial background of the disease (personal communication, Oliver Gimm). This means that about 5% of all patients with MTC have hereditary MTC that cannot be identified by analysis of the family history. Higher underdiagnosis of hereditary cases would be expected in developing countries due to poor family information and the lack of routine mutation analysis in MTC patients. Since the ret proto-oncogene has a "hot-spot" site at codon 634 in exon 11, the aim of our group has been to develop a PCR-directed mutagenesis strategy to test the integrity of this codon. The assay is relatively simple and can be applied as a routine analysis for MTC patients and to screen MEN 2A-affected families.
Materials and methods
DNA samples from MEN 2A patients previously diagnosed as carriers of a hereditary mutation were used [10]. DNA was extracted from 3 ml of whole blood using the saline extraction procedure [11]. The PCR primers used were 5'ATCCACTGTGCGGCAAGCTG (forward) and 5'AAGAGGACAGCGGCTGCGATGCCCGTGCG (reverse). The PCR programme consisted of 30 cycles of 94°C for 30 s, 55°C for 2 min, and 72°C for 30 s, with an initial and final hold of 94°C for 3 min and 72°C for 3 min, respectively. PCR products were incubated with 0.5 µl of BstAPI (10 U/µl; New England Biolabs, Buenos Aires, Argentina) or BglI (10 U/µl; Promega, Buenos Aires, Argentina) in a 25 µl reaction volume. The same reaction with milliQ water in place of the enzyme was used as a negative control. Incubation with the BglI enzyme was performed for 2 h at 37°C; incubation with the BstAPI enzyme was performed for 4 h at 60°C, after which 1 µl of enzyme was added to each tube for a second 4 h incubation at 60°C.
The samples were loaded on a non-denaturing 10% polyacrylamide gel and run at 120 V for 40 min. The gels were stained with 10 µg/ml ethidium bromide, and the bands were visualized under a UV lamp.
Results
Our goal was to design a strategy in which the PCR product would contain codon 634 and one (or more) restriction sites present only in the wild-type allele. Codon 634 (TGC) and its neighbouring nucleotides do not form a palindromic recognition site for any restriction enzyme. Two restriction sites were therefore created by introducing point mutations with the PCR primers. The restriction site introduced with the forward primer is cut by BstAPI and is disrupted by mutations in any of the three bases of the codon. The second restriction site is generated with the reverse primer and is recognized by BglI; any mutation in the last two bases of the codon alters this restriction site (Fig. 1).
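The logic of the engineered BstAPI site can be illustrated with a small script. This is a sketch only: the degenerate recognition sequences (GCANNNNNTGC for BstAPI, GCCNNNNNGGC for BglI) are the enzymes' published sites, but the flanking bases in the toy amplicon below are placeholders, not the actual primer-modified sequence of the paper.

```python
def matches_site(seq, site):
    """True if `seq` contains `site`; 'N' in the site matches any base."""
    m = len(site)
    return any(
        all(s in ("N", c) for s, c in zip(site, seq[i:i + m]))
        for i in range(len(seq) - m + 1)
    )

BSTAPI = "GCANNNNNTGC"  # BstAPI degenerate recognition sequence
BGLI = "GCCNNNNNGGC"    # BglI degenerate recognition sequence (the double-check)

# Hypothetical fragment of the mutagenized amplicon: the forward primer places
# codon 634 as the terminal TGC of the BstAPI site, so any change to the codon
# destroys the site. The five bases in the N positions are arbitrary here.
def amplicon(codon634):
    return "GCA" + "CCGCG" + codon634

assert matches_site(amplicon("TGC"), BSTAPI)      # wild-type codon: site cut
assert not matches_site(amplicon("CGC"), BSTAPI)  # first-base mutant: site lost
assert not matches_site(amplicon("TAC"), BSTAPI)  # second-base mutant: site lost
assert not matches_site(amplicon("TGG"), BSTAPI)  # third-base mutant: site lost
```

Because the wild-type codon itself forms the 3' end of the recognition sequence, an uncut product flags a mutation anywhere in the codon, which is exactly the behaviour the assay exploits.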
Four different DNA samples were amplified: a wild-type sample and three DNAs, each with one of the three bases of codon 634 mutated (TGC to CGC, TAC, or TGG). Preliminary assays showed only partial digestion of the wild-type allele after incubation with the BstAPI enzyme for 4 h at 60°C. A second incubation with fresh enzyme yielded complete digestion of the wild-type allele (Fig. 2). In general, DNA samples from MEN 2A patients are heterozygous, carrying a wild-type allele and a mutated allele. Fig. 2 shows that, in these samples, BstAPI can cut only a small proportion of the PCR product. Most of the product is not cut because the enzyme fails to recognize not only the mutated strands but also the heteroduplexes formed between the mutated and wild-type strands that hybridize during the PCR amplification.
The same PCR products incubated with the BglI enzyme were completely digested only in the case of the wild-type DNA or when the mutation was present in the first base (Fig. 3).
The results show that the complete codon 634 can be checked by PCR amplification and incubation with BstAPI. The incubation with the BglI enzyme permits a double-check for mutations in the last two bases. Products that are not recognized by BstAPI but are cut by BglI are likely mutated in the first base of codon 634.
Discussion
Several mutations in codon 634 related to MEN 2A create recognition sites for restriction endonucleases. Therefore, the analysis of restriction fragments has been extensively used to screen MEN 2A-affected families. However, in those studies, the mutations present in the families were first identified by sequencing exon 11 in an affected member of the family. In a second step, if the mutation introduced a restriction site, a PCR-based restriction analysis was designed to screen the rest of the family. The method described in this report checks (or double-checks) the integrity of the three bases of codon 634. Therefore, it can be used to detect mutations in MTC patients without a clear family history and to screen affected families with any mutation in the 634 hot-spot. If a mutation is detected, the necessary next step is to sequence the region and confirm which base is altered. This avoids misdiagnosing the polymorphism TGC>TGT, which would be detected by our strategy as a mutation but still codes for a cysteine. (Figure legends: Both alleles are completely cut by the enzyme when the codon is wild type, which means that the codon is correct. In the rest of the samples, the product is not completely cut, which reveals that one allele is mutated at one of the three bases of codon 634. In lanes 4 and 5, the product is not completely cut, which reveals that one allele has a mutation at the second or third base of codon 634.) | 2017-05-29T02:16:45.902Z | 2002-05-21T00:00:00.000 | {
"year": 2002,
"sha1": "fd564f797b0b39f629772ad36bf5ef7a59b578af",
"oa_license": "CCBY",
"oa_url": "https://bmcmedgenet.biomedcentral.com/track/pdf/10.1186/1471-2350-3-4",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fd564f797b0b39f629772ad36bf5ef7a59b578af",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
224915206 | pes2o/s2orc | v3-fos-license | The Gap between Urban and Rural Development Levels Narrowed
The difference between urban and rural development levels (URDL) and the deficiency of rural development have become weak points for China in achieving balanced and high-quality development. In order to reveal the changing trend of urban-rural differences in China over the years and provide a reference for policy-making on the balanced development of urban and rural areas and high-quality economic development, this paper uses the United Nations Development Program-adjusted Human Development Index (HDI) calculation method to calculate urban and rural human development levels, based on 1995–2017 national time series data and provincial panel data. On this basis, this paper uses the Logarithmic Mean Divisia Index to decompose the dynamic changes in the URDL difference and analyzes the spatial equilibrium of its changing trend, addressing the shortcomings of the existing literature, which focuses only on individual domains such as income and education and lacks continuity and comparability. The research finds that 1. based on the time series analysis of the URDL difference, this paper first proposes an "inverted U" curve for the URDL difference in China, which shows that the URDL difference in China has experienced a process from expansion (1995–2001) to high fluctuation (2001–2011) to continuous convergence (2011–2017). 2. From the factor decomposition of the URDL difference, the expansion period was caused by increases in the urban-rural gaps in the Health Index, the Education Index, and the Income Index. With the decline in the gaps in education levels and life expectancy, the growth of China's urban-rural gap was suppressed, and it entered a high-platform period with relatively small fluctuations. After 2011, benefiting from a large decline in the income gap, China's urban-rural difference entered a period of convergence. 3.
From the perspective of the spatial evolution of the URDL gap, the overall coefficient of variation of the country has shown a downward trend. The degree of spatial equilibrium is gradually increasing, and the changes in the east, west, and northeast follow the overall national trend. The decline in the northeast is the largest, the west is second, and the east is the least. The middle shows a slight upward trend, but its value in each year is always small compared with the other regions. Generally speaking, the spatial balance of the URDL gap is relatively good in the middle and northeast, followed by the east, and there is still much room for improvement in the west.
Introduction and Literature Review
As a developing country, China's urban and rural development has typical dual-structure characteristics. The countryside has made great contributions in the process of China's economic development, but the differences between urban and rural development levels and the insufficient level of rural development have become the shortcomings of China's balanced and high-quality development. From the report of the 16th National Congress in 2002 on "coordinating urban and rural economic and social development", to the report of the 17th National Congress in 2007 on "urban and rural integration", to the 2013 Third Plenary Session of the 18th Central Committee of the Communist Party of China on "new urban-rural relations", and the report of the 19th National Congress in 2017 on "urban-rural integration development", the emphasis on the imbalance between urban and rural development has been increasing. Revealing the gap between urban and rural development and analyzing the causes of development differences not only allows us to clearly understand the changing trends of urban and rural development differences but also provides a reference for policy-making on balanced urban and rural development and high-quality economic development.
The urban-rural difference is an important area in the study of the economic and social development of developing countries. Generally speaking, research on urban-rural differences follows two strands: one analyzes urban-rural differences comprehensively and holistically, and the other examines urban-rural differences in a specific field such as education or income.
In terms of comprehensive and holistic analysis, the published literature mainly conducts research with qualitative and quantitative methods. The qualitative research method mainly uses qualitative analysis with multi-element description, combined with the empirical judgment of researchers. For example, Wang and He studied the gap between urban and rural development from 1978 to 2003 [1], Guo studied the gap between urban and rural areas before 2003 [2], and Zhong et al. studied the urban-rural gap before 2007 [3]. The findings are relatively consistent: almost all of the qualitative research papers [1][2][3][4][5][6][7][8][9][10][11] that study the gap between urban and rural areas conclude that, with economic development, the gap between urban and rural areas in China has been expanding.
To sum up, although China has accumulated some research results on the urban-rural gap, the overall trend of urban-rural development differences is not clear. Concerning research on urban-rural differences in specific fields such as income and education, the conclusions are highly fragmented owing to the differences between fields; such work can establish the trend of change in a specific field but cannot be used to infer the overall trend of urban-rural differences. As for comprehensive and holistic research, the number of papers is very limited, and the continuity of the research is relatively lacking. For example, Song and Ma [12] used the Human Development Index to conduct research, but there has been no relevant literature for follow-up and comparative studies since 2002, which cannot reveal the longer-term, and especially the current, trend. In comprehensive and holistic research, studies that judge the differences between urban and rural development through qualitative description struggle to reach consensus owing to differences in researchers' knowledge backgrounds and research experience, and the continuity and comparability of the research are also relatively weak. In order to better understand the changing trend of China's urban-rural relations, and to analyze and determine future and long-term changes, this paper uses the United Nations Development Program (UNDP)-adjusted Human Development Index calculation method, widely recognized by the international community, to analyze national time series data from 1995 to 2017 and panel data for each province, reveal the overall trend of China's urban-rural development differences, analyze the driving factors of those differences, and provide basic reference information for a comprehensive understanding and analysis of China's urban-rural development trends.
2.1. The Difference between the Urban and Rural HDI

Based on the UNDP's 2010 adjustment of the subindexes and weighting method of the Human Development Index, the available data are used to calculate the difference between the urban and rural Human Development Index in China according to the latest measurement method (formula (1)). Because expected years of schooling are not reported separately for urban and rural areas, the Education Index in the Human Development Index here includes only average years of education. The calculation method is as follows:

HDI_i = (LEI_i × EI_i × II_i)^(1/3),
LEI_i = (EL_i − EL_min) / (EL_max − EL_min),
EI_i = (AEY_i − AEY_min) / (AEY_max − AEY_min),
II_i = (ln PGNI_i − ln PGNI_min) / (ln PGNI_max − ln PGNI_min).        (1)

In formula (1), HDI_i is the Human Development Index in year i, LEI_i is the Health Index in year i, EI_i is the Education Index in year i, II_i is the Income Index in year i, EL_i is the life expectancy in year i, EL_min is the minimum life expectancy, EL_max is the maximum life expectancy, PGNI_i is the per capita GNI (2011 PPP $) in year i, PGNI_min is the minimum per capita income, PGNI_max is the maximum per capita income, AEY_i is the average years of education in year i, AEY_min is the minimum value of average years of education, AEY_max is the maximum value of average years of education, and PPR_i, JPR_i, SPR_i, and ACPR_i are the proportions of the population whose highest educational attainment is primary school, junior high school, senior high school, and university or above in year i, which are used to compute AEY_i. The values of EL_min, EL_max, PGNI_min, PGNI_max, AEY_min, and AEY_max are shown in Table 1. The remaining data sources and processing methods in the HDI calculation formula are as follows. (1) Sources and processing methods of per capita income data: the HDI methodology uses GNI measured in 2011 purchasing power parity (PPP) US dollars, so China's GDP statistics cannot be substituted directly into the calculation.
This paper uses the ratio of per capita GDP data measured in current RMB prices, based on the China Statistical Yearbook, to GNI data measured in 2011 PPP prices to obtain a conversion factor. Based on this conversion factor, the national and provincial GNI data measured in 2011 PPP prices are calculated.
(2) Sources and processing methods of life expectancy data: national urban and rural life expectancy statistics are available only for the year 2000, so this paper draws urban and rural life expectancy data for 1995-2000, 2005, and 2009 from published papers [12,46]. Based on the above methods, the difference between urban and rural development levels is calculated as

M_i = HDI_i^urban − HDI_i^rural.        (2)
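The index construction in formula (1) and the urban-rural difference built on it can be sketched in a few lines. This is a minimal sketch: the goalposts below are illustrative UNDP-style values, not the paper's actual Table 1 goalposts, and the Education Index is computed from average years of education only, as in the text.

```python
import math

# Illustrative goalposts (assumed UNDP-style values; the paper's actual
# goalposts are listed in its Table 1)
EL_MIN, EL_MAX = 20.0, 85.0          # life expectancy (years)
AEY_MIN, AEY_MAX = 0.0, 15.0         # average years of education
PGNI_MIN, PGNI_MAX = 100.0, 75000.0  # per capita GNI (2011 PPP $)

def hdi(life_expectancy, avg_years_edu, pgni):
    """Geometric mean of the Health, Education and Income indices."""
    lei = (life_expectancy - EL_MIN) / (EL_MAX - EL_MIN)
    ei = (avg_years_edu - AEY_MIN) / (AEY_MAX - AEY_MIN)
    ii = ((math.log(pgni) - math.log(PGNI_MIN))
          / (math.log(PGNI_MAX) - math.log(PGNI_MIN)))
    return (lei * ei * ii) ** (1 / 3)

def urban_rural_gap(urban, rural):
    """The urban-rural difference M_i = HDI_urban - HDI_rural."""
    return hdi(*urban) - hdi(*rural)
```

For example, `urban_rural_gap((80, 11, 20000), (72, 7, 6000))` is positive, reflecting the higher urban values in all three components.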
Logarithmic Mean Divisia Index

The Logarithmic Mean Divisia Index (hereafter LMDI) is a commonly used method for decomposing the change in a variable into the contributions of different component variables. This paper uses LMDI to decompose the changes in the urban-rural Human Development Index differences for the country and the provinces, and explores the main drivers of these changes in terms of the Health Index effect, Income Index effect, and Education Index effect.

Taking logarithms of formula (1) gives

ln HDI_i = (1/3)(ln LEI_i + ln EI_i + ln II_i).        (3)

Let ΔM be the change in the difference between the urban and rural Human Development Index from the national T−j period to the T period; then

ΔM = ΔLEI + ΔEI + ΔII.        (4)
Complexity
In formula (4), ΔLEI, ΔEI, and ΔII represent the changes in the urban-rural differences of the country's Health Index, Education Index, and Income Index, respectively, during the period T−j to T.
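The additive decomposition in formula (4) can be sketched as follows. This is a minimal LMDI-I implementation under stated assumptions: it takes already-computed component indices for two periods, and the effect on the urban-rural gap is obtained as the difference between the urban and rural component effects.

```python
import math

def logmean(a, b):
    """Logarithmic mean L(a, b) = (a - b) / (ln a - ln b), with L(a, a) = a."""
    return a if math.isclose(a, b) else (a - b) / (math.log(a) - math.log(b))

def hdi(lei, ei, ii):
    """Geometric mean of the three component indices, as in formula (1)."""
    return (lei * ei * ii) ** (1 / 3)

def lmdi_effects(comp0, comp1):
    """Split the change in HDI between two periods into Health, Education
    and Income index effects (additive LMDI-I); the three effects sum
    exactly to HDI(T) - HDI(T-j)."""
    h0, h1 = hdi(*comp0), hdi(*comp1)
    w = logmean(h1, h0)
    return [w * (1 / 3) * math.log(c1 / c0) for c0, c1 in zip(comp0, comp1)]

def gap_effects(urban0, urban1, rural0, rural1):
    """Effects on the urban-rural difference M = HDI_urban - HDI_rural,
    i.e. the decomposition of ΔM into ΔLEI, ΔEI, ΔII as in formula (4)."""
    return [u - r for u, r in zip(lmdi_effects(urban0, urban1),
                                  lmdi_effects(rural0, rural1))]
```

A useful property of this form is exactness: the component effects add up to the total change ΔM with no residual, which is why LMDI is preferred over simpler decompositions.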
Coefficient of Variation

China has a vast territory; there are differences in resource endowments and natural conditions across provinces, and also significant differences in economic and social development, resulting in spatial imbalances in urban and rural human development levels across the country and its regions. Calculation methods that reflect spatially balanced development mainly include the Gini coefficient, the Theil Index, and the coefficient of variation. Based on data availability and research needs, this paper uses the coefficient of variation to measure the magnitude and trend of the gap between urban and rural human development across the country and its regions, and to describe the spatial equilibrium of the gap between urban and rural development levels. The regional division follows the 10:6:11:3 classification standard commonly used in national statistics (Table 2), which divides the provinces into four regions: east, middle, west, and northeast (30 provinces in total, as Tibet is excluded owing to the difficulty of data acquisition). The coefficient of variation of the urban-rural human development gap is calculated as

CV_i = sqrt( (1/m) Σ_j (M_ij − M_i)² ) / M_i.        (5)

In formula (5), CV_i represents the coefficient of variation of the gap between urban and rural human development levels in the national, eastern, central, western, or northeastern region in year i; M_i represents the national urban-rural human development gap in year i; M_ij represents the urban-rural human development gap of province j in year i; and m represents the number of provinces included in each region.
The Trend of the Human Development Level Difference between Urban and Rural Areas

From 1995 to 2017, the difference in human development levels between urban and rural areas in China showed an "inverted U-shaped" structure, which can be roughly divided into three stages (Table 3). The analysis below is based on the factors that affect the future change trends in urban and rural human development levels. First, in terms of income level differences, the ratio of per capita disposable income of urban to rural residents in China has shown a linear decline since reaching its peak in 2003 (the ratio of per capita disposable income of urban and rural residents calculated according to the China Statistical Yearbook). With China's balanced development and the implementation of poverty alleviation policies, this downward trend is expected to be consolidated. Secondly, in terms of differences in health levels, the Health Index of Chinese cities and towns reached 0.910 in 2017, slightly lower than that of the United States (0.916) and at a high level. With the popularization of rural basic medical facilities and the increase in farmers' enrollment in the new rural cooperative medical insurance, life expectancy in rural areas will also increase, and the difference in life expectancy between urban and rural areas is expected to narrow. Third, in terms of educational level differences, the advancement of policies and measures such as urban-rural integration and urban-rural education balance provides a guarantee for narrowing the gap between urban and rural education levels. Based on the trend analysis of the above three influencing factors, it can be cautiously concluded that the difference in urban and rural human development levels in China will continue to narrow.
The inverted U-shaped curve of the urban-rural gap also reflects the characteristics of China's economic and social development in the same period. In the late 1990s and at the beginning of the 21st century, China experienced rapid industrialization and urbanization, which meant that labor, capital, and other production factors gathered in central cities, and education and medical resources were also tilted toward urban areas. In this way, the difference between urban and rural development levels kept widening. In 2010, China became the second largest economy in the world. In the same year, China's industrialization entered its late stage (Huang, Qunhui, Chaoxian Guo, Yanhong Liu, and Wenlong Hu, 2017, Sustainable Industrialization and Innovation Driven. Beijing: Social Science Academic Press. The level of industrialization is comprehensively calculated from five indicators, namely per capita GDP, urbanization rate, industrial structure, employment proportion of the primary industry, and proportion of added value of the manufacturing industry). Environmental pollution and resource constraints gradually emerged. In 2012, when the new government took office, China's economic development concept and model began to adjust. Changing the growth model and promoting the construction of ecological civilization were gradually written into development plans and implemented from the central to the local level, which means that economic development shifted from focusing on the speed of development to the quality of development, and the industrial layout became more balanced.
Factors Affecting the Decomposition of the Differences in Urban and Rural Human Development

According to formula (4), changes in the gap between rural and urban development in China can be decomposed into changes based on three index effects: the Health Index, Income Index, and Education Index. If an effect value is positive, the index is promoting the expansion of the urban-rural gap; if it is negative, the index has contributed to reducing the urban-rural gap. Comparing the constituent index effects across time periods, the expansion of urban-rural disparities in 1995-2001 was the result of the joint effect of the Education Index, Health Index, and Income Index. The urban-rural gaps in these three fields were all expanding and drove the overall urban-rural gap, with the Health Index having the largest driving effect (a decomposition effect value of 0.0047), followed by the Income Index (0.0030) and the Education Index (0.0028). During the high-fluctuation period of the urban-rural gap from 2001 to 2011, the rapid growth of the urban-rural gap was brought under control, mainly owing to declines in the urban-rural education gap and life expectancy gap. Although the income gap kept increasing, this increase was smaller than the declines in the health gap and the Education Index gap. During the convergence period between 2011 and 2017, the urban-rural gap showed a relatively stable decline, mainly owing to the sharp decline in the urban-rural income gap; the education gap expanded during this period, but the expansion was smaller than the reduction in the urban-rural income gap and health gap.
In the future, in order to maintain a steady, continuous decline of the gap between urban and rural areas, the declining trend of the income and Health Index differences between urban and rural areas needs to be sustained, the widening education gap needs to be reversed, and the Education Index needs to become a positive factor in narrowing the urban-rural Human Development Index gap (Table 4).
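Formula (4) is not reproduced in this excerpt, but one exact decomposition consistent with a geometric-mean HDI works in logs: since ln HDI = (ln H + ln E + ln I)/3, any change in the log urban-rural ratio splits exactly into three per-index effects. A sketch under that assumption (the function name and the additive-in-logs form are ours, not necessarily the paper's):

```python
import math

def log_gap_effects(urban0, rural0, urban1, rural1):
    """Decompose the change in ln(HDI_urban / HDI_rural) into per-index effects.

    Each argument is a dict with keys 'health', 'education', 'income' holding the
    dimension-index values at the start (0) and end (1) of the period. Because the
    HDI is the geometric mean, each index contributes one third of its log-ratio change.
    """
    effects = {}
    for k in ('health', 'education', 'income'):
        gap0 = math.log(urban0[k] / rural0[k])
        gap1 = math.log(urban1[k] / rural1[k])
        effects[k] = (gap1 - gap0) / 3.0
    return effects
```

A positive effect means that index widened the gap over the period, a negative effect that it narrowed the gap, and the three effects sum exactly to the change in the overall log gap.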
Analysis of the Spatial Equilibrium of the Difference between Urban and Rural Human Development.
According to formula (5), the smaller the coefficient of variation, the better the spatial equilibrium of the urban-rural Human Development Index gap (Figure 2). Conversely, the larger the coefficient of variation, the lower the equilibrium of the urban-rural Human Development Index gap in the region. The calculation results show that the country's overall coefficient of variation is declining from its 1995 value of 0.24, a drop of 0.06. The coefficient of variation in the middle increased slightly from 1995 to 2017, from 0.09 to 0.12. (Note: the numbers in brackets in Table 4 are the contribution rates of the corresponding constituent indexes.)
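The coefficient of variation used here is the standard one: the standard deviation of the provincial urban-rural gaps divided by their mean. A minimal sketch (the gap values are invented; the paper computes this from provincial data):

```python
import math

def coefficient_of_variation(values):
    """CV = standard deviation / mean; a smaller CV means more spatially balanced gaps."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n  # population variance assumed
    return math.sqrt(var) / mean

# Illustrative provincial urban-rural HDI gaps (invented numbers)
gaps = [0.10, 0.12, 0.09, 0.15, 0.11]
cv = coefficient_of_variation(gaps)
```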
From the change characteristics of the four regional coefficients of variation, the value of the coefficient of variation in the west is significantly higher than that of the country as a whole and of the three regions in the east, middle, and northeast. Although the coefficient of variation of the western region has decreased significantly compared with 1995, it is still markedly larger than that of the country and the other three regions. The northeast has been the region with the largest decrease in the coefficient of variation over 1995-2017, with a decrease of 0.13, which is higher than the 0.1 in the west and the 0.06 in the east, and also higher than the country's overall decline of 0.06. Although the coefficient of variation of the middle region showed an upward trend from 1995 to 2017, its value has always been at a low level, the increase was generally small, and its equilibrium was relatively good among the four regions. Therefore, judging from the variation characteristics of the four regional coefficients of variation, the equilibrium of the urban-rural Human Development Index gap is relatively good in the middle and northeastern provinces, followed by the east, and there is still much room for improvement in the balance of the west.
Conclusion and Policy Implications
(1) Based on the time-series analysis of the urban-rural HDI, this paper first proposes an "inverted U" curve for the difference in human development level between urban and rural China, which shows that the gap has undergone a process from expansion to high fluctuation to continuous convergence. This "inverted U" curve can be divided into three periods: the expansion period (1995-2001), the high-fluctuation period (2001-2011), and the convergence period (2011-2017).
(2) Judging from the decomposition effect of the gap between urban and rural human development levels across the country, the expansion period was driven jointly by the Health Index, Education Index, and Income Index; the gap between urban and rural areas narrowed during the high-fluctuation period; and the convergence period is reflected in an overall decline in the urban-rural gap, mainly due to the decline in the Income Index and Health Index gaps. The Education Index became a factor widening the gap, but because of the large declines in the Income and Health Index gaps, the period as a whole shows a narrowing trend. (3) From the perspective of the spatial evolution of the gap between urban and rural human development, the country's overall coefficient of variation shows a downward trend, and the degree of spatial equilibrium is gradually increasing. The overall trends in the east, west, and northeast are the same, all declining; the decline is greatest in the northeast, followed by the west, and smallest in the east. The middle shows a slight upward trend, but its values in each year have remained smaller than those of the other regions. Generally speaking, the equilibrium of the gap between urban and rural human development levels is relatively good in the middle and northeast, followed by the east, and there is still much room for improvement in the balance of the west.
Based on the above conclusions, this paper proposes the following suggestions on how to further narrow the gap between urban and rural areas. First, the "inverted U" curve of the gap between urban and rural human development levels proposed in this paper shows that China's urban-rural gap and coordinated urban-rural development have entered a new stage. Further research is needed to identify the main problems, difficulties, action ideas, and paths for reducing the urban-rural gap in this new stage, and to provide a theoretical basis for solving the imbalance between urban and rural development and promoting urban-rural integration. Second, the research in this paper shows that since 2011 the gap between the urban and rural Human Development Index in China has shown a general trend of convergence, and that the narrowing of the gaps in income and health development is the main reason. The imbalance in the development of urban and rural education needs attention: the adequacy, intensity, and effectiveness of the implementation of urban-rural balanced education policies should be evaluated, key issues identified, and implementation solutions explored. Third, from the perspective of the regional equilibrium of the gap between urban and rural human development, the balance of the west is still significantly lower than that of other regions. It is necessary to continue to vigorously promote and implement policies and measures that foster economic and social development in the western region.
Through efficient implementation of policies and the provision of accurate assistance to the western region, we can raise the overall development level of the western region, reduce its gap between urban and rural development, achieve balanced development within the western region, and also promote the overall balanced urban and rural development of the whole country.
Limited by the availability of data, the accuracy of some indicators in this study could be improved. Because long time series of urban and rural life expectancy statistics are lacking in China, the Health Index could only be calculated by interpolation and therefore shows only a general trend of change; hopefully, annual statistics will become available in the future. As a next step, the team wishes to check the data availability of other countries and carry out international comparisons, exploring how the Human Development Index changes across both developed and developing countries, so that the study can better guide and serve the balanced economic and social development of developing countries.
Data Availability
The "Human Development Index related indicators for maximum and minimum criteria," "Education Index, Health Index, Income Index, and Human Development Index by urban and rural areas in China," "decomposition effects of the differences in urban and rural Human Development Index," and "variation coefficients of human development gaps between urban and rural areas in country and regions" data are included within the article. The "per capita disposable income of urban and rural residents in China" data are available from the "China Statistical Yearbook (2018)" (http://www.stats.gov.cn/tjsj/ndsj/2018/indexch.htm).
Conflicts of Interest
There are no conflicts of interest to declare.
Prognostic significance of immunohistochemical markers and histological classification in malignant canine mammary tumours
Abstract Canine mammary carcinoma represents a model for the study of human breast cancer, although the prognostic value of various clinical, histological and immunohistochemical parameters has shown contradictory results. A prospective study, through a 4‐year follow‐up, was performed in 77 patients with mammary carcinoma to analyse the association between histological diagnosis, grade of malignancy, peritumoral and vascular invasion. We have also performed immunohistochemistry for the expression of oestrogen receptor (ER), progesterone receptor (PR), human epidermal growth factor receptor 2 (HER2) and cyclooxygenase‐2 (COX‐2) that define human biomarkers of disease progression and treatment response. An association between histological diagnosis and clinical stage was observed with a high proportion of complex carcinoma classified as stage I. There was a higher proportion of ER+/PR+/HER2− tumours in stage I. In contrast, triple‐negative tumours (ER−/PR−/HER2−) were found mainly in advanced clinical stages and were associated with vascular and peritumoral invasion. The tumours included in group VII (carcinosarcoma/adenosquamous carcinoma/other special types of carcinoma) had a higher expression of COX‐2. The univariate analysis showed that those patients with complex carcinoma had the lowest incidence of metastases and the highest probability of survival. In contrast, a high proportion of patients with anaplastic/inflammatory carcinoma developed metastases and showed the lowest probability of survival. In addition, the estimated survival time was shorter for those patients with triple‐negative tumours and those with high COX‐2 expression. However, in the multivariate analysis, only the peritumoral invasion maintained its prognostic significance. In conclusion, in our study anaplastic/inflammatory carcinomas had the worst prognosis with a high proportion of triple‐negative tumours in this category.
| INTRODUCTION
The prognostic evaluation of mammary cancer in veterinary medicine is based on clinical stage (tumour size, lymph node status and radiographic evidence of distant metastases), vascular invasion and clinical examination of the tumour in accordance with World Health Organization (WHO) guidelines.1-3 In 2011, Goldschmidt et al published an updated and more detailed histological classification of subtypes of canine mammary carcinomas based on the WHO criteria previously published in 1974.2 The prognostic significance of this classification has been analysed in several subsequent studies and shown to be related to lymphatic invasion, distant metastases and overall survival.2,4-8 Although the only effective treatment is surgical removal of the affected glands and local lymph nodes, adjuvant therapies, such as chemotherapy or radiotherapy, are often administered in canine patients. However, there is very limited information on their efficacy.4 It is known that early detection is crucial for the evolution of patients with mammary tumours and that the determination of biomarkers is key to evaluating disease progression and response to treatment.4 Canine mammary carcinoma has been shown to be a valid model for the study of breast cancer in women.9-11 For this reason, the molecular classification used in human medicine has been used to establish an immunohistochemical classification of canine mammary carcinoma.9 This classification includes the expression of oestrogen receptor alpha (ERα) and progesterone receptor (PR) and the overexpression of human epidermal growth factor receptor 2 (HER2) in an attempt to redefine the classification of mammary neoplasms, predict their prognosis and provide therapeutic guidelines for routine clinical practice.12
Several studies have examined different diagnostic antibodies routinely used in human breast cancer to characterise molecular-based groups of canine mammary tumours, but obtained contradictory results because of the variability of the criteria used to classify breast cancer.8 … canine mammary carcinoma.15 Despite this, immunohistochemical receptors are not routinely analysed in canine mammary tumour disease because of their high cost.16 The immunohistochemical characterisation of cyclooxygenase-2 (COX-2), an enzyme involved in the production of inflammatory mediators, has been widely studied as a prognostic factor in canine mammary carcinoma, being associated with disease progression, poor prognosis and short survival in dogs with mammary carcinomas.3,17,18 Prospective studies of female dogs with mammary carcinomas are not very numerous in the veterinary literature, nor are prognostic studies with multivariate analyses.3,10,15,19-21 Therefore, the specific objectives of this study were to investigate the relationship of histological diagnosis and immunohistochemical classification (ER, PR, HER2 and COX-2) with clinical stage (tumour/node/metastasis, TNM), histological grade of malignancy, vascular invasion and peritumoral invasion, and to describe the clinical evolution of the patients (development of metastasis and cancer-specific death) based on histological diagnosis and immunohistochemical classification.
| Study sample
A prospective analysis was performed in 77 patients with malignant mammary tumours. All patients were followed up from their first visit to the Surgery Service of the Veterinary Teaching Hospital at the University of Extremadura, Spain, for assessment and treatment.
The patients included in this study were selected among 385 patients with mammary tumours that were diagnosed in the study period. The selection criteria were a diagnosis of carcinoma or carcinosarcoma from the removed tumour and the ability to carry out a long-term follow-up (January 2008-December 2012), every 6 months, or until death of the patient, excluding patients whose owners opted for adjuvant chemotherapy and patients whose owners declined regular follow-up every 6 months.
| Histological study
The biopsies were sent to the Pathology Service, where they were evaluated macroscopically. The tissue was processed and embedded in paraffin blocks, and 5 μm sections were stained using the appropriate histochemical techniques.
| Immunohistochemical study
The immunohistochemistry technique was performed using the EnVision FLEX Mini Kit, a high-pH high-sensitivity visualisation system (Dako Autostainer/Autostainer Plus, Dako). The PT Link module (Dako) was used to pretreat the samples at a maximum temperature of 95°C with the retrieval solution corresponding to the antibody used.
The sections were incubated with the primary antibodies:
| Statistical analysis
The epidemiological variables studied were breed, including purebred and mixed breeds, age and size including large (>50 cm) and medium to small (<50 cm). 22 With regard to reproductive variables, data were collected on spaying prior to diagnosis and age at which it was performed, number of pregnancies, number of pseudopregnancies and hormone therapy.
Five categories were considered to assess the variable clinical stage according to the WHO's modified TNM staging system. 23 The tumours were classified into seven categories according to their aggressiveness using the histological classification of Goldschmidt et al 2 to obtain a significant number in each category: I-complex carcinoma, II-simple carcinoma, III-anaplastic carcinoma/inflammatory carcinoma, IV-mixed carcinoma, V-invasive micropapillary carcinoma/ comedocarcinoma/solid carcinoma, VI-ductal carcinoma/intraductal papillary carcinoma and VII-carcinosarcoma/adenosquamous carcinoma/other special types of carcinoma. Grade of malignancy and the variables peritumoral invasion (defined by the presence of neoplastic cells infiltrating normal tissue adjacent to tumour) and vascular invasion were also considered. 24 The variables development of distant metastases and cancer-specific death were defined as the period (in months) between surgical tumour removal and, respectively, the occurrence of distant metastasis or death because of the tumour.
ERα and PR immunoexpression was established according to the ASCO/CAP guidelines adapted to the canine species,25 with a score ≥2 considered positive. Expression of the HER2 oncoprotein was established according to the ASCO/CAP recommendations for the evaluation of HER2 in humans,26 in which only 3+ tumours are considered positive for HER2 overexpression. Positivity for COX-2 was indicated by cytoplasmic staining; the distribution score and intensity were multiplied to obtain a total score ranging from 0 to 12, with scores from 0 to 5 considered low and scores from 6 to 12 considered high.17 The Statistical Package for the Social Sciences version 22.0 (SPSS, Chicago, Illinois) was used for the statistical analysis: descriptive analysis of variables, normality tests (Shapiro-Wilk and Kolmogorov-Smirnov), Pearson's chi-square test (to compare two discrete variables) and univariate Cox regression. For the survival analysis, a univariate analysis was performed on censored data using the Kaplan-Meier estimator, and differences were tested with the log-rank test; 95% confidence intervals (95% CI) are reported. A Cox regression model was used to evaluate the prognostic value of the study variables.
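The survival estimates in this study were produced in SPSS, but the Kaplan-Meier estimator itself is simple enough to sketch directly. A minimal re-implementation (ours, not the authors' code): at each observed event time the survival probability is multiplied by (at-risk minus deaths) / at-risk, with censored patients leaving the risk set without triggering a step.

```python
def kaplan_meier(durations, events):
    """Kaplan-Meier survival estimate.

    durations: follow-up time per patient (e.g. months);
    events: 1 = death observed, 0 = censored.
    Returns a list of (time, survival probability) pairs at each event time.
    """
    data = sorted(zip(durations, events))
    n_at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        at_t = [e for d, e in data if d == t]   # everyone whose follow-up ends at t
        deaths = sum(at_t)
        if deaths > 0:
            surv *= (n_at_risk - deaths) / n_at_risk
            curve.append((t, surv))
        n_at_risk -= len(at_t)                  # deaths and censorings leave the risk set
        i += len(at_t)
    return curve
```

For example, four patients with follow-ups of 1, 2, 2 and 3 months, where the patient censored at 2 months contributes to the risk set at 2 but causes no step, give survival probabilities 0.75, 0.5 and 0.0.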
| Cell line validation statement
Since no cell lines were used in the current study, validation testing has not been conducted.
| Histological and immunohistochemical study
The histological diagnosis of the tumours indicated that group I was the largest (25.9%, n = 20), followed by groups IV and VII, each of which included 15.5% (n = 12) of the patients. Table S1 shows the distribution of patients in groups by histological diagnosis and epidemiological variables. … were classified in the ER+/PR+/HER2− group. It should be noted that three patients belonging to the ER−/PR+/HER2+, ER−/PR−/HER2+ and ER+/PR−/HER2− groups, which had only one animal each, were excluded from the statistical study (Figure 1). Of all the patients included in the study, 68.8% (n = 53) showed a lack of immunoreactivity to ER. Table S2 shows the distribution of patients according to immunohistochemical expression and its relationship with prognostic factors.
The analysis of COX-2 immunoexpression indicated that 33.8% of the tumours presented high expression (Figure S4). As regards the histological grade of malignancy, groups III (anaplastic carcinoma/inflammatory carcinoma) and V (invasive micropapillary carcinoma/comedocarcinoma/solid carcinoma) had a significantly larger proportion of individuals with high-grade tumours than the other tumour groups (χ2; P < .001).
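The scoring cut-offs stated in the Methods can be captured in a few helper functions; a sketch (the function names are ours, the thresholds are the ones stated above: ER/PR positive at score ≥2, HER2 positive only at 3+, COX-2 high at a distribution × intensity total of 6-12):

```python
def immunophenotype(er, pr, her2):
    """Return the tumour's status label, e.g. 'ER+/PR+/HER2-'.

    er, pr: True when the adapted ASCO/CAP score is >= 2;
    her2: True only for 3+ membrane staining.
    """
    sign = lambda positive: '+' if positive else '-'
    return f"ER{sign(er)}/PR{sign(pr)}/HER2{sign(her2)}"

def is_triple_negative(er, pr, her2):
    """Triple-negative: no ER, PR or HER2 positivity."""
    return not (er or pr or her2)

def cox2_class(distribution, intensity):
    """Total COX-2 score = distribution x intensity (range 0-12);
    0-5 is classed as low expression, 6-12 as high."""
    return 'high' if distribution * intensity >= 6 else 'low'
```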
As for vascular and peritumoral invasion, a significantly higher proportion of patients in groups I (complex carcinoma) and IV (mixed carcinoma) did not present these types of invasion compared with the patients included in the other groups of diagnosed tumours (χ 2 ; P < .001).
Regarding the relationship between histological diagnosis and ER, PR and HER2 expression, the proportion of ER+/PR+/HER2− patients in group I (complex carcinoma) was significantly higher than the proportion of patients with this immunophenotype in the other groups (χ2; P < .001). In contrast, the percentage of patients with ER−/PR+/HER2− expression was higher in groups IV (mixed carcinoma), VII (carcinosarcoma/adenosquamous carcinoma/other special types of carcinoma), V (invasive micropapillary carcinoma/comedocarcinoma/solid carcinoma) and VI (ductal carcinoma/intraductal papillary carcinoma) than in the other groups (χ2; P < .001).
| Clinical and pathological variables associated with ER, PR and HER2
The association between the variables ER, PR and HER2 and clinical stage was significant. There was a higher proportion of ER + /PR + / HER2 − tumours in stage I (73.9%, n = 17) than in the rest of the clinical stages, while triple-negative tumours corresponded to the highest stages (III, IV and V) (χ 2 ; P = .014). Additionally, considering the histological grade of malignancy, the proportion of patients with high-grade tumours was larger (46.2%, n = 6) among those with triple-negative mammary tumours, followed by patients with ER − /PR + /HER2 − tumours (28.9%, n = 11) and the lowest proportion was detected in patients with ER + /PR + /HER2 − tumours (8.7%, n = 2) (χ 2 ; P < .001).
In the triple-negative carcinomas, evidence of peritumoral invasion was found in 53.8% (n = 7) of the tumours compared with 34.2%
| Clinical and pathological variables associated with COX-2 expression
No statistically significant association was found between clinical stage and COX-2 enzyme expression. In contrast, when analysing the relationship between histological diagnosis and COX-2 expression, the percentage of patients with complex carcinoma and a low COX-2 score was significantly higher than in the other groups. Moreover, no statistically significant relationship was found between the expression of this enzyme and vascular or peritumoral invasion.
| Prognostic significance of the histological classification
Patients with complex carcinoma were found to present the lowest incidence of distant metastasis (15%, n = 3). In contrast, patients of group III (anaplastic carcinoma/inflammatory carcinoma) developed the highest number of distant metastases (85.7%, n = 6), with a statistically significant difference (χ 2 ; P = .028) between the two groups.
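The group I vs. group III comparison is a 2×2 contingency test. A minimal Pearson chi-square sketch (no continuity correction; the quoted P = .028 presumably comes from the full multi-group table, so the two-group statistic below is only illustrative):

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]
    (rows = patient groups, columns = metastasis yes/no)."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Group I: 3 of 20 with metastasis; group III: 6 of 7 (figures from the text)
stat = chi_square_2x2(3, 17, 6, 1)
```

The two-group statistic works out to about 11.7, well above the 3.84 critical value for one degree of freedom at α = .05.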
The survival analysis for the groups according to their histological diagnosis showed that the patients with the highest probability of survival were those diagnosed with complex carcinoma (group I) with an estimated survival time of 62.7 months (95% CI: 52.9 and 72.5), followed by patients with group VI carcinomas (ductal carcinoma/ intraductal papillary carcinoma) with 46.4 months (95% CI: 26.9 and 65.8). In contrast, patients with group III tumours (anaplastic carcinoma/inflammatory carcinoma), showed the lowest estimated survival probability with 5.7 months (95% CI: 0.0 and 12.2) ( Table 1). The survival rate of each group using the Kaplan-Meier curve is shown in Figure 2.
Univariate analysis using the chi-square test showed statistically significant differences (P < .005) in survival according to histological subtype (Table 2). However, in the Cox regression no statistically significant relationship was observed between histological subtype and survival (P = .06) or incidence of metastasis (P = .06). … (Table 2) or Cox regression showed no significant results (P = .7). Univariate analysis using the chi-square test showed statistically significant differences (P = .02) in survival according to COX-2 expression (Table 2). The analysis using the Cox regression model did not show a statistically significant relationship between COX-2 expression and the appearance of distant metastases (P = .1) or patient survival (P = .3).
| Multivariate survival analysis
A multivariate analysis was performed to assess the joint effect of histological diagnosis, ER, PR and HER2 expression, histological grade of malignancy, peritumoral and vascular invasion and COX-2 enzyme (independent variables) on the follow-up variable death (dependent variable) ( Table 2). Only the variable peritumoral invasion (P < .001) remained as an independent prognostic factor for death in the final model.
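The multivariate model is the standard Cox proportional hazards regression: the hazard for a patient with covariate vector $\mathbf{x}$ (histological group, immunophenotype, grade, invasion status, COX-2) is modelled as

$$h(t \mid \mathbf{x}) = h_0(t)\,\exp\!\big(\beta_1 x_1 + \cdots + \beta_p x_p\big),$$

and the coefficients are estimated by maximizing the partial likelihood over the observed death times,

$$L(\boldsymbol{\beta}) = \prod_{i\,:\,\delta_i = 1} \frac{\exp(\boldsymbol{\beta}^{\top}\mathbf{x}_i)}{\sum_{j \in R(t_i)} \exp(\boldsymbol{\beta}^{\top}\mathbf{x}_j)},$$

where $\delta_i = 1$ marks an observed cancer-specific death and $R(t_i)$ is the set of patients still at risk at time $t_i$. A covariate remains in the final model, as peritumoral invasion does here, when its $\beta$ differs significantly from zero after adjustment for the other covariates.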
| DISCUSSION
Canine mammary tumours occur in elderly females, usually between 8 and 10 years of age, and the age of onset may vary according to the natural life span of the breed.19,27 This is especially significant in Europe, where females are not spayed at an early age,28 which coincides with our results.
As regards the size of the patients, our findings are in line with other epidemiological studies where small and miniature breeds are over-represented. 1,29,30 However, Itoh et al indicated that small breeds are the least predisposed to mammary carcinoma. 31 In general, our study shows a greater prevalence of pure-bred individuals vs mixed breeds, as reported by other authors, 25,32 although the presentation will vary depending on the geographical area being analysed.
In relation to the histological diagnosis, the most commonly diagnosed tumour types were complex carcinoma, followed by mixed carcinoma and carcinosarcoma/adenosquamous carcinoma/other special types of carcinoma, as also reported by other authors.1,8,30,33,34 Our data confirm those of other authors, who have found a statistically significant association between a better evolution for complex and mixed carcinomas and a worse prognosis for other types of mammary tumours such as inflammatory carcinomas,35 anaplastic carcinomas,10 carcinosarcomas, comedocarcinomas, adenosquamous carcinomas and simple carcinomas, with high rates of local recurrence and metastases.34,36 Despite the strong association between the histological classification of canine mammary tumours and survival, the multivariate analysis showed that it is not significant, confirming that it represents a weak prognostic factor, rarely retained in multivariate survival analyses in breast cancer.37 Moreover, in line with other studies, we observed a strong relationship between histological diagnosis and other clinicopathological parameters with prognostic value, such as clinical stage,36 grade of malignancy5,6,36 and presence of vascular and peritumoral invasion.5,7 Unaltered canine mammary tissue and benign neoplastic processes express both ER and PR,38 while low ERα expression has been associated with malignant neoplastic mammary processes with a worse prognosis for the patient.39 It has been shown that primary and …
[Table 1: estimation of cancer-specific death (months) for each of the groups classified by histological diagnosis.]
Several studies on COX-2 expression have shown that immunoreactivity is more frequent and more intense in malignant mammary tumours than in benign ones, as occurs in breast cancer in women; the reported percentages of COX-2 expression vary from 56%45 … 49 as established for breast cancer in women.48 Our work supports this evidence, since patients presenting tumours with high COX-2 expression are less likely to survive than those with low expression. However, this parameter loses its prognostic value in the multivariate analysis, as reported previously by other authors.18 In conclusion, patients with tumours classified as group III (anaplastic carcinoma/inflammatory carcinoma) tend to be in stage V of the disease, have a high histological grade of malignancy, show both vascular and peritumoral invasion, and are associated with a poor prognosis.
As regards ER, PR and HER2 immunoexpression, tumours without ER, PR and HER2 (triple negative) are associated with stage III, IV and V of disease, a high histological grade of malignancy, the presence of vascular and peritumoral invasion and a poor prognosis; with ER positive tumours being the ones with the best prognosis.
As for the COX-2 enzyme expression, high expression for this enzyme was associated with carcinomas included in group VII (carcinosarcoma/adenosquamous carcinoma/other special types of carcinoma) and a poor prognosis.
Finally, in the multivariable model only peritumoral invasion was found to be an independent prognostic factor.
New Findings on the Sperm Structure of Tenebrionoidea (Insecta, Coleoptera)
Simple Summary
Tenebrionoidea, with more than 30,000 described species and 30 currently recognized families, is a superfamily of difficult taxonomy. The aim of this work is to support the basal position of the Mordellidae among the beetle tenebrionoids. They have a low number of sperm cells per cyst, in contrast to the more derived families of the group; moreover, their sperm are not distributed in two bundles at the opposite poles of the cysts, as occurs in the higher taxa, but their sperm flagella form a loop in the median region so that the sperm nuclei are positioned close to the tail end. The sperm structure of two members of higher families, Oedemeridae and Tenebrionidae, is investigated to confirm the data mentioned above. The sperm looping, which also occurs in the closely related Ripiphoridae, could be the consequence of the growth asynchrony between the cyst size and the sperm length. The Mordellidae sperm are characterized not only by small mitochondrial derivatives and accessory bodies, but also by a peculiar stiff and immotile thin flagellar posterior region provided with only accessory tubules.
Abstract
The sperm ultrastructure of a few representative species of Tenebrionoidea was studied. Two species belong to the Mordellidae (Mordellistena brevicauda and Hoshihananomia sp.), one species to Oedemeridae (Oedemera nobilis), and one species to Tenebrionidae (Accanthopus velikensis). It is confirmed that Mordellidae are characterized by the lowest number of spermatozoa per cyst (up to 64), a number shared with Ripiphoridae. In contrast, in the two other families, up to 512 spermatozoa per cyst are observed, the same number present, for example, in Tenebrionidae. Also, as in the other more derived families of tenebrionoids studied so far, during spermatogenesis in O. nobilis and A. velikensis, sperm nuclei are regularly distributed in two sets at opposite poles of the cysts. On the contrary, the Mordellidae species do not exhibit this peculiar process.
However, during spermiogenesis, the bundles of sperm bend to form a loop in their median region, quite evident in the Hoshihananomia sp., characterized by long sperm. This process, which also occurs in Ripiphoridae, probably enables individuals to produce long sperm without an increase in testicular volume. The sperm looping could be a consequence of the asynchronous growth between cyst size and sperm length. The sperm ultrastructure of the Mordellidae species reveals that they can be differentiated from other Tenebrionoidea based on the shape and size of some sperm components, such as the accessory bodies and the mitochondrial derivatives. They also show an uncommon stiff and immotile posterior flagellar region provided with only accessory tubules. These results contribute to a better knowledge of the phylogenetic relationship of the basal families of the large group of Tenebrionoidea.
Introduction
The Tenebrionoidea constitute one of the largest and most complex superfamilies of beetles [1,2]. A molecular study on the superfamily suggested that it is monophyletic, and four clades have been proposed within the group; among these, the ripiphorid-mordellid-meloid clade was considered the most basal in the superfamily [3]. Likewise, Bocak et al. [4], also based on molecular data, considered these three families to be closely related; however, in their study the clade formed by them occupies the most derived position relative to the other Tenebrionoidea. On the other hand, studies by Zhang et al. [5] and McKenna et al. [6], based on extensive gene sampling, maintained Ripiphoridae and Mordellidae as sister groups in a more basal position of the tenebrionoid tree, while Meloidae appears in a higher position in this tree.
The structure of tenebrionoid sperm is known from Baccetti et al. [7], Dias et al. [8,9], Dallai [10], and Folly et al. [11]. These works have well established that within this group of beetles, the sperm are characterized by a short acrosome, a cylindrical nucleus, and a flagellum with a 9 + 9 + 2 axoneme flanked by two long mitochondrial derivatives and two cylindrical or elliptical accessory bodies [9]. This model, however, varies among the different families, mainly in the shape of the accessory bodies. In most beetles, testicular sperm bundles, formed at the end of spermatogenesis by cell divisions, contain up to 256 (2⁸) cells. In Tenebrionoidea, this number usually rises to 512 (2⁹), but there are species where this number is only 64 (2⁶), and there are also species where bundles contain 1024 (2¹⁰) spermatozoa.
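The relationship between spermatogonial division cycles and sperm per cyst described above is simple doubling arithmetic. A minimal sketch (the counts are those reported in the text; the helper name is ours):

```python
def sperm_per_cyst(division_cycles: int) -> int:
    """Each cell-division cycle doubles the cell count, so one
    spermatogonium yields 2**n spermatids after n cycles."""
    return 2 ** division_cycles

# Counts reported for beetles (see text):
assert sperm_per_cyst(8) == 256    # most beetles
assert sperm_per_cyst(9) == 512    # usual in Tenebrionoidea
assert sperm_per_cyst(6) == 64     # some species (e.g. Mordellidae)
assert sperm_per_cyst(10) == 1024  # some species
```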
Different morphological cellular mechanisms along the spermatogenic process in insects have been shown to be a source of variability in the arrangement of sperm within cysts [12,13]. According to Dias et al. [8], one characteristic shared by members of several families of Tenebrionoidea is that the spermatozoa do not maintain a single orientation within the cyst, as usually occurs in insects. Instead, during spermiogenesis, their nuclei migrate towards two opposite poles of the cyst, forming two sets of sperm with antiparallel orientation. A unique spermatogenesis mechanism was also described for the hemipterans Planococcus citri (Pseudococcidae) [14] and Kerria chinensis (Kerriidae) [12]: in these species, two sperm bundles are separated at the end of spermiogenesis as the result of inverted meiosis, i.e., by a mechanism different from that of the tenebrionoids described so far. Studying the sperm ultrastructure of some members of three families considered basal [9], it was concluded that Ripiphoridae and Mordellidae have a close phylogenetic relationship, whereas Meloidae would be better placed in a more advanced position in the superfamily, as was also suggested, based on molecular data, by Zhang et al. [5] and McKenna et al. [6]. The present work aims to improve our knowledge of tenebrionoid sperm structure, extending the study to other families of the group. In particular, we have examined two new members of Mordellidae, one member of Oedemeridae, and one of Tenebrionidae. The results obtained confirm our previous conclusions [9] and give details on a peculiar process, sperm looping [13], which occurs in the testicular cysts of Mordellidae and allows the sperm to be compacted in testes of reduced size.
Materials and Methods
The following species were studied in the present work: Mordellistena brevicauda and Hoshihananomia sp. (Mordellidae), Oedemera nobilis (Oedemeridae), and Accanthopus velikensis (Tenebrionidae).
Light and Epifluorescence Microscopic Preparations
Males of M. brevicauda, O. nobilis and A. velikensis were anesthetized with ether and dissected in 0.1 M phosphate buffer pH 7.2 with 3% sucrose (PB) to remove the genital system. A drop of sperm, removed from the deferent duct and seminal vesicles, was spread over histological slides and photographed with a Leica DMRB phase-contrast light microscope. The length of spermatozoa was measured using ImageJ software. For the visualization of sperm nuclei, cells were spread on a histological slide, a drop of 1 µg/mL of the DNA-specific dye Hoechst in 0.1 M PB was added, and the sample was finally covered with a glass coverslip. Fluorescence observations of the labelled samples were carried out with a Leica DMRB light microscope equipped with a UV light source, fluorescein and UV filters, and a Zeiss AxioCam digital camera with dedicated imaging software.
To observe the entire cysts in Hoshihananomia sp., testes were dissected in PB and transferred to a 2% acetic-orcein solution. After 20 min, the follicles were placed on histological slides with a drop of acetic-orcein solution, dissociated using needles, and covered with coverslips. For testicular histology, testes were fixed in 2.5% glutaraldehyde solution in 0.1 M phosphate buffer, postfixed in 1% osmium tetroxide, dehydrated in alcohol solutions, and embedded in Historesin ® . Semithin sections (0.5 µm thick) were stained with Giemsa for 15 min. To measure sperm length, the cells from the vas deferens were spread on histological slides and stained with Giemsa for 15 min. For nuclear size observations and measurements, some samples were stained for 20 min with 0.2 mg/mL DAPI, washed in distilled water, and mounted with 50% sucrose in PB.
Scanning Electron Microscopy (SEM)
Mature spermatozoa taken from the seminal vesicles and deferent ducts of Mordellistena brevicauda were spread onto coverslips previously treated with poly-L-lysine. The coverslips were placed in 2.5% glutaraldehyde in PB for 30 min at 4 °C and then rinsed several times in PB. Specimens on glass coverslips were dehydrated in a graded ethanol series and then critical-point dried in a Balzers CPD 030. The coverslips were sputter-coated with about 20 nm of gold in a Balzers MED 010 sputtering device and finally observed in a Philips XL20 SEM operating at 10 kV electron accelerating voltage.
Transmission Electron Microscopy (TEM)
Adult males were dissected in PB to isolate the testes and deferent ducts. The material was fixed in 2.5% glutaraldehyde in PB overnight. After careful rinsing, the material was post-fixed in 1% osmium tetroxide for 2 h. After rinsing, the material was dehydrated with ethanol series (50% to 100%), then transferred to propylene oxide, and finally embedded in a mixture of Epon-Araldite resins. Some material was also treated with tannic acid, omitting osmium fixation. Ultrathin sections were obtained with a Reichert Ultracut ultramicrotome, routinely stained with uranyl acetate and lead citrate, and observed with a TEM Philips CM10 operating at 80 kV electron accelerating voltage.
Adult males of the species Hoshihananomia sp. were dissected, and the removed testes were processed following the conventional transmission electron microscopy protocol. Ultrathin sections (~60 nm thick) were obtained with an ultramicrotome (Leica UC6), contrasted with solutions of 3% uranyl acetate and 0.2% lead citrate, and afterwards examined in a transmission electron microscope (Tecnai G2-12-Spirit Biotwin, FEI) operating at 120 kV at the Microscopy Center of the Federal University of Minas Gerais (CM-UFMG), Belo Horizonte, Minas Gerais, Brazil.
Results
Mordellistena Brevicauda (Mordellidae)
The male reproductive system consists of a pair of testes, each showing 4-5 ovoidal follicles, 480-500 µm long (Figure 1a,b). The testes release their products into long deferent ducts with large seminal vesicles, which fuse at their proximal end to flow into a long ejaculatory duct. At this level, a complex of spheroidal accessory glands (each 380 µm in diameter) pours its secretions. Two of these glands have a long, helical, transparent extension, about 600 µm long (Figure 1a). The follicles contain numerous cysts at different stages of spermatogenesis. The elongated ones (Figure 1c-f) consist of maturing spermatids originating from six cycles of cell divisions of a spermatogonium, giving rise to 64 (=2⁶) cells. Hoechst staining of the long (~260 µm) isolated sperm cysts allowed us to visualize the flagellar ends of the sperm located close to the anterior nuclear regions, all clustered at only one end of the bundle as a consequence of the sperm cells bending by a looping mechanism at about half their length (Figure 1d,e). The anterior sperm region of the bundle shows a twisted appearance, whereas the posterior tail region is stiff and immotile (Figure 1d-g).
Scanning electron microscopic preparations confirmed that the sperm bundles bend at their half-length; the anterior nuclear region often appears disassembled, the middle regions are tightly twisted, and the posterior regions are thinner and stiff (Figure 2a-c). Sperm cysts are shorter than sperm cells. All the sperm from one group show the dynein arms clockwise oriented, typical of sperm observed from the centriolar region towards the tail. In contrast, the sperm from the contiguous group show the dynein arms anti-clockwise oriented, as expected for sperm observed from the tail end (Figure 3a-c). The same pattern is also visible when the cross-section passes through the loop region (Figure 2d). This antiparallel orientation between the two groups of sperm from the same cyst is due to the sperm looping occurring in the cysts, as shown in Figure 1d. The length of the sperm is 290-305 µm, of which 1.3 µm corresponds to a conical acrosome and 15 µm to the nucleus (Figure 1g). The acrosome is slightly elliptical in cross-section, 0.43 µm in diameter, and has a dense perforatorium with the same slightly elliptical shape (Figure 4a,b). The sperm are embedded for almost their whole length in a homogeneous electron-dense material (Figure 3a-f).
After sample processing for either light or SEM observations, this material seems to be removed, and the anterior sperm regions appear separated, while the middle regions are still tightly twisted (Figure 2a-c). The nucleus, 0.5-0.3 µm in diameter, tapers from the base to the apex and contains compact chromatin (Figure 4a-c). It exhibits a flattened side along its length, which is closely associated with the finely granular material of the centriole adjunct (Figure 4b,c). In cross-section, these two structures together give an oval profile near the acrosome (Figure 4b) and a circular one in the basal region (Figure 4d). At the nuclear anterior end, a lateral groove hosts the perforatorium base and, more externally, the asymmetric base of the acrosome (Figure 4b). The posterior nuclear region has, on one side, two cavities hosting the proximal tips of the two mitochondrial derivatives and, on the opposite side, a cavity that houses the centriole (or basal body) (Figure 4c).
The centriole consists of a complex of 9 microtubule doublets, devoid of dynein arms, and a crown of 9 outer accessory tubules (Figure 4c). A cross-section of this region shows how the different sperm components are integrated, including the narrow nuclear lamina (Figure 4c). Beneath the centriole, the nucleus is no longer visible and all flagellar components are evident: a 9 + 9 + 2 axoneme, two similar mitochondrial derivatives, and two small, almost triangular or elliptical accessory bodies (Figure 4c). Small cisterns are present between the axoneme and the mitochondrial derivatives. Towards the posterior flagellar region, one of the mitochondrial derivatives exhibits a larger diameter and ends more proximally than the slimmer one (Figure 4d). The posterior flagellar region, about 100 µm long (Figure 1c-e,g), has a remarkably different organization (Figure 4d,e). In the transition between the conventional flagellar structure described above and the posterior sperm region, the flagellar diameter decreases from approximately 0.5 µm to 0.3 µm, accompanied by the progressive disappearance of the axonemal components. This region is characterized by a very dense material, in which a single mitochondrial derivative and the altered axoneme are embedded. The latter loses first the peripheral microtubule doublets, then the central pair of tubules, and finally the radial links (Figure 4d). Besides the single mitochondrial derivative, only eight accessory tubules are visible in a circle, while one of them is shifted laterally (Figure 4d). Further posteriorly, when no mitochondrial derivatives are visible, the nine accessory tubules take the conventional circular array that continues all the way down to the tail tip (Figure 4e). Due to the lack of axonemal doublets, this entire region must be immotile.
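The looping arithmetic implied by these measurements can be checked directly: a sperm of ~300 µm folded at roughly half its length occupies only ~150 µm of cyst, well under the ~260 µm cyst length. The figures below are taken from this section; the snippet is an illustrative sketch, not part of the original analysis:

```python
sperm_length_um = 300.0  # mature sperm, 290-305 µm (this section)
cyst_length_um = 260.0   # isolated sperm cyst, ~260 µm

# The bundle bends at about half the sperm length (Figure 1d,e),
# so the folded extent along the cyst is roughly half the cell length.
folded_extent_um = sperm_length_um / 2

# The folded sperm fit inside a cyst shorter than the sperm itself,
# which is why "sperm cysts are shorter than sperm cells".
assert folded_extent_um < cyst_length_um < sperm_length_um
```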
Hoshihananomia sp. (Mordellidae)
The reproductive system anatomy of Hoshihananomia sp. is very similar to that of the previous species. Five follicles form each of the two testes; however, their follicles are elongated (400 × 2100 µm, Figure 5a,b) rather than oval. Each follicle is filled with cysts at different stages of spermatogenesis, with the youngest cysts distributed in the anterior and peripheral regions of the follicle.
In contrast, cysts at more advanced stages of maturation are observed throughout almost the entire central region (Figure 6b,c). During spermiogenesis, as the cysts elongate, the sperm bundles spiral (Figure 5c, inset) and fold at their median region, forming a loop approximately 460 µm away from the region of the sperm heads (Figures 5d, 6a and 7a). The loop faces the distal region of the follicle, while the two cystic ends are directed towards the efferent duct (Figure 5b,c). Finally, the two halves of each cyst coil over one another so that the whole forms a double helical structure, which is easily observed in isolated cysts (Figures 5d and 7a) and in cysts sectioned longitudinally (Figure 6a).
In isolated cysts, it was possible to observe that all sperm nuclei are positioned at the same end of the cyst (Figure 7a). Cross-sections showed that each cyst contains up to 64 spermatozoa (Figure 6c), embedded in dense material (Figure 8a,b). The total length of the mature cyst (or sperm bundle) is around 1030 µm, with the region after the loop measuring about 570 µm. At approximately 145 µm before the posterior tip of the cyst, a rapid tapering occurs so that the sperm bundle diameter decreases from 8.5 µm to 4.8 µm, and then to 3.0 µm (Figures 5d, 6a and 7a).
In the vas deferens and seminal vesicles, only individualized sperm were observed, indicating that the sperm bundles were dissociated before or immediately downstream of the testes. In this mordellid, the mature sperm are thin and very long, measuring around 1200 µm in length (Figure 7b). The head region comprises the acrosome and nucleus (Figure 7c,d,f), with an average length of 2.5 µm and 16.3 µm, respectively. Posteriorly, as seen in the bundles, the flagellum shows a long (~145 µm) thinner portion, with the tip (~4 µm) that thickens slightly and exhibits an arrowhead shape (Figure 7b,e). The mature sperm exhibits an apical bi-layered acrosome that consists of an elliptical acrosomal vesicle with a compact and slightly flattened perforatorium (Figure 7d). The nucleus-flagellum transition region is similar to that of the previous species, including anterior projection of the centriole adjunct along the entire nucleus.
The flagellum (Figure 8a) has an axoneme with 9 + 9 + 2 microtubules and a typical intertubular material adherent to the accessory tubules. Two accessory bodies flank the axoneme for most of the flagellar length. They are long and curved in cross-section and have the same size (Figures 6c and 8a). Mitochondrial derivatives, in cross-section, are asymmetrical in size and shape; the thicker is almost circular, located in an opposed position to the axoneme, and with paracrystalline material occupying more than half of its diameter. The thinnest one has a drop-shaped appearance, and the paracrystalline material occupies a large part of its pointed region (Figures 6c and 8a). Between the mitochondrial derivatives and the accessory bodies, there is a discrete dense material connecting these structures, as in other tenebrionoids (Figure 8a). The tail end is characterized by the disappearance of the accessory bodies, mitochondrial derivatives, and the axonemal microtubule doublets, whereas the accessory tubules persist (Figure 8a). In this region, a thick (0.16 µm) and dense intracellular material surrounds the nine accessory tubules (Figure 8a), while the flagellar tips are embedded in a dense amorphous extracellular material (Figures 6a and 8b).
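The cyst dimensions reported for Hoshihananomia sp. are internally consistent: the pre-loop (~460 µm) and post-loop (~570 µm) segments sum to the ~1030 µm cyst, which again falls short of the ~1200 µm sperm, as expected for a looped bundle. A small sketch with the values from this section (illustrative only):

```python
pre_loop_um = 460.0   # sperm heads to loop (Figures 5d, 6a and 7a)
post_loop_um = 570.0  # region after the loop
cyst_um = 1030.0      # total mature cyst (sperm bundle) length
sperm_um = 1200.0     # individualized mature sperm

# The two segments add up to the reported total cyst length.
assert pre_loop_um + post_loop_um == cyst_um

# The cyst is shorter than the sperm it contains: only a folded
# (looped) bundle can accommodate cells of this length.
assert cyst_um < sperm_um
```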
Oedemera Nobilis (Oedemeridae)
In this species, the male genital system (Figure 9a) consists of two large yellow-pigmented testes, which continue into large deferent ducts flowing into a long transparent ejaculatory duct. Two long, whitish accessory glands are also present. The testes contain several elliptical follicles filled with elongated germ cysts. During spermiogenesis, the numerous spermatids within a cyst (about 512, resulting from 9 cycles of cell divisions of the initial spermatogonium) are distributed in two groups with opposite orientations, each consisting of up to 256 cells. At maturity, sperm cysts are fusiform with sperm nuclei clustered at two opposite poles, while the flagella occupy the central region between these poles (Figure 9b-d).
A consequence of this sperm organization, commonly observed in cross-sections, is the presence of sperm flagella with opposite axoneme orientations, deducible from the clockwise or anti-clockwise orientation of the dynein arms on the microtubule doublets (Figure 10d).
The sperm is relatively short, about 105-110 µm long, with an acrosome about 2.0 µm long and a nucleus 16 µm long (Figures 9e and 10b). The conical acrosome has a bi-layered structure; it is 0.4 µm in diameter and has a dense perforatorium that forms an inner ring in cross-section (Figure 10a,b). The cylindrical nucleus has a diameter of 0.56 µm and contains compact chromatin (Figure 10a-c). Its diameter decreases at the posterior tip, which forms two grooves hosting the two mitochondrial derivatives (Figure 10a, inset). The flagellum contains two large, elliptical, equally developed mitochondrial derivatives (0.71 × 0.25 µm); a good portion of the matrix in the region facing the axoneme is crystallized (Figure 10d,e). Two elliptical accessory bodies, with a pointed region opposite to the mitochondrial derivatives, are located lateral to the axoneme; their cortical region has a fine structure (Figure 10c-f).
The axoneme of 9 + 9 + 2 microtubules shows accessory tubules with 16 protofilaments in their tubular wall (Figure 10f,g). A small cistern surrounded by electron-dense material is present between each accessory body and the corresponding mitochondrial derivative (Figure 10f). Towards the posterior flagellar region, the accessory bodies narrow progressively and the mitochondrial derivatives greatly reduce their size (Figure 10d, inset). At the flagellar end (Figure 10g), the two mitochondrial derivatives and the accessory bodies are lost. In the axoneme, the accessory tubules are somewhat distant from the microtubule doublets, which lack dynein arms. Also, in this region, the intertubular material is either very scarce or absent.
Accanthopus Velikensis (Tenebrionidae)
The male reproductive system of this species has two large testes with follicles filled with elliptical germ cysts, 285-300 µm long (Figure 11a-d). Fluorescent dye staining shows that in the cysts the sperm are distributed in two groups, with their nuclei positioned at the two extremities of the cysts, while the flagella extend towards the median region (Figure 11a,b). A cross-section through the cyst reveals that it contains up to 512 germ cells, corresponding to nine cycles of cell divisions of the initial spermatogonium (Figure 12e). Moreover, a cross-section through the middle region of the cyst confirms that sperm cells have axonemes with either clockwise or anti-clockwise dynein arm orientation (Figure 12d), indicative of an opposite sperm orientation.
The sperm is about 240-250 µm long (Figure 11e,f). Apically, it shows a 2.2 µm long conical acrosome with a dense perforatorium (Figure 12a). The 20 µm long nucleus (Figure 11f) is cylindrical, with a constant 0.6 µm diameter (Figure 12a) along its entire length down to its posterior end (Figure 12b,c). The posterior nuclear region is adapted to host the two mitochondrial derivatives, thus becoming progressively narrowed (Figure 12c). Immediately below the nucleus and opposite to the mitochondrial derivatives, the centriole region is evident, from which the axoneme begins (Figure 12b). The latter consists of a 9 + 9 + 2 microtubule complex flanked by two triangular accessory bodies (Figure 12d,f). Two mitochondrial derivatives of similar size (0.65 × 0.30 µm), with their region facing the axoneme crystallized, are visible (Figure 12d). At the tail end, only the axoneme, with doublets devoid of dynein arms, is still visible; the two mitochondrial derivatives and, more distally, the accessory bodies disappear.
Discussion
The sperm ultrastructure of the four species studied here not only confirms our previously published data (see [9]) but also provides new data useful for a better understanding of the phylogenetic relationships between the various families of Tenebrionoidea. In Oedemeridae, the new family examined here, spermatozoa show the same organization seen in the other advanced members of the superfamily [7-9,11]: a short two-layered acrosome, a relatively long nucleus, and a flagellum comprising an axoneme with 9 + 9 + 2 microtubules, two thick, similar mitochondrial derivatives, and two elliptical accessory bodies. However, the features supporting the positioning of this family together with the advanced tenebrionoids are the presence of numerous sperm per testicular cyst and the antiparallel arrangement of the spermatozoa within the cyst. These two remarkable features are shared by all tenebrionoid families studied so far, except Mordellidae ([9], this study) and Ripiphoridae [9]. The unusual arrangement of sperm within the cysts was initially described in Tenebrionidae by Dias et al. [8]. These authors demonstrated that this disposition begins in the first stages of spermiogenesis; as the flagella elongate, half of the nuclei migrate to one pole of the cyst and the other half to the opposite pole, ultimately forming two antiparallel sets of sperm per cyst, which are easily observed using DNA stains. Observation by transmission electron microscopy further demonstrated that this arrangement can be inferred from the clockwise and counter-clockwise orientations of the axonemal microtubule doublets in cross-sections of cysts (see [9]) and that this character is common to most Tenebrionoidea. The number of sperm per cyst is a consequence of the number of division cycles that the initial spermatogonium undergoes in the early stages of spermatogenesis, a number considered constant for each species. In four species of Tenebrionidae [8], as well as in A. velikensis and O. nobilis of this study, up to 512 (=2⁹) sperm per cyst were observed, indicative of nine spermatogonial division cycles. On the other hand, in the tenebrionid Lagria villosa (Fabricius, 1781), 1024 (=2¹⁰) sperm were counted per cyst [15]. Although Zhang et al. [5] suggested that Ciidae is closely related to the Tenebrionidae, in Ceracis cornifer (Melli, 1849), 256 sperm per cyst were observed [11], and the same number was found in four species of Meloidae [9]. In families of Tenebrionoidea considered basal, such as Ripiphoridae and Mordellidae, the number of sperm per cyst is lower, reaching 64 (=2⁶) ([9], this study). Commonly, the works on Oedemeridae, Pythidae, Meloidae, Ciidae, and Tenebrionidae placed these groups in all the main branches of the phylogenetic trees above the more basal branch of Mordellidae and Ripiphoridae [4-6,15,16]. Thus, it is possible to suppose that the antiparallel disposition of the sperm inside the cyst, and their high number per cyst, are characteristics that arose in the common ancestor of all Tenebrionoidea except Mordellidae and Ripiphoridae. These data also indicate that the proposition by Virkki [17] and Lachaise and Joly [18], that a relatively low number of sperm per cyst would indicate a derived character state compared to cysts containing a relatively high number, cannot be applied to all Tenebrionoidea, since recent studies have indicated that Ripiphoridae and Mordellidae form a sister group to all other Tenebrionoidea [5,6].
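Conversely, the observed sperm count per cyst directly reveals the number of spermatogonial division cycles (the base-2 logarithm of the count). A sketch tabulating the taxa discussed above (counts from the cited works; the grouping labels are ours):

```python
import math

sperm_per_cyst = {
    "Mordellidae / Ripiphoridae (basal)": 64,
    "Meloidae; Ceracis cornifer (Ciidae)": 256,
    "A. velikensis, O. nobilis, four Tenebrionidae": 512,
    "Lagria villosa": 1024,
}

for taxon, count in sperm_per_cyst.items():
    cycles = int(math.log2(count))  # n division cycles yield 2**n cells
    assert 2 ** cycles == count     # counts are exact powers of two
    print(f"{taxon}: {count} sperm per cyst -> {cycles} division cycles")
```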
Within the family Mordellidae, Mordellistena sp. [9] and Hoshihananomia sp. have giant sperm, 1230 µm and 1030 µm long, respectively, while those of M. brevicauda, at about 290 µm, are comparatively very short. Based on molecular data, Batelka et al. [19] showed, in one of their cladograms, the genera Mordellistena and Hoshihananomia in the most basal and most derived branches, respectively. Thus, it is possible to assume that giant sperm are the synapomorphic condition for Mordellidae, whereas the short sperm observed in M. brevicauda represent a derived condition within the family. Among all the tenebrionoids studied so far, giant spermatozoa were observed only in Mordellidae and Ripiphoridae ([9,20], this study), which is consistent with the proposal that both families are closely related and form a sister group of all other Tenebrionoidea [5,6].
A new and peculiar finding observed in the two Mordellidae species studied here, and possibly also present in Mordellistena sp. and the ripiphorid Ptilophorus dufourii [9], deserves discussion. The sperm bundles in these species bend at half their length, forming a loop at this point. In Hoshihananomia sp., probably because the sperm are very long, it is easy to see that the two folded halves spiral over each other. In M. brevicauda this cystic organization is less evident, as the sperm length is only about one-quarter of that in Hoshihananomia sp.; however, a twisting of the median region of the sperm bundle is also evident in this species. Thus, in both species the cyst shows a supercoiled organization. The folding of the sperm bundle into a loop, the spiraling of the two halves, and the twisting of the sperm are probably spermatogenic mechanisms resulting from evolutionary innovations that enabled males with extremely long sperm to produce them in sufficient quantities to be competitive, even with relatively small testes. The folding of the sperm bundle within a cyst in the form of a loop begins in the early stages of spermiogenesis and, according to Syed et al. [13], is the result of asynchrony between the elongation of the spermatid tails and the increase in the plasma membrane of the surrounding somatic cells. It is important to point out that this peculiar cystic organization occurs in mordellids with very long sperm, namely Hoshihananomia sp. and Mordellistena sp. [9], as well as in those with very short sperm, such as M. brevicauda. Therefore, this feature must be common to the entire family and was likely present in the common ancestor of Mordellidae and Ripiphoridae, sister taxa that share this cystic organization.
Testicular follicles in Hoshihananomia sp. are elongated and approximately four times longer than those of M. brevicauda, which are ovoid. In tenebrionids the testicular follicles are oval, with the cysts at different stages distributed in distinct transverse zones: the youngest in the distal zone and the most advanced (about 100 µm long) in the zone proximal to the efferent duct (unpublished data). Such a feature differs from what was observed in Hoshihananomia sp., in which the long cysts are distributed along almost the entire central region of the follicles. These differences seem to support a correlation between sperm and testis lengths and the organization of cysts within follicles.
Mordellidae sperm ([9], this study) are characterized by a long posterior flagellar tip. In many insects this region is affected by axoneme disorganization, with the microtubule doublets becoming grossly irregular structures lacking their dynein arms, as described in the orthopteran Gryllotalpa gryllotalpa [10,21], the zorapteran Zorotypus caudelli [10], and the dipterans Drosophila melanogaster and Bactrocera oleae [10]. In the mordellids, however, axonemal degeneration occurs by loss of the central 9 + 2 microtubule complex, while accessory tubules persist throughout this entire region. Due to the lack of microtubule doublets, and consequently of dynein arms, this region is stiff and immotile. Also in Mordellidae ([9], this study), unlike the other Tenebrionoidea, there is a compact material among the sperm in the cysts; in the sperm, the centriole adjunct flanks the entire nucleus, and the posterior end of the tail is long, formed only by axonemal accessory tubules embedded in dense intracellular material. The absence of these features in ripiphorids, and the phylogenetic distance between Mordellistena and Hoshihananomia [19], suggest that they may be unique traits (autapomorphies) of Mordellidae. Furthermore, these traits may constitute good phylogenetic signals for understanding the relationships between the basal families of Tenebrionoidea. Alternatively, as suggested by Hunt et al. [15], the clade at the base of Tenebrionoidea could comprise Ripiphorinae and Mordellinae together with two subfamilies of Lymexyloidea.
The sperm flagellar structure of insects is also characterized by the size and shape of the mitochondrial derivatives and the accessory bodies [10]. In all the species studied here, the two mitochondrial derivatives have a similar shape; however, these structures are more developed and elliptical in O. nobilis and A. velikensis, while in the two species of Mordellidae they are smaller and oval. The two accessory bodies flanking the axoneme are of primary importance for assessing the relationships between the families of Tenebrionoidea. It has already been pointed out that the oval or elliptical shape of these structures in cross-section is typical of various species of Tenebrionidae, such as Tenebrio molitor or Tribolium castaneum [7,9], and also of A. velikensis studied here. This shape, with few variations, is the commonest feature across the whole superfamily, and it was also found in Meloidae [9,20]. O. nobilis, a member of the new family studied here, has elliptical accessory bodies with a pointed apical side. By contrast, the Mordellidae species have smaller accessory bodies with an almost triangular shape ([9], this study).
As noted in the Introduction, Levkaničová [3] and Bocak et al. [4] considered Ripiphoridae, Mordellidae, and Meloidae closely related. However, the latter authors considered the clade comprising the three families the most derived, while Levkaničová considered it the most basal. From the testicular and sperm morphology ([9,20], this study) it is possible to assume that Mordellidae and Ripiphoridae, but not Meloidae, share a recent common ancestor, a condition also proposed by Zhang et al. [5] and McKenna et al. [6] from extensive gene sampling. They further suggested that the clade with these two families lies at the base of the superfamily, a position that may well be supported by sperm and testicular morphology, although this type of data should be extended to other families and to groups closely related to Tenebrionoidea, such as Lymexyloidea.
Conclusions
This study confirms that Mordellidae are closely related to Ripiphoridae and that both occupy a phylogenetically distinct position relative to other tenebrionoids. In mordellids, the number of sperm per cyst is low compared to the Oedemeridae and Tenebrionidae, also studied here. Furthermore, in these latter two families, the sperm of the same cyst are distributed in two antiparallel sets, as in the other families of Tenebrionoidea, except mordellids and ripiphorids. In these latter families, the long sperm cells exhibit the same orientation within each cyst, and the sperm bundle forms a loop approximately at half its length to be contained within a smaller cyst. Also, the sperm are characterized by thin mitochondrial derivatives and accessory bodies and, in mordellids, by a long, rigid, immotile posterior flagellar region in which accessory tubules are embedded in dense intracellular material.
Past! Future! In Extreme!: Looking for Meaning in the "New Romantics," 1978-82
First used in 1980, "new romantics" was a term applied to describe a British youth culture recognized initially for its sartorial extravagance and penchant for electronic music. Closely associated with the Blitz nightclub in London's Covent Garden (as well as milieus elsewhere in the UK), new romantics appeared to signal a break from the prescribed aesthetics and sensibilities of punk, rejecting angry oppositionism for glamour and aspiration. In response, cultural commentators have often sought to establish connections between new romanticism and the advent of Thatcherism and "the 1980s." This article challenges such an interpretation, offering a more complex analysis of new romanticism rooted in nascent readings of postmodernism. It also shifts our understandings of the periodization of postwar British history and the concept of "popular individualism," arguing that youth culture provides invaluable insight both to broader processes of sociocultural change and to the construction of the (post)modern self.
The video starts with piano chords and a synth-propelled rhythm. The song is "Visage" by Visage, a chart-bound record released in July 1981. Steve Strange (Stephen Harrington) looks at the camera, the frame cutting to and from a pencil-drawn sketch of his elaborate makeup coming into being. As the song kicks in, Strange heads to the Blitz nightclub in a chauffeur-driven vintage car. He is accompanied by Perri Lister and Lorraine Whitmarsh, two women dressed chicly and identikit: black hats, grey suits, belts around their waists, red lipstick to offset mascaraed eyes. Poses are pulled as Strange sings the first verse: "New styles. New shapes. New modes: that's the role my fashion takes." As they enter the club, down steps into a new space reimagined from a small wine bar into an alluring simulacrum conjuring Hollywood images of speakeasy America or a pre-war European cabaret, smartly dressed barflies turn to stare, watching as Strange and his companions promenade. "Oh my visage," he croons, reveling in the attention. Next, they enter a fashion shoot and turn the pages of a mocked-up Vogue with Strange on the cover in drag. "Vis-u-als. Mag-a-zines. Reflect styles: past, future, in extreme." Strange's outfit changes from scene to scene: now he exhibits a bolero look caught in the slow-motion flicker of a celluloid light that complements the pulsating disco beat. He and his associates watch themselves on the screen, moving before it, then tearing it down as they turn to face the glare. They re-enter the club and step onto the dancefloor, their studied poise caught in a smoky haze. New outfits again. They have become their own movie. They have stepped out of a magazine. Fantasy becomes life. Life becomes fantasy. The synthetic combines with the imaginative to invent glamorized visions of the past caught freeze-frame in the present. "Oh my visage," Strange croons once more. "Oh my visage."1
The affectations of Strange and his group have since become a defining motif of the 1980s. As co-convenor, with Rusty Egan, of a series of ongoing club nights at Billy's in Soho and then Blitz in Covent Garden, Strange helped spearhead a distinct youth cultural style that served also to reinvigorate London's nightlife. After much prevarication, "new romantic" became the media label of choice, a term first associated with Blitz regulars Spandau Ballet in the early months of what became a stellar pop career. Following Betty Page's (Beverly Glick's) "The New Romantics: A Manifesto for the 80s," a feature on the band that appeared in the music paper Sounds in September 1980, the moniker entered the cultural lexicon. Writers both astute and bemused tried to explain this mishmash of elaborate fashion and electronic sounds that stoked "controversy in the youth market" for the "first time since the Sex Pistols [and punk]."2 Visage, Spandau Ballet, a reconfigured Ultravox, and, a little later, Culture Club emerged through Blitz to become household names. Alongside Duran Duran and others, they rode a "new pop" wave that washed over America in another "British invasion" primed to galvanize a music industry adapting to a range of technological changes and challenges.3
The sartorial and musical influences of David Bowie and Roxy Music were common denominators: Bowie, in return, recruited Strange (along with Blitz regulars Judith Frankland, Darla Jane Gilroy, and Elise Brazier) to appear in the video for his number one single "Ashes to Ashes" (1980). Style-wise, highly individualized looks were continually adopted and adapted across genders, drawing from what was soon read as a "postmodern" plundering of the past. These included historic militaria and iconic cinema; Weimar chic and TV sci-fi; decorated religious garb and Japanese geisha; Little Lord Fauntleroy and Elizabethan court dress; a "psychedelic Regency" look of silk and velvet that New Society's Yvonne Roberts described as "Lucille Ball meets Beau Brummell," replete with makeup and hair set just-so.4 To this end, Strange remembered scrambling time by "constantly looking through history books, old film magazines and design articles, trying to come up with ideas for new images," while his friends experimented similarly with a range of outré cuts, cloths, and accoutrements (Figure 1).5 Though always a contested label, new romanticism briefly encompassed everyone from the Burundi-beat piracy of Adam and the Ants to the languidly somber avant-pop of Japan and the futuristic dance performances of Shock. It became a catch-all term to define what one short-lived magazine recognized as – and thus named itself after – the New Sounds New Styles (1981-82) of a new decade.
The media of the time soon registered a fascination with those the Daily Mail described in late 1978 as "heavily made-up […] poseurs" and "peacocks" trying "to bring a little bezazz and brightness to their lives."6 Scholars, however, have shown scant interest in the new romantics. As the 1980s unfurled, some attention was paid to what Simon Frith recognized as a shift in pop's critical focus from "the forces of production to the moment of consumption." This meant that new romantics were used as an example of a fracture developing between those holding hard to punk-defined modes of sociocultural critique and others succumbing to "packaged narratives of desire."7 Read as a continuation of 1970s glam, new romantics embraced what the influential music writer Simon Reynolds later described as "fantasy and escapism," informing and responding to the divergent "post-punk" stylings of youth culture indicative of the period.8 Indeed, Jon Stratton's 2022 Spectacle, Fashion and the Dancing Experience in Britain concluded that new romantics "exemplified" the "totalising combination of lifestyle consumer goods within the capitalist order, where mundane life was transformed by way of excess […] into spectacle."9

Historians interested in questions of gender and sexuality have on occasion interrogated the distinctive stylings of the "Blitz Kids" (Figures 2-3). Stan Hawkins and Michael Bracewell both applied the concept of the dandy to position the new romantics' "feigned masculinity" and fearlessness of effeminacy as an extension of pop's wider reimagining of gender performance.10 Blitz became what Shaun Cole called a space of "creative self-expression," where genders merged and "a whole host of new images [opened up] for men's dress."11
But as Caroline Evans and Minna Thornton have recognized, this meant signifiers of femininity were often colonized by men who impinged on the "multiplicity of selves" available to women keen to "explore the shifting relationship between being and appearance, [between] seeing and being seen."12 If the model and singer Ronny could switch from looking immaculately stern to Dietrich-androgynous (providing a template for Annie Lennox in the process), then pictures from the time also reveal women in an array of styles compiled to present a rarefied female glamour. By contrast, a band such as Duran Duran was able to construct and perform a post-Bowie "masculinity not yet seen in the absence of the association with homosexuality, and not pejorative in any way." "Gay/straight" semiotics were blurred; the feminine was absorbed rather than rendered abject; masculinity was resignified to present an alternative maleness.13 The boys, it seemed, could have it both ways.
Seeing new romanticism as a spectacular site of desire and/or gender reimagining reveals a youth culture navigating, informing, and absorbing broader processes of sociocultural and socioeconomic change. Yet popular histories of the period remain obsessed with supposed parallels between new romanticism and the "new beginning" promised by Margaret Thatcher in May 1979, imbuing 1980s pop with an aspirational business acumen that strove for success and therefore equated to "Thatcherism."14 To explain, Kari Kallioniemi argues that the marketing of new romanticism (and 1980s pop music in general) became interdependent on a politics and economics that at once facilitated and castigated the (primarily working-) class, gender, and sexual identities embraced by those within the original Blitz milieu.15 Simultaneously, "The Story of the New Romantics" remains, in the media-framed popular memory, the preserve of a few close to the scene.16 In these autobiographical accounts, relations between Blitz and Thatcherism recur, sometimes accepting and sometimes denying the connection. Blitz certainly did host an array of future pop stars, writers, stylists, artists, and designers who helped shape the form and temper of the 1980s. But accounts of new romanticism are too often teleological. In other words, they are recounted to chart the success and renown of certain Blitz habitués or to underscore an often-facile correlation between parliamentary politics and concurrent shifts in style, leisure, media, and consumption.
Contemporary interpretations of the formation of new romanticism tell a different story. Exhuming the influences evident through Blitz and performances such as the video for "Visage" allows new romantics to be understood in the process of becoming, during which time the possibilities of what Raymond Williams called an "emergent culture" remained mutable and open-ended.17 In reclaiming texts from the period, the multi-accentuality of (proto-)new romanticism becomes visible, exposing what its cultural configurations were deemed to mean, or signify, at the moment of codification and commercialization.
If "new media mediates old media,"18 then new romanticism epitomizes the experimentation with new forms of media that transformed the cultural terrain over the late 1970s/early 1980s. This was a time when polaroids, videos, cassettes, and Walkmans became commonplace; when changes in printing techniques helped generate a widening array of colorful and glossy magazines with innovative graphic design; when synthesizers and sequencers began transforming pop's sound; when branding and advertising techniques had profound cultural ramifications.19 Yet proto-new romanticism's initial embrace of technology came with a studied alienation that embodied long-standing fears of automation. It traded in both nostalgia and dystopia, aestheticized for an age yet to come.
These new media and technologies enabled elaborate modes of sometimes fleeting, sometimes enduring self-creation and self-reinvention.20 New romanticism thus exposed how the intricacies of selfhood were encountered and constructed through a nexus of mediated image and spectacle.21 New romantics forged an array of identities from "looks" they adopted, then adapted from visual culture. They performed the multiplicity and instability of a modern self that could forever be reinvented and reproduced, feeding into early postmodern readings of the late twentieth century. Many Blitz kids, including Boy George (George O'Dowd) and Steve Strange, were themselves willingly commodified to become "agents and objects" of cultural production, their assembled self-identities projected and fragmented across the realms of pop music, fashion, and tabloid media.22

Commodification and consumer behavior are central to our understanding of postwar British culture.23 What became new romanticism was at once a product of trends in consumption and an attempt to escape high-street uniformity. The objective was to stand out; to look better; to be distinct, original, and different; to be set against, as much as escape from, the banalities of social and stylistic routine. Hence the emphasis on bespoke design and stylistic bricolage, but also the untethered recycling of stylized images from the past. The tensions between originality and reconstruction, between past and present, between agency and the constraints of social and commercial mores, between marginal and mainstream, between queer and heteronormative are central to parsing new romanticism's engagement with and contribution to Britain's consumer culture.
Consequently, new romanticism helps us to reconsider the post-1945 period in terms not limited to the overly determined periodization of "consensus"/"social democracy" and "Thatcherism"/"neoliberalism."24 What was initially described as a "cult with no name," like the 1980s themselves, might just as well be understood as the product of preexisting developments in technological, sociocultural, and socioeconomic change. By recognizing how longer processes of structural and social transformation framed and underpinned late twentieth-century youth culture, we can understand shifts in discourse and sensibility that both explain and repurpose our postmodern conceptions of time and space.
Like some new romantic looking for the TV sound25

The basic back story has been told a number of times and goes something like this. New romanticism signaled a distinct moment along a cultural trajectory that ran through David Bowie, Roxy Music, soul clubs, gay clubs, punk, disco, and Europhilia toward inspiring a whole host of stylistic ingenuities over the 1980s. Billy's, the small club in Soho where Steve Strange and Rusty Egan presented "Bowie nights" on Tuesdays in late 1978, marked the conception. Blitz, from February 1979 through October 1980, served as the incubator for much proto-1980s creativity, where fashion and aspiration found synergies to fuel new media, music, and lifestyles (Figures 4-5). Born of Blitz came pop stars such as Spandau Ballet and Boy George, ready to revitalize the charts and fill the pages of a pop-centric Smash Hits, soon to become the best-selling music magazine of the 1980s. Their coming of age was ornamented by designers fresh from St Martin's and other London art or fashion schools experimenting with styles en route to prestigious careers, among them Michele Clapton (whose costume designs would win her a BAFTA and three Emmys between 2009 and 2016) and the milliner Stephen Jones (OBE). Holding court and managing the door with discernment was Strange, who, like Boy George and Marilyn (Peter Robinson), cultivated his own distinctive look, playing with and/or wholly subverting notions of masculinity. Egan's job was to provide the soundtrack, moving from the assuredly erudite glam-rock-turned-electronic-pulsebeat of Bowie and Roxy Music toward European electronica and early synth-based pop to herald an eclectic mix of music (e.g., Kraftwerk, Gina X, Telex, John Foxx's Ultravox!, The Normal, Fad Gadget, The Human League, Japan, Simple Minds) that rejected rock's clichés and envisioned a future world of clubs not gigs; of dancing not spectating; of dressing-up not dressing-down; of pop stars not grizzled rockers; of elongated 12-inch singles and stylized videos.
Close by, eager scenesters and fashionistas found space to propagate what they recognized as a "new movement." This allowed Robert Elms to write features for The Face and, with Steve Dagger and Chris Sullivan, ensure new romanticism fed back to embrace a mod-inflected soul boy heritage.26 Blitz provided a "mutual admiration society for budding narcissists," Elms insisted: "[A] creative environment where individualism was stressed and change was vital."27 Though inspired, like Strange and Egan, by the style and energy of the Sex Pistols, and the ways their manager Malcolm McLaren manipulated the media and the music industry to construct a recognizable scene and subculture, they recoiled from the affected yobbishness and earnest social realism that ostensibly defined "punk" by 1978-79. "Punk was a fashion," Sullivan insisted. "It wasn't anything to do with politics and really angry kids."28 Instead, he and others began to imagine, and then start, their own clubs and pop groups, reviving the zoot suit and delving into sounds from the non-rock past (jazz, swing, funk, soul, salsa, Dietrich, Piaf, Sinatra). A glossy "style press" (The Face, i-D, Blitz) emerged in 1980 to disseminate the aesthetic awareness associated with Blitz, meshing music with fashion, design, and art to further eclipse the once preeminent music press (NME, etc.) and evolve toward catalogues of cultural consumption that pertained to construct confident and continually updating selves. By 1983-84, both the pop charts and London's clubland were seemingly transformed. Elms and Dagger had respectively propagated and managed Spandau Ballet to stardom; Sullivan had opened the uber-hip WAG club in Wardour Street; Dylan Jones, later to collate the most extensive oral history of new romanticism, had found his way to i-D, from where the briefly ubiquitous Perry Haines served time as a "consultant" to globetrotting Duran Duran; Boy George's Culture Club had broken America on the back of MTV; stylistic motifs associated with new romanticism had infused Paris catwalks and fed into the visual palette of film and design. The once would-be trendsetters of London's suburbs, squats, and housing estates now defined the times from inside the media, fashion, and music industries.
Of course, a messier history exists beneath the sheen of eighties success. Most obviously, the prevailing narrative has become overwhelmingly male and, given the "gender-bending" heralded as a defining feature of the Blitz crowd, oddly heteronormative. The "blokes" never "mention […] the women in the scene," the designer Fiona Dealey later complained. "The Blitz was our youth club and I feel they hijacked it."29 If Steve Strange and Boy George always get their dues, the narrative tends to follow Elms, Jones, and Sullivan through their club, music, and media careers into the 1990s. "I was fed up of people wandering around with make-up," Sullivan later remembered of his post-Blitz clubs and the formation of Blue Rondo à la Turk in 1981, "fed up of electro music. I wanted to do something with men […] in suits, without a synthesizer in sight. I wanted to do something that represented the heterosexual side of the scene."30 Although Strange opened Club for Heroes in 1981, this soon gave way to a 1982 residency at Camden Palace that The Face's founder Nick Logan felt was "anathema to what had gone before" (that is, commercialized and seemingly behind the times).31 The queerness of Blitz, first signaled by the multi-sexual mix of those frequenting the club, was therefore rerouted back through the gay underground into clubs such as Cha Cha (opened in the back of Heaven in 1981) and on through Philip Sallon's Mud Club (1983) and Leigh Bowery's Taboo (1985).32 It was there, alongside Skin Two, the fetish club opened in 1983, and, perhaps, among the Neo-Naturists who took nakedness into clubs and galleries throughout the 1980s, that we find further continuation of the otherness associated initially with new romanticism.33 Simultaneously, clubs such as the proto-goth Batcave allowed for residual punk-glam influences to find new and elaborate expression from 1982.
The dominant narrative is also decidedly London-centric. Birmingham's Rum Runner usually gets credit for spawning Duran Duran and providing a stage for the flamboyant Martin Degville (an inspiration for Boy George) and the designers Jane Kahn and Patti Bell. But the recurring club nights hosted at venues such as Croc's in Rayleigh, Cagneys in Liverpool, Sherry's in Brighton, Le Phono and The Warehouse in Leeds, Valentino's in Edinburgh, and Cardiff's portable Tanzschau are unfairly presented as merely duplicates trying to recreate London's swish in the provinces (Figure 6).34 For example, Keenan Duffty remembered the New Outlook in Doncaster hosting "a male nun, a Cossack, a Che Guevara lookalike, a bloke in a wedding dress and the Chip Shop King of South Yorkshire," an array of "style misfits" who no doubt encouraged his move to St Martin's and career as a fashion designer.35

Yet, we could instead turn to Manchester's Pips as a precursor to Billy's and Blitz, a nightclub with its own mid-1970s room for Bowie and Roxy Music acolytes to gather in "homemade outfits […] glamourous and beguiling."36 In Leeds, the Adelphi did Bowie nights on a Friday in the late 1970s, while Sheffield's Crazy Daizy put aside Wednesday for Bowie and Roxy fans even earlier in the decade.37 Back in the northwest, Holly Johnson (later of Frankie Goes to Hollywood) remembered how "young people in their droves had been turning up for Roxy and Bowie nights at discos [in the mid-70s] and it was all becoming a bit commonplace."38 Certainly, Johnson's crowd in Liverpool, not to mention the resplendent Pete Burns (later of Dead or Alive),39 comprised one of several comparable milieus to those congregating in Soho and Covent Garden between 1978 and 1980. If Billy's and Blitz overtly conceptualized and aestheticized club nights, there were precedents in embryo, not all of which took place in London.
The stylistic interplay of past, present, and future held evident tensions that similarly problematize new romanticism's origin story. New romanticism's "coming out" party was a Valentine's Day Ball at London's Rainbow Theatre in 1981, a "People's Palace" full of "photographers, professional and amateur," snapping "urgently away at a passing peacock throng that was only too willing to oblige with a pose and a pout."40 In an instant, new romantics were ensnared by the media spectacle, their image reified and style codified as a fashionable mélange of billowing shirts with frilly cuffs, ruffs, sashes, and makeup applied with a futuristic glaze. Adverts, such as for the Leeds-based shop Fab-Gear, appeared almost immediately in the NME, providing identikit outfits for Steve Strange replicas.41 Bands such as Classix Nouveaux were aligned to "the scene" on account of their flamboyant dress and songs of "Night People" and robots dancing. Comedy programs caricatured the artifice and aesthetics of new romanticism. The "Nice Video, Shame About the Song" sketch on BBC's Not the Nine O'clock News from February 1982 was a veritable mix of Weimar chic, synths, makeup, historic costumes, and dramatic color-shifting effects. No longer able to continually transform out of sight, the photos and polaroids snapped in Warholian fashion to simulate stardom in a self-made world after midnight became artefacts of a very particular look, time, and place.42

Equally, however, the aesthetics and moods cultivated at Blitz were dissolving and diverging by the time new romanticism was named. Speaking in 1980, Perry Haines had already waved "goodbye/riddance" to the "Flash Gordon clones" of 1979.44
In the guise of Spandau Ballet, he and Elms heralded a "new movement, as yet unlabelled, arising to tear down the high-tech backdrops that threatened fashion […] A new movement that respects romance and adores the classics." It was "no longer on to look 'strange'," Haines concluded, the aim of his jibe obvious.45 For Sullivan, at least, it was about "turning the clocks back."46 "I wanted to take things back to before punk, to the first clubs I attended […] venues such as Crackers and Lacy Lady that played great Black music, where the dress code entailed '40s, '50s and '60s with a soupçon of now."47 By contrast, Strange saw Visage as "a passport into the new age of the eighties," while Egan always insisted "I was a futurist, I liked new records not retro."48 In other words, he saw the DJ sets that heralded the advent of new romanticism as closer to the broader upsurge of synth-generated music evident in 1978-81 than the lounge music, soul, funk, and salsa sounds enjoyed by Sullivan at the St Moritz club or Le Kilt. Labelled "futurist" in the music press and associated with labels such as Daniel Miller's Mute and Stevo Pearce's Some Bizzare [sic], the moniker loosely incorporated an array of synth-pop and post-punk acts, from Depeche Mode and Soft Cell to even Joy Division and Cabaret Voltaire, whose records were played by Egan but also at "Sci Fi Discos" and "Electro Diskows" across the country (Figures 7-10).49
Although a commitment to nightclub life ensured a semblance of correlation that was retained across the diversifying sounds and styles unfolding post-Blitz, a breaking point was formally declared by Elms in his "Hard Times" article for The Face in September 1982. While the shift toward "men in suits" had first been noted in the winter of 1979, a definite change of sensibility was now intimated, transforming from the bold and the bright to "an entrenched die-hard mentality where 'Good Times' is replaced by 'Money's Too Tight (To Mention)'." The dancing did not stop, Elms reported, but the attitude hardened: "gay abandon has evolved into a clenched teeth determination where […] sweat has replaced cool as the mark of a face." All semblance of dandyism and camp was jettisoned for function: denim was back and the Europhilia associated with Blitz was displaced by Black American influences.50 In the meantime, most post-Blitz pop stars began diluting or rejecting any lingering new romantic styling to concentrate on conquering the charts and infusing the pop plurality of 1981-84.51
These faultlines raise questions as to how (sub)cultural identities, performances, and spaces complicate our approaches to periodization and social change. They reveal disparities between those keen to reconstruct imagined pasts and those preoccupied with continual innovation; between those looking for validation via cultural capital and those prompted by a need for recognition/attention; between those guided by markers of material success and those keen to adopt artifice (and pleasure) as a means to self-reinvention. They further suggest that as much as new romanticism marked the start of something ("the 1980s"), it was also a manifestation of the end of a period that had opened spaces and the means to experiment both creatively and in terms of gender and sexuality. How then was new romanticism understood at its moment of becoming?

Oh look at the strange boy, he finds it hard existing …52

For Raymond Williams, "emergent" cultures encompass "new meanings and values, new practices, new relationships and kinds of relationships […] continually being created." These always relate, in some way or other, both to the "dominant culture" of a particular period and to "residual" elements of the past. "It is exceptionally difficult," he points out, "to distinguish between those which are really elements of some new phase of the dominant culture […] and those which are substantially alternative or oppositional." Nevertheless, emergent cultures are dependent on "finding new forms or adaptations of form," creating or occupying spaces as they come into being.53
53 New romanticism's formative influences and responses to the dominant culture complicate any binary reading of its relationship to the past and present. In tracing new romanticism's becoming, we need to locate the array of emergent looks, sounds, and sensibilities in relation to ongoing socioeconomic transformations (such as deindustrialization, globalization, and financialization) beyond simply the advent of "Thatcherism." We must consider the cultural intervention of punk that critiqued the music industry and stimulated youthful agency; the residual influence of pop and youth culture's past that punk scrambled but never wholly denied; the wider cultural influences that circulated and coalesced through film, fashion, art, and literature; the pertinent sexual and gender politics permeating the 1970s-80s; the technological innovations that brokered affordable new media, fashions, and sounds. 54 Only then do the "styles," "shapes," and "modes" Steve Strange sang about begin to coalesce.
Where to look? Not the weekly music press, which proved slow to recognize (or at least cover) what was happening at Billy's and Blitz. The club- and style-based nature of the scene rubbed against the gig- and record-oriented coverage prevalent in the NME, Sounds, and Melody Maker. 55 Nor does the more chart-focused Smash Hits reveal much before 1981. Galvanized by Gary Numan and Adam Ant heralding the post-punk return of the pop star, attention turned to Spandau Ballet and Duran Duran over 1981-82, with the latter, especially, becoming regular cover stars. Prior to that, Steve Strange was dismissed as "possibly the worst dresser of his generation" but otherwise ignored until a front-page appearance in January 1981. 56 As this suggests, new romantics were reported as pop music rather than stylists, as stars rather than creatives.
The Face, i-D, and Blitz offer richer pickings, though all three magazines started almost at the moment when Strange and Egan vacated Blitz in late 1980. True, the earliest issues of i-D celebrated the multiple street styles so evocative of the immediate post-punk period, pioneering "straight up" photographs that often featured Blitz habitués. 57 It was in The Face, moreover, that Elms pitched Spandau Ballet and presented "the cult with no name" as the harbinger of a new generation. 58 But all three "style bibles" provide better trace of Blitz's immediate legacy than new romanticism's formation. Indeed, it was David Johnson's "On the Line" column for London's Evening Standard and LWT's Twentieth Century Box that proved quicker off the mark. Johnson eagerly propagated the "Now Crowd" living for the moment throughout 1980, while LWT's feature on Spandau Ballet helped generate a buzz around the group as they played invite-only gigs and fostered expectations of becoming the "next big thing." 59 Better, then, to explore the more marginal fashion, society, and arthouse publications circulating in the late 1970s, magazines that fledgling new romantics read and aspired to appear in (Figures 11-14). It was in the likes of Harpers & Queen, Tatler, and Ritz that reports on London's clubland ventured into Billy's and Blitz, aligning glamorous aspiration with high-life glitterati in ways resonant of Roxy Music's stylized visions of penthouse perfection. Featured next to images of film stars and society stalwarts, and alongside clubs such as The Embassy on Old Bond Street, Blitz's youthful coterie appeared like a new generation of decadent dandies and would-be neue-frau bohemians usurping the capital's pleasure domes. Writing in Tatler, the cultural anthropologist Ted Polhemus described Blitz's "low-tech décor of war-time austerity" as a "post-punk kingdom of heaven and hell." Thus, to the "Electro-Diskow" sounds of "German electronic pop with J. G.
Ballard lyrics about love in a crashed car," "a girl dressed like Audrey Hepburn in Breakfast at Tiffany's is dancing with a boy in a jet-black plastic space suit […] His hair slicked back Valentino-style." 60 Polhemus wrote of the Blitz milieu "tapping our image resources" to reenvisage themselves in the context of post-punk Britain. 61 Allusions to Weimar Germany were common, the travails and decadence of pre-war Berlin transferred to the turbulent 1970s, with "cabaret" becoming a buzzword amid the outfits, poses, and demeanors of a crowd schooled by such films as Bob Fosse's 1972 adaptation of Christopher Isherwood's Berlin novels and the aesthetics of German Expressionism.
Likewise, relatively short-lived titles such as Boulevard, Deluxe, Mode Avantgarde, VIZ, and ZG give sight to the cultural references and influences that informed the styles and sensibilities expressed after dark in Soho and Covent Garden. These were magazines of art and fashion, bold on imagery and keen to recognize "hybrid styles" across "diverse areas of cultural activity." 62 Therein we find early reports on Helen Robinson and Stephane Raynor's shop PX, whose "clothes for the modern world" were initially caged in lockers retrieved from the old MI5 headquarters on Curzon Street and sold behind a white steel shutter with a TV monitoring the street. 63 Articles on Jon Baker's Axiom and Willie Brown's Modern Classics (which were both clothing outlets and fashion labels) broadened the stylistic array, making connections between DIY designers and those beginning to garner a reputation. Attention often focused on the self-created looks of Kim Bowen, Melissa Caplan, Judith Frankland, and others, women whose originality retained an otherness that both startled and unsettled. The rouge-smeared eyes of Scarlett Cannon and the dark glamour of Princess Julia (Foder) preserved a trace of punk's provocation and difference. Similarly, the coverage given to Stephen Linard's stunning "Neon Gothic" designs presented at St Martin's in 1980 revealed a fascination with religious imagery that generated an otherworldly effect intensified by the makeup and shaved heads of Michele Clapton and Myra Falconer. As well as pre-"Boy" George's regularly transforming and startlingly imaginative looks, the "space age pope" style of Lee Sheldrick appeared more Klaus Kinski's Nosferatu than 1980s nouveau riche. With an emphasis on "looking radically different" (Cannon) and "being distinct" (Clapton), the innovations of those in and around Blitz stunned as much as seduced (Figures 15-16).
64 Such images were set next to features on the sexually exploratory artwork of Allen Jones and photography of Helmut Newton (who in 1982 shot the cover for the second Visage album). Exhibitions, including ICA and Hayward Gallery shows dedicated to interwar German art and society, were covered. Fashion shoots from 1978-79 reveal military uniform looks akin to those modeled at Billy's and early Blitz, with lines from Ventilo and Hechter offering clues to how PX and Oxfam-sourced variations might be repurposed for nights out. 65 Consummate stylistas-Jordan, Grace Jones, Amanda Lear, Antony Price-featured repeatedly, as did the filmmaker Derek Jarman and references to Andy Warhol, Duggie Fields, Quentin Crisp, Vivienne Westwood, and Malcolm McLaren. With their airbrushed illustrations of faces painted with futuristic cosmetics, 1979 covers of Mode Avantgarde signposted the color palette of the 1980s. VIZ's profile of Richard Sharah revealed the source of Steve Strange's elaborate makeup. Adverts flit from the glamorous to the futuristic, with shops such as Bastet (South Kensington) and Metropolis (Covent Garden) evoking Fritz Lang and presenting in ways that complemented the aesthetics performed in Blitz. In ZG, produced out of St Martin's by Rosetta Brooks, issues were dedicated to themes that "challenge our most deep-rooted orientations to the world whether in terms of art/culture, elite/popular, or male/female": sadomasochism, image-culture, future dread, desire, heroes.
66 A first issue article on "Blitz Culture" wrote of street-level performance art, recognizing "the look" as less to do with achieving a perfect reproduction of a particular style or "cinematic stereotype," and more a play of juxtaposition designed to distort and skew. "Posers," Brooks suggested (citing the photographer Diane Arbus), revealed the flaws or "the gap" between "intention and effect." By so doing, the styles presented at clubs such as Blitz might disturb and unsettle, thereby revealing more leftfield impulses behind the "constantly shifting and symbolic maze." 67 Deviancy and hard drugs have often been written out of the prevailing new romantic narrative. But such darker fascinations, which circulated through these magazines, informed the spirit of Blitz and help contextualize, for example, the OD deaths in Warren Street squats and drug-induced problems that later befell Strange, Boy George, Marilyn, and others once the glitz turned shit. 68 The magazines that began to identify the culture of proto-new romanticism reveal how shared references, interests, aesthetics, and stimuli circulated and contributed to the bricolage that eventually cohered into a nameable subcultural style.
69 Two writers, in particular, offered pertinent insights into the becoming of new romanticism, grappling with cultural formations that bore trace influences and promised new possibilities. In late 1977, Stephen Lavers and Peter York combined to write "The German Connection" for Harpers & Queen. Here they argued that synthesizers and electronics-as pioneered by Kraftwerk (among other 1970s German groups), then adapted by David Bowie with Eno in Berlin, and applied by Giorgio Moroder to Donna Summer's pulsating "I Feel Love" (1977)-proffered a futuristic collision between punk and disco. "In a technological age, a formalized primitivism is an avant garde stance," their article asserted. "The only logical counterpart to modern Ludditism is the cult of the machine. The only counterpart of intellectual primitivism is intellectual futurism. They are, of course, intimately linked." In a time of no future, signaled both by socioeconomic strains and political dissensus, the "arts mafia" were looking to Futurism as an answer to punk's angry despair, York and Lavers argued: that is, seeking an escape from the "horrors of the present" but "accepting the modern world and trying to shape it." 70 "We are the robots," to quote Kraftwerk. 71 Lavers also wrote for Ritz, a paper started in 1976 by David Bailey and David Litchfield as a British equivalent to Andy Warhol's Interview. Attuned as he was to the political and aesthetic innovations of early punk, Lavers recognized Strange's and Egan's vision for Billy's and Blitz as being in line with his own preferred "direction for […] post-punk." 72 "The elements of a successful youth culture," he insisted in August 1979, were "an identifiable genre of music, a distinctive look and, if possible, a radical ideology or mode of behaviour." 73 The music was there, via Egan's electronic soundtrack and Visage's first single release in late 1979.
74 The style was "in embryonic form," developing from a straight Bowie or Bryan Ferry "clone approach" into "a synthesis of the PX 'extraterrestrial uniform,' the [Kraftwerk] 'extremist normality' look [shirt, tie], and a plain monochrome minimalism." Cross-dressing, which Lavers suggested stemmed from Bowie's dragged-up performance for his recent "Boys Keep Swinging" video, was also now apparent. "All that is missing," Lavers concluded, was the "attractive ideology," though he predicted one would soon emerge. 75 York, meanwhile, continued to offer a more detached but equally intrigued analysis from the pages of Harpers & Queen. For York, the Blitz crowd was an extension of the "junior grade Them" he recognized in punk ca. 1976. "Them," York explained, were people who constructed a look and a way of living that was neither socially acceptable nor sexually appealing. They dressed "to look interesting," conflating art school tutelage with camp to forge a "strange sensibility" that allowed "ideas" to be put into everyday life. They were the "cognoscenti of trash" and the "aficionados of sleaze," committing to novelty and being ahead of either fashions or trends. "Them" tended to present as apolitical, instinctively elitist, and concerned primarily with aesthetics. They appeared detached from material reality, living instead in self-made worlds that constructed or reassembled "versions of" a style, film, or photographic image. Circulating in the world of "Them" were Ferry, Bowie, Fields, Jarman, Zandra Rhodes, and Andrew Logan (whose "Alternative Miss World" was a touchstone). Their forebears and influences included Warhol, Crisp, Marcel Duchamp/Rrose Sélavy, and Vogue's Grace Coddington. The first punks-followed by the Blitz crowd-were to York both part of and a reaction to "Them." They comprised a new bohemia "literate in the language of style" but generationally distinct from their forebears, 76 "Post-Modern," even.
77 "Post-Modern," at least for York, suggested an attitude resonant of the 1970s. It pertained to a performance or creation that was "stylish, ambivalent, ironic, eclectic, a touch retro, a bit classy (but that classiness [is] distinctly ironic; post-classless, you understand)." Fragmented, rejected, and revolutionary ideas were assimilated, transitioning from pastiche toward parody and then what York feared would become a meaningless "mush." In other words, the past was looted and reworked until it lost all significance. An "uncomfortable transition" was in play, York fathomed, whereby "period references will be used without any self-consciousness." Having absorbed images from TV and magazines, young people were "cross[ing] borders they no longer see." Thus, Billy's and Blitz were "hopelessly Post-Modern," combining "period idea[s] of the future" with repositioned signifiers of the past. According to York, the question "what is postmodernism?" was doing the rounds among "Them" in 1979, the year in which Jean-François Lyotard's The Postmodern Condition was published. Fredric Jameson and David Harvey had yet to formulate their critiques. The term itself was still in the process of entering the public discourse via books such as David Watkin's Morality and Architecture (1977), a title later borrowed and reversed by Blitz playlist regulars Orchestral Manoeuvres in the Dark for their third album (1981). York used the term vaguely, evoking a structure of feeling rather than a definite concept. Yet we can see here some grasping toward what Jameson would later recognize as a weakening historicity, a fascination with surface dimension, and the collapsing of reality into mediated images driven by new technologies. 79 Initially, then, postmodernism was the lens through which Blitz culture was interpreted, informing the thoughts of influential NME writers such as Paul Morley and Ian Penman as they began to consider "today's usage of yesterday's future visions." 80 Style as self-creation was
the principal motif, a theme Jon Savage also explored in his review of David Bowie's Scary Monsters (and Super Creeps) for The Face in 1980. Bowie was an "agent of transformation," Savage argued. Not only did he liberate "a whole range of fantasies" hitherto repressed, fusing futurism and gender confusion with choice cultural references to fantastical lives and mediated moments (Isherwood's Berlin, Warhol's New York, William Burroughs, Jacques Brel), he also embodied the enticing possibilities of combining pop and style. Now in 1980, Savage continued, Strange was the "most recent, the most absurd, yet [also] the most magnificent exponent of the suburban pose which never dies." 81 At the moment of its becoming, therefore, new romanticism was a mélange of recognizable reference points coalescing as an emergent culture ready to be codified in the context of a new decade with newly positioned politics and economics. That is, the crucibles of new romanticism-Soho and Covent Garden (between the closure of the market in 1974 and the later 1980s retail renovation)-were presented as seedy and desolate settings against which extravagant fashions brought startling effect. "Declinism" and social disrepair were reflected in references to Weimar Germany, a presiding motif of the 1970s present in Bowie's work, early punk, and across cultural and sociopolitical commentary more generally. 82 In response, Blitz enabled an escape into self-created "fantasy worlds," finding space amid deindustrialized or dilapidated shops and clubs to forge alternative cultural forms. 83 Though punk was acknowledged as a stimulus, it also marked a moment to move beyond, locating proto-new romanticism in an amorphous "post-punk" diaspora of overlapping sounds and styles. As well as various youth cultural forebears-punk, mod, soul boys/girls, glam-wider influences were applied through "looks" codified in films, books, photography, and artworks that began to push further back in time. These, in turn, were swapped
and discussed around shared houses, squats, and, notably, the Ralph West Hall of Residence that served London's art schools and provided a meeting place for many of Blitz's core clientele. 84 Multi-sexuality and blurred gender boundaries were integral to Blitz's aesthetic, which adopted camp and gay cultural signifiers to inform modes and sensibilities that infused pop culture through the 1970s. This was a testament to the impact of gay liberation evident also in Bowie's casual arm around Mick Ronson's shoulder on Top of the Pops in 1972. 85 The embrace of new technologies gave a sense of (post)modernity, ensuring references to the past came with a sheen and a color palette soon to be resonant of the early 1980s. "They are hinting at pre- and postindustrial attitudes," Savage wrote in relation to Spandau Ballet's Culloden chic and Vivienne Westwood's recent designs that evoked eighteenth-century France. "What they are saying is, our society is obsolete, and unconsciously they hint at a new world." 86

No future they say, but must it be that way? 87

Did the "attractive ideology" Lavers hoped for develop? To an extent, perhaps. Strange held fast to the line of self-transformation through style, a way of escaping from the tedium of everyday life. 88 Though drugs and ego soon took their toll, Strange and Visage continued to reimagine their image and construct "modern dance music" that later fed into the innovations of Detroit techno. 89 On one side of new romanticism, Green Gartside of Scritti Politti ditched Karl Marx for Jacques Derrida to concoct the means to explore pop's language and surfaces in ways that both revealed and expressed modes of desire. Aiming for the charts, singles such as "The 'Sweetest Girl'?"
(1981) and "Wood Beez (Pray Like Aretha Franklin)" (1984) offered the most sophisticated take on the "knowing" aspiration that writers such as Paul Morley ascribed to the "new pop," which he discerned across such diverse acts as Scritti, ABC, Adam and the Ants, Haircut 100, and Heaven 17. 90 Malcolm McLaren, having briefly advised Adam Ant on his pop-centric reconfiguration of historical sounds and styles, conceived Bow Wow Wow in 1980, a group built on a situationist-inspired agenda that celebrated unemployment as an antidote to work and extolled the virtues of home-taping and teenage sexuality under the slogan "sun, sea and piracy." Akin to the new romantics' looting of history, Bow Wow Wow came dressed in Westwood ensembles that drew inspiration from pirates and eighteenth-century French Incroyables. Boy George was briefly a member, revealing connections between the Blitz crowd and Vivienne Westwood's shop at 430 King's Road that extended back to Strange's working there as a shop assistant. 91 A spate of short-lived "moral panics" ensued, primarily as a result of the group's singer-Annabella Lwin-being recruited aged 13, before McLaren lost interest and the band broke up. 92 Spandau Ballet, meanwhile, came closest to producing what Sounds described as a "New Romantic manifesto." 93 Alongside Robert Elms, the band's manager Steve Dagger and guitarist Gary Kemp cultivated a sense of expectation around the group, presenting them as "the most contemporary statement that London can offer in terms of fashion and ideas." 94 This meant much purple prose from Elms, evoking images of "oblique romance, an age when machines have lost their mystique and beauty has returned." Spandau Ballet's sound was described as "a soaring, gothic dance music that conjures up everything except rock 'n' roll," a form of "White European Dance Music" that was modern, positive, and passionate. 95 For the sleevenotes to their debut album, Journeys To Glory (1981), cased in the classicist design of
Blitz-friend Graham Smith, Elms imagined "angular glimpses of sharp youth cutting strident shapes through the curling grey of 3 a.m. […] immaculate rhythms […] music for heroes […] the rousing sound on the path towards journeys to glory." 96 Contemporaneity was stated in a variety of ways (Figure 17). First, rock culture was rejected as boring spectacle: succor rather than subversion. Spandau Ballet, like Visage, made music for clubs, embracing new technologies (synths, video, luscious production) and elongated 12-inch dance mixes of their songs. In contrast to gigs, where people stood and watched a band, clubs allowed "kids" to become "stars in their own environment." 97 To this end, Spandau Ballet initially played invite-only performances in unusual venues (Blitz, HMS Belfast, Scala cinema, Birmingham Botanical Gardens), their presence an "applause to the audience." 98 Second, the "cult with no name" was committed to fashion: style came first, with the music following as a "soundtrack to the look" (to quote Smash Hits' Mark Ellen). 99 This, in turn, highlighted style and design as modes of communication beyond language and politics; it allowed for perpetual change and evolution. Paradoxically, it also tied new romantic styles and sensibilities to a youth cultural vanguard of working-class mods and "soul boy freaks." 100 Unlike Strange, whose Visage stood for visual/image (vis), travel (visa), and modernity (age), 101 Elms and Spandau Ballet regularly placed themselves in a subcultural lineage of grassroots youth cults, constructing aristocracies of style that eschewed class hierarchies.
Third, pride and ambition were extolled as an antidote to the negativity of punk's no future. People were encouraged to "make the most of themselves rather than the least of themselves." 102 Elms and Dagger went to the London School of Economics. Nevertheless, they and Kemp were keen to underline the working-class and non-art school credentials of Spandau Ballet. "[There] is a different working-class stereotype to your dustman, punk type," Dagger said, "we threaten that." 103 Thus, Kemp presented himself as the "anti" Jimmy Pursey, referencing the lead singer of Sham 69 as the epitome of lumpen punk rock. 104 Herein, too, came an apparent break from the 1970s, with correlations to Thatcherism being later registered in a shift of discourse. Youthful ingenuity and agency now found articulation as entrepreneurship; creativity became a business venture; autonomy signaled aspiration or self-centered individualism; internecine squabbles powered "healthy competition." 105 Such schtick led to criticism. Most troublingly, references to classicism and "White European Dance Music" raised the specter of fascist flirtation, paving the way for positive reviews in National Front publications and censure from the left and the music press. 106 Alternatively, as youth unemployment rose and riots raged across Britain's inner cities over the summer of 1981, new romantics were seen as frivolous and detached: "let them eat smoked salmon," Paul Morley quipped as he watched Duran Duran perform on the same night as Birmingham burned. 107 Homophobic asides were not uncommon, typically as a launchpad to belittling or mocking new romantic fashion. Musically, attention centered on the relative ordinariness of the records released, as if the product rarely met the promise (or stayed too close to their Bowie, Roxy, John Foxx, Kraftwerk precursors).
108 Come late 1981, therefore, Spandau Ballet and others associated with new romanticism were being dismissed for their narcissism and elitism, presaging the aforementioned associations with Tory-esque aspiration. So, for Ian Penman writing in the NME's end-of-year round-up, "what had started as a jolting reassessment of the value of looking good" had since turned "formal and inflexible." In attempting to mythologize their "Nightclub life" and conceptualize their borrowed styles, the new romantics were left looking as if they "lack[ed] the assurance of anything to call their own." 109 The most astute analysis came from Jon Savage, who was quick to recognize how far new romanticism and new pop more generally were both a response to and a continuation of punk's cultural intervention. Each reacted to what punk had seemingly become, be it the "boy's club socialism" of earnest politicos, the cardboard cutout of yobbo caricature, or the post-punk experimentation that pertained to offer an alternative to mainstream pop. 110 In reply, the new romantics and new pop acts inverted punk's supposed motifs: "Glamour replaces grubbiness, naked elitism [replaces] inverse elitism," escape replaces commitment, "dance [replaces] thought, gold [replaces] grey." 111 Punk's techniques were thereby used not to "change the world, but to change their world." 112 As such, both new romanticism and new pop learned from punk's critique of the music industry to ostensibly work through it toward creative expression, wealth, or recognition. This often shed the controversy and subversive intent associated with punk, but nevertheless took from the "lessons" outlined in McLaren's Sex Pistols' film, The Great Rock 'n' Roll Swindle (1980).
113 The initial presentation of Spandau Ballet, for example, undoubtedly drew inspiration from the McLaren handbook. Finally, new romanticism-like Bow Wow Wow and Adam Ant-continued the cut-up and assimilated stylings that defined punk's beginnings, now leaving the twentieth century to escape time and place altogether. Extending his analysis from the 1970s "Me Decade" described by the American cultural commentator Tom Wolfe, Savage felt "self is now turned into an Art Object, while relations with the outside world are carried out from within a self-constructed cocoon. Self finally retreats into a fantasy vacuum, with its microcassette, video tapes, and replacement of 'nine-to-five' by the micro-chip and, of course, the dole queue." 114 Here, then, Savage further developed a "postmodern" reading of new romantic form and style. Akin to Peter York's prediction of "mush," and following Penman's observation that new romantic "desires and designs" were rarely "harnessed to an innovative edge, but left to float heavily on the surface of things, like oil on water," Savage traced new romanticism through an "age of plunder." 115 Plunder, that is, "not merely [of] post-war fashion but the whole of history," reassembled and recycled endlessly until style replaced meaning and all substance was lost to the empty gestures of "misapplied semiotics." 116 As examples, he pointed to the John Flaxman lithographs used by Graham Smith to suggest Spandau Ballet's classicism and Chris Sullivan's Pablo Picasso pastiches for Blue Rondo à la Turk: record covers where historical artworks were used to "tart up product that has increasingly less meaning" once produced solely for aesthetic purposes and "cut loose" from the "subcultural beginning" that gave pertinence to Bowie, Roxy Music, and the Sex Pistols.
117 In effect, Savage adopted the situationist theories of Guy Debord to reveal culture as a commodity to be endlessly recycled. Crucially, however, he did so by locating pop's postmodern turn next to broader technological changes and shifts in media. He reasserted pop's perennial relationship with money and marketing. 118 He celebrated pop's implicit politics: its ability to resonate with the times; to sometimes transcend "mere entertainment"; to express aspects of youthful "consciousness"; to encourage agency and comment on "the relationship between the dominant (what we are told) and the subconscious (how we feel)." 119 But he recognized too the limits of "overt rebellion" within the context of the media spectacle and a music industry reasserting dominance in the wake of punk. 120 Savage also considered pop music and style in relation to the range of new groups and "street styles" circulating by 1981. A confusion of pop modes reigned, he suggested, pointing to the surfeit of subcultural identities revived or reimagined in the form of ted, mod, skinhead, punk, rockabilly, and "Bowie" looks compiled in i-D but also sold "off-the-peg" via clothing ads at the back of the music press. New romanticism, with its constantly changing and reassembled looks, represented a tipping point-the opening of what Polhemus later described as a "supermarket of style." 121 Pop culture was entering a period of "dissolution," Savage predicted. What had once felt liberating in 1978-80, as style constructed new identities and challenged prescribed meanings, now felt meaningless as pop fragmented "into myriad markets."
122 The facilitators of such change were found in the technological innovations transforming a media that expanded and diversified into the 1980s: cheaper production and advanced equipment, deregulation and globalization, video and MTV, color and gloss. "Just walk through Soho and figure out where the money is coming from," Savage wrote later in 1985 as he pondered the increased coverage (and recycling) of pop music on television and in daily newspapers. 123 Across the interlocking interests of new media industries owned by a gaggle of multinationals, pop culture was key to servicing the multiple vistas of an ever-more vibrant leisure economy. 124 Finally, Savage considered new romanticism with regard to a period framed politically by the social conservatism and free market economics of Margaret Thatcher's government, wondering if youth cultural style served merely as a means to act out the power politics of a particular government or epoch: punk's proletarian chic under Labour, new romantic's aristocratic ostentation under the Tories.
125 This was less convincing. Indeed, Savage himself recognized such analogy to be more structural than actual. An extended analysis might therefore consider a Foucauldian reading of new romanticism, suggesting that the Blitz kids' adopted styles and sensibilities displayed evolving cultural logics and discourses already in the process of reshaping identities and subjectivities into the late twentieth century. Certainly, Savage's assessments appear most astute when considered through a range of metanarratives mediating broader cultural change and crisscrossing through processes of deindustrialization, globalization, and ever-expanding media spectacle. If seen as an emergent culture, the semiotics, discourse, and performance of new romanticism suggest structures of feeling both resonating with and challenging prevailing sociocultural and political mores. A cultural variant of "popular individualism" perhaps, revealing how a growing postwar desire for autonomy and agency was performed and creatively imagined in ways that realized shared modes of understanding, experience, and sensibility? 126 In 1973, David Bowie declared: "Once upon a time, your father, my father, everybody's father I presume, wanted a good job, with a good income, or reasonable income [and] some chance of promotion to secure their family life […] and that's where it ended. But now people want a role in society, they want to feel they have a position, they want to be an individual. And I think there is a lot of searching to find the individual within ourselves." 127 In a post-Bowie sanctioned context of spectacular self-reinvention, the urge toward greater autonomy and self-determination was thereby directed at expressing the extraordinary as well as the "ordinary." 128 Savage himself now considers the new romantic period "fantastically important," particularly with regard to gender fluidity and the evidently gay elements coming to the fore.
129 As a response to both the perceived grimness of the late 1970s and early 1980s and the negation of punk, extravagant style and the lure of the dance floor offered what the BBC's Newsnight recognized in early 1981 as a "positive reaction to a difficult world." 130 As it was, new romantic attempts to avoid definition allowed commentators of the time to apply their own logic and interpretation. Once named, moreover, new romanticism proved set to diffuse into the 1980s, its pose too bold to ever quite fade to grey (Figure 18).
Boys, now the times are changing, the going could get rough 131

Accounts of new romanticism and early 1980s pop tend always to look for the end or start of something: the break from punk; the advent of Thatcherism; the onset of club- or style-culture; the arrival of a new decade. Writing retrospectively in the mid-1990s, Peter York could only see the sounds, styles, and sensibilities of the Blitz kids as Thatcherite in all but name, a culture "driven by entrepreneurial zeal." Blitz was, he maintained, akin to the Centre for Policy Studies founded in 1974 to break the supposed postwar "consensus" and develop ideas to reshape Conservative priorities through and beyond the 1980s: "two [elitist] clubs with a door policy." 132 History was thereby edited, the lens refocused, and the script revised to fit the predominant cultural and political narrative of the period. "Were we Thatcher's children?" Stephen Jones similarly asked much later: "In a way we were […] that entrepreneurial spirit was encouraged by her because it was somehow a fresh start." 133 It's like the 1970s never happened (to paraphrase Dave Rimmer).
Yet, we could equally understand the new romantics as heralding the twilight of a particular political moment. Informed by various postwar youth cultures (mod, soul, glam, punk) and certain artistic and gay milieus, proto-new romantics occupied and repurposed spaces made available through longer processes of deindustrialization and socioeconomic change. They took over backroom clubs, bars, and disused warehouses; they found cheap accommodation in depopulating cities; they forged looks and styles out of cultural images circulating within an expanding but relatively concentrated media spectacle; they signified key aspects of the "popular individualism" redolent of the 1970s, asserting self-defined identities and perspectives that rejected social shibboleths and hierarchies. In constructing and reimagining their image and selves, new romantics embodied key aspects of the sociocultural liberalization apparent from at least the 1960s, most obviously with regard to gender and sexual fluidity. Before AIDS and the conservative push-back of the 1980s, the new romantics signaled a moment of sexual possibility, wherein genders and sexualities blurred or emerged bravely and defiantly to seek new modes of expression. 134 That all this was captured and accelerated by technological innovation was less the result of 1980s economic policy and more the product of new media processes developing over the preceding period, particularly with regard to digital electronics, video, synthesized sounds, printing techniques, and design practice.
135 What was captured, moreover, tended to relate far more to imagery and sensibilities evocative of the years preceding the 1980s. Initially, at least, new romantics suggested louche glamour not hard-nosed business; decadence not healthy efficiency; pop culture as an escape route rather than a career; street or haute couture not "designer" style; stylized or dystopian futurism not capitalist realism; modernist bricolage poised to collapse into a postmodern sea of signs. More to the point, much of what enabled new romanticism as an emergent culture would soon be challenged over the course of the 1980s. 136 Changes to education policy and funding saw university and art school provision realign; the drive toward urban regeneration cut off grassroots affordability and access to creative and habitable spaces; the media and music industries adapted to reassert the control tested by punk (reaffirming their London-centricity as a result); hip hop then house, techno, and rave delivered a very different "1980s" to that seemingly apparent at the start of the decade (albeit with synth-pop claiming some influence on the latter and various erstwhile futurists and new romantics continuing to DJ an array of dance music). New romantic ingenuity was funded and facilitated in large part by a welfare state that saw benefits and student grants slowly eroded or made more circumscribed from 1980. Many came from working-class backgrounds, finding space through creativity, and sometimes also education, to inform wider aspects of British culture from the "bottom-up" or margins. 137 If this particular "groundswell of entrepreneurism" started in the 1970s, as Dylan Jones admits, then it might therefore be said to have receded over the 1980s as Thatcherism took hold. 138 It is wrong, then, to suggest the Blitz club and new romanticism led seamlessly to a world of Groucho clubs and Soho brasseries.
139 Even in 1983, the blue-eyed soul of Spandau Ballet's "True" seemed some way from the disco-throb of their first single, "To Cut A Long Story Short" (1980), let alone the futuristic polysexual playgrounds of Billy's and Blitz. Better, perhaps, to recognize new romanticism neither as a "start" nor an "end" but as a moment of coalescence en route to dissipation. If, as Williams suggests, emergent cultures contain both "new" and "residual" elements, then the poses, sounds, and sensibilities of the Blitz kids and their equivalents across the UK necessarily bore traces of the past and portents of the future. Be it consciously or unconsciously, new romantics, and the wider new pop and style-based cultures of which they formed part, sought to establish new practices and relationships, reordering and repositioning elements from the past and present to reflect and embody structures of feeling both dissolving and emerging. These, in turn, would be shaped and channeled by ongoing processes of socioeconomic change, ensuring the recognizable signifiers of new romanticism were reframed and reinterpreted as priorities, language, and polity shifted.
Once determined, new romanticism disaggregated and dispersed through various cultural strands, permeating into clubs, fashion, pop music, dance music, film, photography, and design. In many ways, therefore, what became known as new romanticism ceased to exist as soon as it was named, the post-Blitz parade of clubs, sounds, and styles denying the culture a focal point. For a moment, however, spectacular selves were created and consciously (re)invented through an amalgam of visual culture and inspired consumption. Across the UK, worlds of possibility were envisaged and constructed by young people in liminal spaces soundtracked by a fusion of glam, punk, disco, and electronics. In living their dreamlife through their nightlife, (proto)new romantics sought to transcend the mundane and embrace the past, future, in extreme.
New species of Sorbus (Rosaceae) for the flora of the Nakhchivan Autonomous Republic (Azerbaijan)
As a result of long-term research, together with the analysis of herbarium materials and literature sources during expeditions to various areas of the Nakhchivan Autonomous Republic in 2004–2017, the following species of rowans, Sorbus albovii Zinserl., S. armeniaca Hedl., S. buschiana Zinserl., S. caucasica Zinserl., S. fedorovii Zaikonn., S. kusnetzovii Zinserl., S. migarica Zinserl. and S. tamamschjanae Gabr., were identified as new for the flora of the Nakhchivan Autonomous Republic. The article includes information about the synonyms of the species, patterns of distribution depending on altitudinal zones, and flowering and ripening periods.
Materials and methods
In order to clarify the current status of the representatives of the genus Sorbus L. in the flora of the Azerbaijan Republic, we checked the herbaria kept at the Herbarium Fund of the Institute of Botany of the Azerbaijan National Academy of Sciences, the Institute of Bioresources of the Nakhchivan Branch of the Azerbaijan National Academy of Sciences, and Nakhchivan State University. We also performed a comparative analysis of herbarium samples from the websites of various institutions and organizations, and determined the status of the genus Sorbus L. during the field expeditions of 2004-2017 (Ibragimov, 2008; Askerov, 2011, 2016; Talibov & Ibrahimov, 2013). We used the data from (Zinzerling, 1939; Grossheim, 1952; Konovalov, 1954; Prilipko, 1954; Gachechiladze, 1965; Trees…, 1970; Areas…, 1980) for identification of the species.

1. S. albovii Zinserl. It is either a tree or a shrub (Fig. 1a). Buds are naked or slightly hairy. Leaves are obovate or elliptic in shape, slightly narrowed toward the base or rounded; the apex is acute or obtuse, usually sharp-tipped; leaves are 7-10 cm long and 4-7 cm wide. Lateral veins number 8-11 pairs; the upper surface is naked (initially slightly hairy along the veins), the lower surface is green and slightly hairy. The margins are serrate, toward the apex faintly double-serrate, with sharp teeth reaching the leaf blade. The sepal is felt-haired, with sharp triangular teeth. Petals are egg-shaped. Fruits are round or oval, red, turning green afterwards. Blooms in May-June and gives fruits in August-September.
Habitat. Distributed in the middle and high mountain belts, among sparse forests and shrubs at 1,800-2,000 (2,200) m above sea level.
Distribution. Distributed in the sparse forest surrounding Nurgut village of the Ordubad district of the Nakhchivan Autonomous Republic, along with oak, hawthorn or pear species, or separately, 18.VII.2012, T. H. Talibov, A. M. Ibrahimov; in the oak forest in the area of Bichenek village of Shahbuz district, 04.VII.2014, T. H. Talibov, A. M. Ibrahimov. This species was first described by Y. D. Zinzerling (1939) on the basis of herbarium samples collected in 1929. While systematizing the species of the genus Sorbus L., Y. D. Zinzerling accepted S. albovii Zinserl. as a separate species; however, E. C. Gabrielyan (1978) identified it as a synonym of S. subfusca (Ledeb.) Boiss. Although I. T. Zaikonnikova (1980) also noted the similarity of S. albovii Zinserl. to S. subfusca (Ledeb.) Boiss., a species described on the basis of samples collected in the North-West Caucasus (Mount Abago), in another article (Zaikonnikova, 1975) she accepted it as a separate species. According to I. T. Zaikonnikova (1980), the separation of S. albovii Zinserl. and S. subfusca (Ledeb.) Boiss. is confirmed by the differences in their geographical distribution and chromosome numbers. Although S. albovii Zinserl. is found almost all over the Caucasus, the range of S. subfusca (Ledeb.) Boiss.
is limited to the western part of the Caucasus. Also, S. albovii Zinserl. is tetraploid (2n = 68), while S. subfusca (Ledeb.) Boiss. is diploid (2n = 34). These species also differ in a number of morphological characters (shape, size, leaf margin, etc.). For this reason, it is more appropriate to treat S. albovii Zinserl. as a separate species, as described by S. K. Czerepanov (1995). As noted by T. I. Zaikonnikova (1975), S. albovii Zinserl. is found only in Dagestan and the Nakhchivan Autonomous Republic. It is a shrub or stunted tree (Fig. 1b). The leaves are egg-shaped, elliptic or ellipsoid in form, narrowed toward the base. The apex is sharp, rarely blunt; leaves are 6-8 cm long and 3.5-5.0 cm wide, the margin with 5-7 shallow lobes (the lower lobes reach 1/2-1/3 of the width of the leaf blade). There are 30-36 sharp teeth. The upper surface is dark green and naked; the lower surface is grey or white with thick hairs. Lateral veins number 9-10 pairs and are clearly visible from beneath the leaf. The inflorescence is multiflorous. The sepal teeth are sharply triangular in shape. Petals are white, egg-shaped. Fruits are 1.0-1.2 cm long and 0.8-1.1 cm wide, oval or round, compressed from the sides, gathered in the scutellum singly or in 3-7 pieces. Mature fruits are red and turn blue when they dry. Blooms in May-June and gives fruits in September-October. Lectotypus. Karabach orient. in extreme margine sylvarum versus cucumen m. Kirs, 18.IX.1829, Szovits (LE, isolect. BM).
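The two quantitative diagnostics cited above for separating S. albovii from S. subfusca (chromosome number and geographic range) can be restated in a few lines of code. This is purely an illustrative sketch, not anything from the source: the dictionary layout, the function name, and the assumed Sorbus base chromosome number x = 17 are our own framing.

```python
# Illustrative restatement of the diagnostic contrast reported above.
# The data structure and helper are hypothetical, not a flora database.

DIAGNOSTICS = {
    "Sorbus albovii": {"chromosomes_2n": 68, "ploidy": "tetraploid",
                       "range": "almost all of the Caucasus"},
    "Sorbus subfusca": {"chromosomes_2n": 34, "ploidy": "diploid",
                        "range": "western Caucasus only"},
}

def ploidy_from_count(count_2n: int, base_x: int = 17) -> str:
    """Infer ploidy level from a somatic (2n) chromosome count,
    assuming the base number x = 17 for Sorbus."""
    levels = {2: "diploid", 3: "triploid", 4: "tetraploid"}
    multiple = count_2n // base_x
    return levels.get(multiple, f"{multiple}x")

# The inferred ploidy agrees with the counts quoted in the text.
for name, traits in DIAGNOSTICS.items():
    assert ploidy_from_count(traits["chromosomes_2n"]) == traits["ploidy"]
```

The point of the sketch is simply that the two species differ in independent, checkable characters (cytology and geography), which is the argument made above for treating them separately.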
Habitat. It is found on stony-rocky slopes, in arid spruce forests and bushes at 1,500-2,300 m above sea level in the middle and high mountain belts.
Distribution. In the sparse forest surrounding Nurgut village of the Ordubad district of the Nakhchivan Autonomous Republic, in conjunction with oak, hawthorn, apple, pear, etc., or individually. The species differs from S. persica in the form of the leaves and the depth of the lobes (in S. persica this depth is 1/4 to 1/5 of the leaf), and from S. caucasica in the shape and smaller size of the leaves, the form of the leaf base and apex (in S. caucasica the lobes reach 1/2-1/3 of the width of the leaf blade) and the entire leaf lobes. It is a low tree or shrub of about 4-6 meters in height (Fig. 2a). The leaves are 10-11 cm long; on fruiting shoots they are broadly ovate-ellipsoid, and on sterile shoots lanceolate. Lateral veins number 10-11 pairs; the margins are double-serrate. The lower surface is grey-greenish with felt hairs. The leaf base is wedge-shaped. While Y. D. Zinzerling (1939) treated S. buschiana Zinserl. as a separate species, E. C. Gabrielyan (1978) identified it as a synonym of S. subfusca (Ledeb.) Boiss. Although I. T. Zaikonnikova (1980) also noted the similarity of S. albovii Zinserl. to S. subfusca (Ledeb.) Boiss., the species described on the basis of samples collected in the North-West Caucasus (Mount Abago), in another article (Zaikonnikova, 1975) she accepted it as a separate species. According to I. T. Zaikonnikova (1980), while S. buschiana Zinserl. is similar to S. albovii Zinserl. in the shape of its leaves and morphological features, it still differs in its thick, leathery leaves, greyish beneath, and its densely haired sepals and fruits. S. K.
Czerepanov (1995) also treated it as a separate species. It is a low tree or shrub of about 4-6 meters in height (Fig. 2b). The leaves are obovate, round, broadly elliptic or in some cases long-elliptic, wedge-shaped toward the base. The apex is rounded or pointed; leaves are (8) 10-12 (15) cm long and 6-11 cm wide. The margin has 5-7 shallow lobes (the lobes reach up to 1/4 (1/3) of the width of the leaf). There are 30-35 pointed teeth; the upper surface is dark green and bare, and the lower surface is densely grey or white felt-haired.
The leaf lobes are entire on the inner side, or with only 1-2 small teeth in the upper part. Lateral veins number 7-9 pairs; the veins on the lower surface of the leaf are clearly visible and felt-haired. The flower group is multiflorous, and the petiole is felt-haired. The sepal is felt-haired, with sharp triangular teeth. The petals are white and obovate. The fruits are 1.0-1.4 cm long and 0.6-1.1 cm wide, slightly oval or slightly elongated, gathered in 7-12 (20) pieces in the scutellum. The ripe fruit is red and bare, and it turns blue when it dries. Red-brown seeds are 0.5-0.6 cm long. The taste is not sweet and is astringent. Blossoms in May-June and fruits ripen in August-September. It is a shrub or tree up to 2 m in height (Fig. 3a). The leaves are 7-10 cm long, obovate or elliptic, with double-serrate, sometimes deeply toothed margins. The apex is pointed, and the blade narrows toward the base in a wedge shape. The veins on the lower side are clearly visible. The petiole is red-brown. The flowers are collected in a scutellate flower group. The sepal is felt-haired, inflated, with obtuse teeth. Petioles are naked. The petals are white, broadly elliptic, almost twice the size of the sepal. Fruits are small, slightly rounded, red, then becoming dark blue. Blossoms in June, and fruits ripen in September-October.
Habitat. It is spread in the middle and high mountain belts, along the upper boundary of the forest at 1,500-2,300 m above sea level, on stony-rocky slopes, in arid spruce woods and shrubs.
Distribution. Around the Hadi Kayiib and Guzuyatan areas of Akhura village of the Sharur region of the Nakhchivan Autonomous Republic, and among forest bushes, 23.VI.2009, T. H. Talibov, A. M. Ibrahimov. The species was described by T. I. Zaikonnikova (1974) on the basis of herbarium specimens collected from Ossetia. E. C. Gabrielyan (1978) considers S. fedorovii Zaikonn. a synonym of S. subfusca (Ledeb.) Boiss. T. I. Zaikonnikova, however, not regarding S. subfusca (Ledeb.) Boiss. as a polymorphic species, described S. fedorovii Zaikonn. on the basis of various characters of shape variation. Analysis of the collected herbarium samples showed that S. fedorovii Zaikonn. differs in the shape, size and type of its leaves, pubescence, margin density, fruits and so on. It is a shrub 4-5 m in height (Fig. 3b). The buds are felt-haired or rarely naked. The leaves are broadly obovate or elliptic, the leaf base narrowing to a wedge shape. The apex is sharp or rarely blunt; leaves are 5-8 cm long and 4.0-6.5 cm wide. The upper side is bare and green, the lower side greyish with thick felt hair. The margin is double-serrate; lateral veins number 7-10 pairs. The flower stalk is white felt-haired; the sepal is white felt-haired, with triangular teeth. Petals are white, rounded. The fruit is almost round, 1.3 cm long and 1.1 cm wide, gathered 11-16 together in the scutellum. The mature fruit is red, bright and slightly hairy. Dark brown seeds are 0.6-0.7 cm long. Blossoms in May-June and fruits ripen in September-October.
Although A. M. Askerov stated in his Conspectus of the Flora of Azerbaijan (Askerov, 2011) that S. kusnetzovii Zinserl. occurs in Azerbaijan, he subsequently withdrew this view in his later book (Askerov, 2016). According to T. A. Gasimova, Z. S. Aliyeva and T. D. Safguliyeva (2014), S. kusnetzovii Zinserl. is spread in the middle and high mountain belts of the Greater Caucasus at 1,200-2,400 m above sea level among oak woods, in sparse forests, on open rocky slopes, and in shrubs.
Considering that S. kusnetzovii Zinserl. is a rare and endangered species, T. S. Mammadov, E. O. Iskandar and T. H. Talibov included it in the book on rare trees and shrubs of Azerbaijan (2016) and described methods of its conservation. It is a shrub 0.5-2.0 m in height (Fig. 4a). The buds are barely felt-haired. Leaves are more or less leathery and round, (5) 7-9 (10) cm long and (4.5) 6-7 (8) cm wide; the apex is blunt. Lateral veins number 8-10 pairs. The upper surface, except along the veins, is naked or barely hairy. The lower surface is covered with thick white felt hair, while the surface of the veins is barely haired; the veins are therefore clearly visible by their dark color. The leaf margin is entire in the lower part and serrate toward the apex (over 1/8-1/3 of its length). The teeth are small and sharp, numbering 20-25 on each side. Leaf and flower stalks are short and white felt-haired. The sepal is white felt-haired, with triangular teeth bending down after flowering. The mature fruit is dark red, 1.1-1.3 cm long and 1.0-1.2 cm wide. Blossoms in May-June and fruits ripen in September-October.
Habitat. In the middle and high mountain belts, in oak forests at 1,800-2,000 m above sea level, along the upper border of the forest, on rocky slopes, in sparse arid forests and shrubs on limestone soils.
Distribution. It is spread in the sparse forest surroundings of Nurgut village in the Nakhchivan Autonomous Republic, either together with juniper, barberry, oak, hawthorn, pear, apple, hips, etc., or individually, 02.XI.2011, T. H. Talibov, A. M. Ibrahimov.
It is closest to S. graeca and differs by its shorter petioles: in S. migarica Zinserl. the petiole is (0.2) 0.5-0.7 (1.0) cm long, while in S. graeca it is 1.0-1.5 or 1.5-2.0 cm. The teeth of the leaf margin begin between the middle and the apex of the leaf, not from the middle of the leaf (or even from the middle toward the base) as in S. graeca.
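As a purely hypothetical illustration of the petiole-length character described above, the thresholds below are taken from the ranges quoted in the text, but the function and the single-character decision rule are our own sketch, not a published key; a real determination would combine several characters (serration pattern of the margin, etc.).

```python
def petiole_suggests(length_cm: float) -> str:
    """Very rough single-character check based on the petiole lengths
    quoted above: S. migarica (0.2) 0.5-0.7 (1.0) cm versus
    S. graeca 1.0-2.0 cm. Illustrative only; the ranges meet at 1.0 cm,
    so that value cannot separate the two species."""
    if length_cm < 1.0:
        return "S. migarica"
    if length_cm > 1.0:
        return "S. graeca"
    return "indeterminate (ranges overlap at 1.0 cm)"
```

The overlap at 1.0 cm is exactly why the text pairs the petiole character with the position where marginal serration begins.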
According to I. T. Zaikonnikova, S. migarica Zinserl. was recorded as S. graeca var. cuneata Zinserl. on the basis of subsequent analysis of herbarium samples collected from the Caucasus.
A similar determination is also met among the samples stored in the Herbarium Fund of the Institute of Botany of the Academy of Sciences of Azerbaijan, where samples are shown as collected in Mazra and Urmus villages (16.08.1933, Karyagin) and the Urmus village area (02.08.1933, Isayev; 15.08.1933, Karyagin) of the Ordubad region. E. C. Gabrielyan (1978) therefore listed S. migarica Zinserl. as a synonym of S. umbellata (Desf.) Fritsch var. orbiculata Gabr. (S. graeca var. orbiculata Zinserl.). However, analysis of the literature sources and of the collected herbarium samples led to the conclusion that it is reasonable to accept S. migarica Zinserl. as a separate species, owing to the variety of its morphological characters and its geographical range, as was mentioned by S. K. Czerepanov (1995). I. T. Zaikonnikova (1973) also mentioned that S. migarica Zinserl. is not a narrowly endemic species, as described by Y. D. Zinzerling (1939). It is a shrub or low tree 2-5 m in height (Fig. 4b). The leaves are egg-shaped or elliptical, 2.5-9.0 cm long and 1.5-5.0 cm wide, narrowed in a wedge shape toward the base; the apex is blunt. The upper surface is dark green and bare, the lower surface densely grey or white felt-haired. The margin is slightly serrate, with noticeably deep short lobes. Lateral veins number 7-9 pairs. The flower group is multiflorous and peltate. The petiole is initially hairy, later naked. The sepal teeth are triangular, short and sharp. The petals are white and ovoid. The fruits are 1.2 cm long and 1.1 cm wide, broadly elliptic, collected 5-11 (18) together in the scutellum. The mature fruit is bright orange and shiny. Light brown seeds are 0.6 cm long and 0.2 cm wide. The taste is not sweet and is astringent. Blossoms in May-June and fruits ripen in September-October.
Habitat. In the middle and high mountain belts, in oak forests at 1,800-2,400 m above sea level, along the upper limit of the forest, on rocky slopes, in sparse arid forests and shrubs on limestone soils.
Distribution. In the sparse forest around the Lizbirt area of Garagush mountain in the Sharur region of the Nakhchivan Autonomous Republic, together with oak, hawthorn, pear, hips, nuts, etc., or individually, 05.X.2011, T. H. Talibov, A. M. Ibrahimov. S. tamamschjanae Gabr. differs from S. armeniaca Hedl. by its leaves narrowing in a wedge shape (not rounded) toward the base, by the shape and size of its lobes, pubescence and venation, and by its orange (not red) fruits; it differs from S. persica Hedl. by its rhombic-elliptic leaves, dark green above and greenish-white hairy beneath, with short sharp lobes, and by its bent fruit stalks and the colour of its fruits.
Although rowan trees are widely distributed in the forests of the Nakhchivan Autonomous Republic, we recorded that they occur mostly in the high and middle, and sometimes lower, mountain belts at the edges of forests, mostly individually or in small groups. We suggest that rowan species are subdominant tree plants in the arid and sparse forests.
Fig. 1. Herbarium specimens of Sorbus albovii Zinserl. (a) and S. armeniaca Hedl. (b)

2. S. armeniaca Hedl., Monogr. d. Gatt. Sorbus (1901) 69; C. K. Schneid., Ill. Handb. d. Laubholzk. I (1906) 693; Grossg., Flora of the Caucasus, IV (1934) 289; Zinserl. in Flora USSR, IX (1939) 396; Grossg., The description of the plants of the Caucasus (1949) 74; Grossg., Flora of the Caucasus, V (1952) 36; Prilipko, Flora of Azerbaijan, V (1954) 58; Trees and Shrubs of Azerbaijan, III (1970) 45. Distribution. In the sparse forest surrounding Nurgut village of the Ordubad district of the Nakhchivan Autonomous Republic, in conjunction with oak, hawthorn, apple, pear, etc., or individually, 18.VII.2012, T. H. Talibov, A. M. Ibrahimov; oak forest in Bichanak village of Shahbuz region, 04.VII.2014, T. H. Talibov, A. M. Ibrahimov.
Typus. Delvars, inter pagas N. Ermani et Schavlochovo, in schistosis, alt. 1850-2200 m, E. et N. Busch (LE). Habitat. It is spread in the middle and high mountain belts, on steep rocky slopes at 1,800-2,200 m above sea level, in arid spruce forests and bushes. Distribution. In the sparse forest surrounding Nurgut village of the Ordubad district of the Nakhchivan Autonomous Republic, along with oak, hawthorn and pear species, or individually, 16.VIII.2012, T. H. Talibov, A. M. Ibrahimov; oak forest in Bichanak village of Shahbuz region, 09.VIII.2013, T. H. Talibov, A. M. Ibrahimov.
Fig. 2. Herbarium specimens of Sorbus buschiana Zinserl. (a) and S. caucasica Zinserl. (b)

4. S. caucasica Zinserl. in Her. Inst. Bot. AS USSR, IV (1923) 17-18; Grossg., Flora of the Caucasus, IV (1934) 288; Zinserl. in Flora USSR, IX (1939) 395; Kolakovsk., Fl. Abkh. II (1939) 297; Sosnovsk. in Flora of Georgia, V (1949) 352; Grossg., The description of the plants of the Caucasus (1949) 74; Grossg., Flora of the Caucasus, V (1952) 34; Prilipko, Flora of Azerbaijan, V (1954) 58; Trees and Shrubs of Azerbaijan, III (1970) 45. - S. aria v. intermedia Akinf., Fl. Centr. Kavk. 1 (1894) 159. - S. aria v. incisa Al'bov in Proceedings of the Tiflis Botanical Garden, 1 (1895) 72. - S. scandica auct. fl. cauc., non Fries. - S. woronowii auct.
Typus. Caucasus, Beshtau, 1,300 m above sea level, 23.V.1887, I. Akinfiev (holo, LE). Habitat. It is spread in the middle and high mountain belts, in woods at 1,800-2,200 m above sea level, along the upper border of the forest, on rocky slopes, among bushes on limestone soils. Distribution. Oak woods in Bichanak village of Shahbuz region of the Nakhchivan Autonomous Republic, 08.IX.2009, T. H. Talibov, A. M. Ibrahimov. S. caucasica Zinserl. is close to S. armeniaca Hedl. and S. persica Hedl. and easily hybridizes with them. Considering that S. caucasica Zinserl. is a rare and endangered species, T. S. Mammadov, E. O. Iskandar and T. H. Talibov included it in their book of rare trees and shrubs (2016) and described methods of its protection. 5. S. fedorovii Zaikonn., Botan. Journal, 1974, 59, 11: 1605. - S. subfusca auct. non Boiss.
Fig. 3. Herbarium specimens of Sorbus fedorovii Zaikonn. (a) and S. kusnetzovii Zinserl. (b)

6. S. kusnetzovii Zinserl. in Flora USSR, IX (1939) 397 et Add. VIII: 496; Grossg., The description of the plants of the Caucasus (1949) 74; Grossg., Flora of the Caucasus, V (1952) 33; Prilipko, Flora of Azerbaijan, V (1954) 56; Gachechiladze, The Dendroflora of the Caucasus, IV (1965) 116; Zaikonnikova, Botan. Journal, 1980, 65, 9: 1228; Zaikonnikova, Botan. Journal, 1982, 67, 1: 101; Trees and Shrubs of Azerbaijan, III (1970) 43. Typus. Caucasus occidentalis, Reservatum Publicum Caucasicum, in rupibus et pratulis in declivio australi montis Zakan, 12.VII.1930, A. I.
Leskov (LE). Habitat. It is spread in the middle and high mountain belts, at 1,700-2,300 m above sea level in oak woods, in sparse forests on slopes, on open rocky slopes, and among shrubs. It is found alone or in groups in sparse oak woods or among shrubs, along with Rhamnus cathartica L., Viburnum lantana L., Lonicera iberica Bieb., Sorbus graeca (Spach) ex Schauer, S. aucuparia L., Euonymus verrucosus Scop., Berberis iberica Stev. & Fisch. ex DC., Juniperus oblonga Bieb. and others. Distribution. Oak forest in Bichenek village of Shahbuz district of the Nakhchivan Autonomous Republic, 09.VIII.2013, T. H. Talibov, A. M. Ibrahimov; in the sparse forest surrounding Nurgut village of the Ordubad region, in combination with oak, hawthorn, apple, pear, hips, and so on, or alone, 04.IX.2015, T. H. Talibov, A. M. Ibrahimov.
Fig. 4. Herbarium specimens of Sorbus migarica Zinserl. (a) and S. tamamschjanae Gabr. (b)
7. S. migarica Zinserl. Zinserl. in the Flora USSR, IX (1939) 398 et Add. VIII: 496, tabl. 26, 3 (fol.); Grossg., The Description of the Plants of the Caucasus (1949) 74; Sosnovsk. in Flora of Georgia, V (1949) 353; Grossg., Flora of the Caucasus, V (1952) 34, tabl. 6, 2, sub S. graeca (excl. petiolum); Gachechiladze, the Dendroflora of the Caucasus, IV (1965) 115, fig. 19, 1 (fol.). - S. aria auct. non Crantz: Albov, Proceedings of the Tiflis Botanical Garden, 1 (1895) 70, r. r. (Prodr. Fl. Colch.). - S. graeca auct. non Hedl.: Zaikonnikova, Botan. Journal, 1973, 10: 167. It is a shrub of 0.5-2.0 m in height (Fig. 4a). The buds are barely felt-haired. The leaves are more or less leathery and round-shaped, (5) 7-9 (10) cm long and (4.5) 6-7 (8) cm wide, with a blunt apex. The number of side veins is 8-10 pairs. The upper surface, except for the veins, is naked or barely hairy. The veins of the lower surface are covered with thick white felt hair, while the surface between the veins is barely haired; the veins are therefore clearly visible by their dark colour. The leaf margin is entire toward the base and serrate along the upper (1/8-1/3) part. The serrations are small and sharp, numbering 20-25 on each side. The leaf and flower stalks are short and white felt-haired. The sepal is white felt-haired, triangular, serrate, and bends down after blossoming. The mature fruit is dark red, 1.1-1.3 cm long and 1.0-1.2 cm wide. It blossoms in May-June and fruits ripen in September-October. Typus. Megrelia, mons. Migaria, 21.VII.1936, P. Panjutin (LE). Habitat. In middle and high mountain ranges, in oak forests at altitudes of 1,800-2,000 m above sea level, along the upper border of the forest, on rocky slopes, and in sparse arid forests and shrubs on limestone lands. Distribution. It is spread in the sparse forest surroundings of Nurgut village in the Nakhchivan Autonomous Republic, either together with juniper, barberries, oak, hawthorn, pear, apple, hips, etc.,
or individually, 02.XI.2011, T. H. Talibov, A. M. Ibrahimov. It is closer to the species S. graeca and differs by its smaller petiolate leaves. In S. migarica Zinserl. the length of the petiole is (0.2) 0.5-0.7 (1.0) cm, while in S. graeca it is 1.0-1.5 or 1.5-2.0 cm in length. The serrations of the leaf margin are confined to the apical part of the leaf, not extending from the centre of the leaf (or even from the centre to the base) as in S. graeca. | 2018-12-29T19:50:24.244Z | 2018-05-06T00:00:00.000 | {
"year": 2018,
"sha1": "99cb311c1cf5917a58a42ce706e18cbd08a758fd",
"oa_license": "CCBY",
"oa_url": "https://ecology.dp.ua/index.php/ECO/article/download/790/753",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "99cb311c1cf5917a58a42ce706e18cbd08a758fd",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Geography"
]
} |
248279289 | pes2o/s2orc | v3-fos-license | Relationship Between Age-Related Changes in Skeletal Muscle Mass and Physical Function: A Cross-Sectional Study of an Elderly Japanese Population
Skeletal muscle mass and muscle strength are positively correlated, but the relationship between grip strength and global muscle strength is controversial. This study aimed to clarify the changes in site-specific skeletal muscle mass by age group and determine the relationship between site-specific, age-related changes in skeletal muscle mass and physical function in community-dwelling elderly people in Japan. The participants were divided into age groups of five-year intervals (65-69 years, 70-74 years, 75-79 years, and ≥80 years) and were also categorized by sex. The skeletal muscle mass of the upper limbs, lower limbs, and trunk was measured using multifrequency bioelectrical impedance analyzers (InBody 430 (Biospace Co., Ltd., Seoul, Korea) and InBody 470 (InBody Japan Inc., Tokyo, Japan)). For physical function assessment, we measured grip strength, quadriceps strength, sit-up count, sit-and-reach distance, and standing time on one leg with eyes open and performed the timed up and go (TUG) test. The results showed that skeletal muscle mass decreased with age regardless of sex at all measured sites. Furthermore, a partial correlation analysis adjusted for age, physical constitution, and the presence/absence of exercise habits revealed that the highest correlation was between skeletal muscle mass at all sites and grip strength. Thus, grip strength may be used as a representative measure of whole-body skeletal muscle mass in Japanese people as well.
Introduction
Changes in body composition are strongly associated with physical disability among the elderly [1,2]. Previous studies reported that skeletal muscle mass decreases at a rate of approximately 5% every 10 years after the age of 30 years and that the rate of loss accelerates after the age of 60 years [3,4]. Furthermore, elderly people aged ≥75 years reportedly lose skeletal muscle mass at an annual rate of approximately 1% [5]. According to previous imaging studies on the loss of skeletal muscle mass, greater and more rapid ageassociated loss of skeletal muscle mass occurs in the lower limbs than in the upper limbs [6][7][8].
Skeletal muscle mass and muscle strength are positively correlated [9]. Many previous studies in elderly populations utilized grip strength as a measure of global muscle strength. Porto et al. observed a significant association between grip strength and muscle strength pertaining to 10 muscle groups [10]. However, it has been reported that skeletal muscle mass alone does not determine muscle strength [11]. Moreover, the authors of another study suggested caution when referring to the representativeness of grip strength as a predictor of global muscle strength [12]. Thus, no consensus has been reached on this matter.
As skeletal muscle mass varies according to race, it is necessary to investigate age-associated changes in skeletal muscle mass and the relationship between skeletal muscle mass and physical function including muscle strength in a Japanese population. Therefore, this study aimed to elucidate age-associated changes in site-specific skeletal muscle mass and determine the relationship between site-specific skeletal muscle mass and physical function in community-dwelling elderly Japanese people.
Participants
This was a cross-sectional study. We recruited participants from among the elderly people living in Yasu City, Shiga Prefecture, who were registered with a local program intended to prevent long-term care dependency and provide emotional support. Among the elderly people aged ≥65 years who took part in the annual physical fitness tests between 2015 and 2019, we recruited those with no long-term care requirement certifications, no history of central nervous system (CNS) disease, and no suspected cognitive impairment (i.e., Mini-Mental State Examination score ≥24). For participants who took part in multiple years, we used the data from the year of their first participation. The participants were divided into age groups of five-year intervals (65-69 years, 70-74 years, 75-79 years, and ≥80 years) and categorized by sex. Participant characteristics were recorded.
Physical fitness tests were conducted under the supervision of public health nurses. The participants received a full explanation in advance of the measurements to be acquired and how the data would be managed. All participants provided written informed consent. This study was approved by the Research Ethics Committee of Kyoto Tachibana University (approval number: 17-14).
Assessments
In addition to skeletal muscle mass, we evaluated grip strength, quadriceps strength, sit-up count, sit-andreach distance, and standing time on one leg with eyes open. The timed up and go (TUG) test was also performed.
For the measurement of skeletal muscle mass, we used multifrequency bioelectrical impedance analyzers (InBody 430 (Biospace Co., Ltd., Seoul, Korea) and InBody 470 (InBody Japan Inc., Tokyo, Japan)). The muscle mass (kg) of the upper and lower limbs, which constitute a proportion of the lean soft tissue mass (kg), was calculated as the total of both (the right and left) upper and lower limbs, respectively, excluding the muscle mass of the trunk.
Grip strength was measured using a digital grip dynamometer (T.K.K.58401, Takei Scientific Instruments Co., Ltd., Niigata, Japan) [13]. The grip width was adjusted such that the proximal interphalangeal joint of the index finger was flexed at 90°. The participants were instructed to stand in an upright position with the feet placed shoulder width apart and arms hanging by the sides of the body. They were then instructed to grip with maximum effort without the dynamometer touching their bodies. Two measurements were taken for the right-hand and left-hand grips, and the maximum values (kg) were considered to be representative.
Quadriceps strength was measured while the participants were in a sitting position with knees flexed at 90° [14]. Two measurements were acquired for each lower limb, and the maximum values (kg) were considered to be representative.
Sit-ups were performed in the supine position with both arms crossed in front of the chest and both knees flexed at 90°. We counted the number of times that both elbows touched both thighs during a period of 30 seconds.
To measure the sit-and-reach distance, we used a specialized digital device (T.K.K.5412, Takei Scientific Instruments Co., Ltd., Niigata, Japan). Two measurements were acquired, and the maximum values (cm) were considered to be representative.
The standing time on one leg with eyes open was measured using a digital stopwatch with an upper limit of 120 seconds. Two measurements were acquired for each leg, and the longest time was regarded as a representative value. The participants were instructed to stand barefoot, keep both upper limbs lightly touching the sides of the body, and maintain their line of sight on a fixation point ahead, 2 m above the ground.
The TUG test was conducted based on the Shumway-Cook method [15]. At the beginning of the measurement, the participants sat on a chair with their backs leaning into the backrest and their hands on their knees. At the signal of the examiner, the participants were instructed to stand up, walk 3 m forward as quickly as possible, cross a line marked on the floor, turn around, walk back, and sit back down. The time required for the participants to perform these actions was measured with a digital stopwatch. The test was performed twice, and the shortest time (seconds) was regarded as a representative value.
Statistical analysis
To evaluate skeletal muscle mass according to age, we first examined the interaction between skeletal muscle mass at different sites (upper limbs, lower limbs, and trunk) and sex. As a result, an interaction was established for all items; hence, a trend test by sex was performed for the comparison of skeletal muscle mass according to age. To clarify the relationship between site-specific skeletal muscle mass and physical function, we performed a partial correlation analysis by sex using age, body mass index (BMI), and the presence/absence of exercise habits (at least twice a week with each session lasting approximately 30 minutes), which have been reported to be related to skeletal muscle mass in previous studies, as covariates. SPSS version 25 (IBM Corp., Armonk, NY, USA) was used for all analyses. The significance level was set to 5% in all analyses. Statistical significance was determined by two-tailed tests.
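The covariate-adjusted analysis described above amounts to a residual-based partial correlation. The sketch below illustrates the idea in Python (the study itself used SPSS); the data are entirely synthetic, with BMI as the only covariate for brevity.

```python
import numpy as np

def partial_corr(x, y, Z):
    """Pearson correlation of x and y after regressing both on covariates Z."""
    Z = np.column_stack([np.ones(len(x)), Z])          # design matrix with intercept
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]  # residual of x given Z
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]  # residual of y given Z
    return np.corrcoef(rx, ry)[0, 1]

# Synthetic illustration: muscle mass and grip strength both depend on BMI
# but have no direct link; partialling out BMI removes the shared effect.
rng = np.random.default_rng(0)
bmi = rng.normal(23, 3, 500)
muscle = 0.8 * bmi + rng.normal(0, 1, 500)
grip = 0.5 * bmi + rng.normal(0, 1, 500)
r_raw = np.corrcoef(muscle, grip)[0, 1]                  # inflated by BMI
r_part = partial_corr(muscle, grip, bmi.reshape(-1, 1))  # near zero
print(round(float(r_raw), 2), round(float(r_part), 2))
```

In the study, age and exercise habits would enter `Z` as additional columns alongside BMI.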
Participant characteristics
Among the elderly persons who participated in the physical fitness tests, 684 people (148 men and 536 women) were included in the analysis after the exclusion of those who met the exclusion criteria. Table 1 presents participant attributes by sex. The mean age was higher among men than among women. However, there was no difference in physical constitution (BMI) between men and women. There was also no sex difference among participants receiving treatment for hypertension. However, a higher proportion of women were receiving dyslipidemia treatment, and a higher proportion of men were receiving diabetes treatment.

Site-specific skeletal muscle mass and physical function by age group

Table 2 shows the results of the trend tests by sex for the measurements of site-specific skeletal muscle mass and physical function. The trend test revealed that the skeletal muscle mass in the upper limbs, lower limbs, and trunk decreased with age in both men and women. In terms of physical function, grip strength, quadriceps strength, and standing time on one leg with eyes open decreased with age in both men and women. The time required to complete the TUG test increased with age among both men and women. The sit-up count decreased with age among men but not among women. No age-associated change was observed in the sit-and-reach distance among men or women.
Discussion
We studied age-associated changes in site-specific skeletal muscle mass in Japanese elderly people and examined the relationship between site-specific skeletal muscle mass and physical function. We found that the skeletal muscle mass of the upper limbs, lower limbs, and trunk tended to decrease with age in both men and women. Furthermore, the physical function parameter that had the strongest association with agerelated loss of skeletal muscle mass in both men and women was grip strength.
It has been reported that skeletal muscle mass decreases with age and that skeletal muscle mass is higher among men than among women at any age [8,16]. Our finding that the skeletal muscle mass of the upper limbs, lower limbs, and trunk decreases with age is thus consistent with the literature. Although the mechanism of age-related loss of skeletal muscle mass has not been elucidated, it may be related to an agerelated decrease in the number of motor nerves, changes in the neuromuscular junction, and a decrease in growth and sex hormone production [6,16,17].
We also examined the relationship between site-specific skeletal muscle mass (upper limbs, lower limbs, and trunk) and physical function. As skeletal muscle mass generally depends on physical constitution (e.g., height, weight, and BMI), when conducting a cross-sectional study of age-related changes, the effect of physical constitution cannot be ignored. In this study, we conducted a partial correlation analysis of the relationship between skeletal muscle mass and physical function. In the analysis, age, BMI, and the presence/absence of exercise habits were regarded as covariates. The analysis revealed a positive correlation between grip strength and skeletal muscle mass at each of the sites measured in men. Furthermore, in women, correlations were established between the skeletal muscle mass at all of the sites measured and all of the physical function parameters. Notably, grip strength had the strongest correlation with skeletal muscle mass at all sites among women.
As grip strength is easy to measure, it is a commonly used evaluation method [18] and is regarded as a measure of global muscle strength. It is associated with elbow flexor strength (r=0.64), knee extensor strength (r=0.53), trunk extensor strength (r=0.52), and trunk flexor strength (r=0.44) [19]. Moreover, grip strength is closely linked to both lower-limb muscle strength and the cross-sectional area of the lower-limb muscles and is also associated with walking ability [20]. Further, reduced grip strength is associated not only with a decline in the ability to perform activities of daily living but also with a decline in cognitive function [21]. Reduced grip strength is also linked to chronic diseases (e.g., diabetes and hypertension) [22], ischemic heart disease [23], depression [24], and increased mortality risk [25,26]. Thus, grip strength measurement is highly valuable in the assessment of physical function in elderly people [10,26]. These findings suggest that grip strength may be used as a representative measure of whole-body skeletal muscle mass in Japanese people as well.
A limitation of this study is that the participants comprised elderly people who were sufficiently independent to take part in the health support program organized by their local city. That is, the participants of this study were health-conscious elderly people. To generalize the findings of this study, it may be necessary to survey and analyze elderly people who do not or cannot take part in a health support program. Furthermore, this was a cross-sectional study. Changes that occur in individuals' skeletal muscle mass and physical function as they age cannot be tracked via a cross-sectional study. In addition, skeletal muscle mass and neurological factors are included in muscle strength, but neurological factors have not been investigated in this study. Therefore, a future longitudinal study is warranted.
Conclusions
We investigated the relationship between skeletal muscle mass and physical function in 684 community-dwelling elderly Japanese people. We found that skeletal muscle mass decreased with age, and that muscle strength, balance ability, and walking ability also declined with increasing age. Among the various physical functions, grip strength had the highest correlation coefficient with skeletal muscle mass. Our findings suggest that grip strength may be used as a representative measure of whole-body skeletal muscle mass in Japanese people. | 2022-04-21T15:02:29.912Z | 2022-04-01T00:00:00.000 | {
"year": 2022,
"sha1": "73033346cf075cad726a10c344af8e5645bd4cc2",
"oa_license": "CCBY",
"oa_url": "https://www.cureus.com/articles/91529-relationship-between-age-related-changes-in-skeletal-muscle-mass-and-physical-function-a-cross-sectional-study-of-an-elderly-japanese-population.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "df0bf0e63bba5dfd2e29c66f48ebf2a98c12bc4e",
"s2fieldsofstudy": [],
"extfieldsofstudy": []
} |
270055671 | pes2o/s2orc | v3-fos-license | Spatiotemporal variation in ecophysiological traits align with high resolution niche modelling in the short-range banded ironstone endemic Aluta quadrata
We used species distribution modelling to estimate habitat suitability for the short range endemic, Aluta quadrata, in the arid northwest of Australia and conducted ecophysiological monitoring across different seasons. We found higher plant performance in the high-suitability site, providing evidence to support the species distribution model for A. quadrata.
Introduction
Plant species demonstrate considerable variation in their geographic range and distribution, and their capacity to respond to environmental stressors has become a critical underpinning for management initiatives (Madliger et al., 2018; Maxwell et al., 2019). Identifying management initiatives to support conservation actions is critical, particularly when plant populations are exposed to increasing disturbance pressure (Felton and Smith, 2017). For short-range endemics (SREs, species with narrow distributions; Lavergne et al., 2004), knowledge of the factors shaping their distributions and of their capacity to cope with environmental pressures is often lacking (Bartholomew et al., 2022). This is often a significant challenge for management initiatives, as SREs commonly have tenuous population numbers localized to specialized habitat characteristics (Gosper et al., 2020; Howard et al., 2020). Consequently, many threatened SREs are inferred to be highly vulnerable to changes in their environment (Bartholomeus et al., 2011). Therefore, monitoring their persistence, as well as mitigating local impacts on communities, is critical to counter potential population losses, range contractions and, ultimately, extinctions (Maxwell et al., 2019; Watson et al., 2020; Cazzolla Gatti et al., 2022).
Many edaphic and climatic factors, including soil physical structure, landscape topography, extreme temperatures and unpredictable rainfall, as well as unprecedented disturbance events (e.g. cyclone activity, floods, fires), are associated with shaping species distribution patterns (Maxwell et al., 2019; Pascual et al., 2022). Species distribution models (SDMs) are an effective tool in spatial ecology, environmental management and conservation for describing, explaining and predicting species' likely biogeography, particularly in response to threatening processes such as climate change or environmental degradation (Casazza et al., 2021). Correlative or phenomenological models are the most common approaches (Elith and Leathwick, 2009), identifying statistical relationships between species occurrence and local environmental factors to characterize and relate landscape elements that are critical to the distributions of species (Kearney and Porter, 2009). The resulting predictions of the probability of occurrence across a landscape are often used to infer habitat suitability (Gogol-Prokurat, 2011; Guisan et al., 2017). However, the challenge with SREs is that there are often discrepancies between the resolution of the spatial data and the geographical range of the occurrence data of the species in focus (Tomlinson et al., 2020). This can often lead to predicting large areas of the landscape with high probability of occurrence despite known absences that may be due to geographic or dispersal barriers (Byrne, 2019; Casazza et al., 2021). Additionally, SREs are often associated with higher degrees of specialization or constraints to specific environmental conditions in an otherwise very challenging or stochastic landscape (Lavergne et al., 2004; Lannuzel et al., 2021). As such, edaphic and topographic data may be more appropriate for modelling SREs, as models can be constructed at a resolution that is biologically meaningful and informative across highly localized
distributions (Tomlinson et al., 2020). Whilst correlative modelling approaches describe the patterns of association between species occurrence and environmental or climatic data, they often fall short in delivering causal explanations for the projected outcome (Peterson et al., 2015). In addition, correlative models are limited in their transferability to novel or changing environments, and there are calls for in situ model validation and interpretations of causation (Kearney and Porter, 2009).
In conservation and restoration contexts, there are increasing demands for using ecophysiological measures to help assess the performance, sensitivities and resilience of plants in response to natural as well as manipulated environments (Cooke et al., 2021; Schönbeck et al., 2023). Ecophysiological surveys can provide critical insights into the patterns and processes governing persistence, especially when responses demonstrate spatiotemporal variation (Grossman, 2023), and can be used as a tool to validate correlative, occurrence-based SDMs by quantifying plant-environment responses (Tomlinson et al., 2021). This is because plant responses to the environment vary significantly, both spatially and temporally, with periods of ecophysiological activity and inactivity driven by seasonal moisture and temperature patterns (Schwinning and Sala, 2004; Hamerlynck and Huxman, 2009). Measures of gas exchange, chlorophyll fluorescence and leaf water potentials provide important ecophysiological indicators of plant performance in terms of physiological function, health and water stress (Madliger et al., 2018; Valliere et al., 2021; Schönbeck et al., 2023). In dryland ecosystems, increased physiological activity is typically triggered by sustained rainfall leading to elevated gas exchange, plant water use and productivity (Manzoni et al., 2014; Tarin et al., 2020), which can lead to improved plant health and reproductive success (Huxman et al., 2004). By contrast, during drought or thermal stress, plants undertake morphological and physiological adjustments (… et al., 2003). Prolonged periods of inactivity may risk reductions in cellular repair, and ultimately mortality (Larcher, 2003). These processes may become exacerbated depending on the environmental factors shaping the local niche, and plants at the edge of their distribution may experience greater environmental stress due to unfavourable niche characteristics (Abeli et al., 2014). Therefore, understanding the ecophysiological responses of where species may grow
and persist or where they may perish can help achieve targeted management actions to aid conservation more broadly.
Here, we constructed a high-resolution SDM informed by edaphic and topographic spatial data to identify the factors associated with the distribution of the narrow-range banded ironstone endemic Aluta quadrata Rye & Trudgen in arid tropical northwestern Australia. Using a suite of ecophysiological measures to quantify seasonal variation in plant performance in contrasting sites, this study develops an understanding of the niche that this plant occupies, in contrast to the common co-occurring Eremophila latrobei F.Muell., which has a widespread distribution throughout dryland ecosystems in Australia. As such, the broad research objectives of the study were to: 1) characterize the niche that A. quadrata occupies and establish meaningful biological correlates between modelled probability of occurrence and plant performance; 2) define the ecophysiological interactions of A. quadrata in sites of contrasting probabilities and validate whether modelled probability corresponds to differential physiological performance; and 3) evaluate the differences in ecophysiological performance of the SRE A. quadrata and the widespread generalist Eremophila latrobei subsp. glabra (from hereon referred to as E. latrobei). We expected that there would be associations between ecophysiological performance and the SDM output for A. quadrata, with individuals in high-suitability locations presenting elevated physiological performance compared to individuals in low-suitability locations. Moreover, we expected that the generalist, E. latrobei, would demonstrate different spatiotemporal patterns in ecophysiological functioning to A. quadrata, but whether these would imply higher or lower performance should vary depending on biogeography.
Study location and species
The study area is located in the southern Pilbara (PIL) and northern Gascoyne (GAS) regions, in the northwest of Western Australia. The climate in this area is typically characterized as semi-arid/arid, with >70% of the annual rainfall occurring during the hot summer period (average maximum air temperature: 38-41°C; December-March; Charles et al., 2013; Bureau of Meteorology, 2023). The autumn, winter and spring seasons are typically characterized by dry, warm days and cool nights (average maximum air temperatures: 25-35°C; Bureau of Meteorology, 2023), with infrequent or little rainfall. The landscape is characterized by the elevated ranges, ridges and mesa outcrops of the Hamersley Ranges in the south and the Chichester Ranges in the north (Pepper et al., 2013). An extensive network of rivers, drainage flats and floodplain systems of the Fortescue Marsh and the GAS region to the south envelops the Hamersley Ranges, with the coastal Roebourne Plains to the north of the Chichester Ranges (Pepper et al., 2013). The vegetation is dominated by Triodia hummock grasslands on rocky skeletal soils, with Acacia and Grevillea mosaic shrublands, and mallees and trees along deeper soils and riparian river and creek systems (McKenzie et al., 2009).
Aluta quadrata is a medium-sized shrub, ∼0.8-2.6 m in height, with white flowers, smooth, grey or pale brown fissured bark, and yellow-green needle-like foliage (Western Australian Herbarium, 1998-). Plant populations are restricted to a single banded ironstone range on the southern edge of the Hamersley Range in the PIL region, northwest Western Australia (Byrne et al., 2017; Binks et al., 2019), and grow on steep rocky slopes, gorges and gullies, with a preference for southern-facing slopes of rugged topography in skeletal soils, including Brockman Iron Formation substrates (Byrne et al., 2017). Currently there are an estimated 41 136 individuals distributed across three geographically discrete populations (Western Ranges, Pirraburdoo and Channar; Supplementary Fig. S1). We made ecophysiological comparisons between A. quadrata and a widespread, common co-occurring plant species, E. latrobei, a medium-sized shrub, ∼0.3-3 m in height, with red- or pink-coloured flowers and grey- to green-coloured leaves (Western Australian Herbarium, 1998-). Eremophila latrobei plants are widely distributed throughout the arid zone of the continent, sharing similar habitat preferences with A. quadrata, growing in stony red sandy soils on ironstone hills and, more broadly, across sandy soils on plains. Like A. quadrata, E. latrobei shares similar plant functional traits, with flowering occurring following summer rainfall between April and October, the production of woody fruit, and leaf shedding as seasons transition into the dry period (Richmond and Chinnock, 1994; Brown and Buirchell, 2011).
Species distribution modelling
We constructed a species distribution model for A. quadrata using presence point data and publicly available datasets describing the physical soil characteristics and geomorphology, following Tomlinson et al. (2020). High-resolution spatial data for aspect, elevation and slope were sourced from Gallant and Austin (2012a) and Gallant and Austin (2012b), whilst spatial data describing the percentage of clay, silt and sand at 15-cm depth were sourced from Viscarra Rossel et al. (2014a), Viscarra Rossel et al. (2014b) and Viscarra Rossel et al. (2014c), respectively. These data were all aligned and downscaled to a consistent 1-arc-second resolution (∼25 m²) by bilinear scaling, using the elevation data as a template, with the 'raster' package (Hijmans et al., 2015) in the R statistical environment (R Core Team, 2021). We used the maximum entropy algorithm implemented in MaxEnt version 3.3.3a (Phillips et al., 2006) to model the local distribution of A. quadrata in the three known populations along the southern edge of the Hamersley Range. Default MaxEnt parameter settings were used to develop logistic likelihoods of occurrence, with a value of 1 representing the highest likelihood (Phillips, 2008). To remove presence outliers, we applied a 10th percentile training presence threshold, which excludes the 10% most extreme (peripheral) observations. This was done to represent the 'core' of the known distribution and minimize the impact of uncharacteristic presence data.
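As a minimal illustration of the thresholding step, the sketch below computes a 10th percentile training presence threshold from suitability scores at presence points. The study applied the equivalent option within MaxEnt; the scores here are invented for illustration only.

```python
import numpy as np

# Hypothetical model suitability scores predicted at known presence locations.
presence_suitability = np.array([0.92, 0.85, 0.80, 0.78, 0.74, 0.70,
                                 0.66, 0.61, 0.40, 0.12])

# 10th percentile training presence threshold: the 10% of presence records
# with the lowest predicted suitability are treated as peripheral and
# excluded when defining the 'core' distribution.
threshold = np.percentile(presence_suitability, 10)
core = presence_suitability[presence_suitability >= threshold]
print(round(float(threshold), 3), len(core))
```

Cells in the landscape with suitability at or above this threshold would then be mapped as the core predicted distribution.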
We evaluated model performance by calculating the area under the threshold-independent receiver operating characteristic (ROC) curve (AUC), using values >0.9 to indicate well-validated models (Swets, 1988). We also calculated the True Skill Statistic (TSS) as a test of model robustness (Allouche et al., 2006; Williams et al., 2009) using the evalSDM function in the 'mecofun' v0.1.1 package (Zurell, 2020). Models with TSS <0.4 were identified as poor, whilst models with TSS >0.6 were identified as performing well (Beauregard and de Blois, 2014). We calculated a Boyce index of correlation between presence and suitability (Boyce et al., 2002) using the ecospat.boyce function in the 'ecospat' package (Di Cola et al., 2017), where values close to zero indicate models with predictive performance no better than random, and values close to 1 indicate strong predictive performance (Hirzel et al., 2006). We also tested the significance of the partial response curves using the pROC function in the 'ntbox' package (Osorio-Olvera et al., 2020). These performance metrics were calculated over 100-iteration bootstraps using 10% test presence, which reserves 10% of the known occurrence locations for testing the resulting models (Phillips et al., 2006; Phillips and Dudik, 2008). A full array of the available test statistics is presented in Supplementary Table 1.
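The TSS used above is defined as sensitivity + specificity − 1, computed from the confusion matrix of thresholded predictions against observed presences and absences. A minimal sketch, with hypothetical evaluation counts:

```python
def tss(tp, fp, fn, tn):
    """True Skill Statistic from confusion-matrix counts (Allouche et al., 2006)."""
    sensitivity = tp / (tp + fn)   # proportion of presences correctly predicted
    specificity = tn / (tn + fp)   # proportion of absences correctly predicted
    return sensitivity + specificity - 1

# Hypothetical counts for a thresholded suitability map:
# 45 presences and 90 absences correctly classified, 5 and 10 missed.
score = tss(tp=45, fp=10, fn=5, tn=90)
print(round(score, 3))
```

With these counts the score is 0.8, which would fall in the "performing well" band (TSS > 0.6) used in the text.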
Pilot models were developed using all the available candidate layers (elevation, aspect, slope, clay, sand and silt content, and bulk density) and were further refined by removing layers that contributed <5% to model fit (Supplementary Fig. S2). The edaphic factors that the MaxEnt algorithm determined to be the best predictors of the probability of occurrence of A. quadrata were slope (%), elevation (m), soil bulk density (mg/cm³) and silt content (%). As such, the final model was refined to these variables (Supplementary Fig. S2). The spatial projection was defined to encompass three IBRA bioregions (Thackway and Cresswell, 1997): the PIL, Little Sandy Desert (LSD) and GAS.
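The refinement rule can be sketched as a simple filter over percent contributions. The contribution values below are hypothetical; only the ≥5% cutoff and the four retained layers follow the text.

```python
# Hypothetical percent contributions of candidate layers to a pilot model;
# layers contributing <5% are dropped, as in the refinement step above.
contributions = {"elevation": 34.2, "slope": 28.1, "bulk_density": 18.7,
                 "silt": 12.4, "sand": 3.9, "clay": 1.8, "aspect": 0.9}
retained = sorted(k for k, v in contributions.items() if v >= 5.0)
print(retained)
```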
We interpolated a climate model to estimate microclimatic conditions associated with the spatial projection of the MaxEnt distribution model, in line with the methodology and justification outlined in Tomlinson et al. (2020). Essentially, microclimatic projections for summer (wet) and winter (dry) ambient air temperature, surface soil temperatures, soil water potential at 20-cm depth and solar radiance were calculated and averaged using the 'micro_global' algorithm of the 'NicheMapR' statistical package (Kearney and Porter, 2017) in R (R Core Team, 2021). We downscaled our spatial data to 20-arc-second resolution (∼300 km²), resulting in 1 651 622 grid point locations. At each point location, representing the centroid of the associated grid square, the physical soil characteristics were summarized into a format appropriate for 'NicheMapR' following a freely available soil texture calculator produced by the US Department of Agriculture (Soil Texture Calculator | NRCS Soils (usda.gov)), adapted to a computer algorithm similar to Gerakis and Baer (1999).
For each point location we calculated hourly microclimatic conditions for every day of the year, using five replicate years resampled from the interpolated climate model (New et al., 2002). Hourly values were then summarized to average daily conditions. For lack of any quantified proxies for vegetation shading, all microclimatic projections were run assuming full sun, with recognition that this does not capture all the microclimatic variation across the course of the day.
We identified four consecutive 90-day periods: when air temperature was warmest, when air temperature was coldest, and with the highest and lowest rainfall, respectively. At each location, hourly values were summarized as daily averages for these 90-day periods, which were again summarized to mean wettest- and driest-quarter averages for each point location over a 10-year period. In order to rescale these data back to the native 1-arc-second resolution, we used an interpolation approach (Carter et al., 2018), where the microclimatic data at our 20-arc-second resolution were fed into a generalized linear model (GLM) informed by the edaphic and geomorphological data for each location. We generated unique GLMs for each microclimatic parameter for the wettest and driest quarters using the 'stats' package. We then used these GLMs to estimate the same parameters at point locations describing the grid centroids of the 1-arc-second landscape using the 'predict' function in R.
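The rescaling step above can be sketched simply: fit a model of each microclimate variable against edaphic/terrain predictors at the coarse-grid centroids, then predict at the fine-grid centroids. The authors fitted GLMs in R's 'stats' package; below is a minimal Python sketch of the same idea using ordinary least squares with a single hypothetical predictor (elevation) and made-up values.

```python
def ols_fit(x, y):
    """Return (intercept, slope) of a simple least-squares line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    return my - slope * mx, slope

# Coarse-grid training data (hypothetical): soil temperature cools with elevation.
elev_coarse = [400, 420, 440, 460, 480]          # m
temp_coarse = [34.0, 33.2, 32.4, 31.6, 30.8]     # degrees C
b0, b1 = ols_fit(elev_coarse, temp_coarse)

# Predict at fine-grid centroids (the 1-arc-second landscape).
elev_fine = [405, 455]
temp_fine = [b0 + b1 * e for e in elev_fine]
print([round(t, 1) for t in temp_fine])  # [33.8, 31.8]
```

The real workflow does this per microclimate parameter and per quarter, with several edaphic and geomorphological predictors rather than elevation alone.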
We extracted the climate data for 1000 random points within the training extent of the MaxEnt distribution model to construct a linear model describing the microclimatic correlates of the modelled likelihood of occurrence and habitat suitability. Following the construction of a 'full' model, we applied a model reduction using the 'dredge' function within the 'MuMIn' package (Bartoń, 2014), and the models were examined by Akaike's Information Criterion for small sample sizes (AICc; Burnham and Anderson, 2002). However, model reductions did not substantially increase model parsimony, and the full model was retained and reported (Table 1).
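The AICc criterion behind the 'dredge' comparison can be computed directly: AICc adds a small-sample penalty to AIC, and the candidate with the lowest AICc is preferred (Burnham and Anderson, 2002). The log-likelihoods and parameter counts below are hypothetical, chosen only to illustrate the comparison.

```python
def aicc(log_likelihood, k, n):
    """AICc = -2*logLik + 2k + 2k(k+1)/(n - k - 1), for k parameters, n samples."""
    aic = -2.0 * log_likelihood + 2.0 * k
    return aic + (2.0 * k * (k + 1)) / (n - k - 1)

# Hypothetical: a full model (k=8) vs a reduced model (k=4) fitted to n=30 samples.
full = aicc(log_likelihood=-40.0, k=8, n=30)
reduced = aicc(log_likelihood=-43.0, k=4, n=30)
print(round(full, 1), round(reduced, 1))  # 102.9 95.6 -> reduced model preferred
```

Note the small-sample correction term grows quickly as k approaches n, which is why AICc rather than plain AIC is recommended when n/k is small.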
Microclimatic conditions in contrasting sites
To further evaluate soil microclimatic conditions between high- and low-probability sites, volumetric soil moisture content (cubic metre/cubic metre) and soil temperature (degrees Celsius) were measured in the field using HOBO® Micro Station Data Loggers (Onset Computer Corporation) that were fitted with two soil moisture probes (EC-5 ECH2O Dielectric Aquameter, Decagon Devices, Inc.) and two soil temperature probes (S-TMB Temperature Smart Sensors, Onset Computer Corporation). The probes were buried at approximate depths of 300 mm in the field and were set to log moisture content and temperature every 15 min for the entire duration of the study period (August 2021-October 2022). To convert volumetric moisture content to soil water potential, water retention curves were determined from soil composite subsamples extracted from each site, whereby three replicates of at least 5 g were saturated with water to obtain 'field capacity' moisture availability, followed by repeated oven drying at 75°C with soil moisture measurements undertaken every 10 min using a dew point psychrometer (WP4C Dew Point PotentiaMeter, Decagon Devices, Inc.) until the measured soil water potentials were drier than −100 MPa. The modelled habitat suitabilities of the high- and low-suitability sites were 0.745 and 0.214, respectively, and the sites were selected at similar landscape positions that were elevated and outside of major hydrological drainage areas, or creek lines. The average height × width of the measured plants was 139 ± 7 cm × 103 ± 10 cm and 152 ± 5 cm × 106 ± 5 cm for A. quadrata and E. latrobei, respectively. We did not find significant changes in plant sizes over the study period.
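The conversion from logged volumetric moisture to soil water potential relies on the site-specific retention curves described above. The sketch below assumes a simple log-linear retention form (ln(−ψ) linear in moisture content) purely for illustration; the actual fitted form and coefficients would come from the psychrometer drydown data for each site, and all numbers here are hypothetical.

```python
import math

def fit_log_linear(theta, psi):
    """Fit ln(-psi) = a + b*theta by least squares; psi in MPa (negative values)."""
    y = [math.log(-p) for p in psi]
    n = len(theta)
    mt, my = sum(theta) / n, sum(y) / n
    b = sum((t - mt) * (yi - my) for t, yi in zip(theta, y)) / \
        sum((t - mt) ** 2 for t in theta)
    return my - b * mt, b

# Hypothetical laboratory drydown points: (volumetric moisture m3/m3, psi MPa).
theta_lab = [0.05, 0.10, 0.15, 0.20]
psi_lab = [-80.0, -20.0, -5.0, -1.25]
a, b = fit_log_linear(theta_lab, psi_lab)

def to_water_potential(theta):
    """Convert a logger moisture reading to water potential via the fitted curve."""
    return -math.exp(a + b * theta)

print(round(to_water_potential(0.125), 2))  # -10.0 MPa for these made-up data
```

In practice each site gets its own curve, since texture and bulk density shift the moisture-potential relationship substantially.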
Gas exchange: photosynthetic rate, stomatal conductance and transpiration rate
For each of the species, photosynthetic rate (Amax) and stomatal conductance (gs) were measured using a LI-6400XT portable photosynthesis system and gas exchange analyser (LI-COR Biosciences, Lincoln, NE, USA) that was equipped with a 6400-40 leaf chamber fluorometer. All measurements were conducted between 0800 and 1200 h, representing the time when the plant is most photosynthetically active prior to stomatal closure at solar noon. All measurements were quantified under constant light-saturated conditions, whereby photosynthetically active radiation was maintained at 1200 μmol m⁻² s⁻¹. Additionally, internal carbon dioxide concentrations were equilibrated to 400 μmol CO₂ mol⁻¹ and relative humidity was maintained between 50 and 70%. Thermal conditions were maintained at ambient throughout all measurements to reflect seasonal temperature conditions at the time of measurement. All measurements were quantified on 10 replicate plants. On each plant, at least three replicate measurements were quantified on 2-3 individual tufts comprised of mature needle-like leaves that were located on the terminal stem. For each of the measurements, leaf tufts were allowed to equilibrate to the internal leaf chamber conditions, whereby the stability of gas exchange parameters was monitored in real time. Following measurement, the leaf tufts were harvested from the plant and returned to the ecophysiology laboratory at Kings Park Science for leaf area analysis. All measurements were leaf-area corrected prior to statistical analysis.
Leaf water potential
Leaf water potential measurements were conducted in order to determine plant available water (predawn measurements) and plant water status at the time of stomatal closure (midday measurements) (Turner, 1981). Predawn (Ψpd) sampling occurred prior to first light (between 0300 and 0400 h), whereby terminal stems that were ∼10 cm in length were harvested from plants and stored in a sealed foil bag in cool conditions prior to leaf water potential assessment. Midday (Ψmd) sampling occurred approximately between 1045 and 1100 h during summer and between 1100 and 1200 h in winter, representing the conditions of peak stress and approximate solar noon for the region. All measurements were conducted within 15-30 min of harvesting, whereby terminal stems were cut at a 45° angle and immediately secured within a Scholander Pressure Chamber (Model 1000, PMS Instruments Co, USA) with the cut stem externally exposed prior to pressurization (<100 bar). For each species, 10 replicate plants were measured, whereby 2-3 measurements were quantified per plant.

Chlorophyll performance: maximum quantum yield and electron transport rate
Prior to Ψpd assessment, chlorophyll fluorescence measurements relating to maximum quantum yield (Fv/Fm) were quantified using a chlorophyll fluorometer (PocketPI, Hansatech Instruments Ltd, UK) on leaf tufts for each replicate terminal stem, resulting in 2-3 replicate measurements across 10 plants for each species, per site. Dark adaptation was not required for leaf tufts, as stems were harvested in the dark during the predawn measurement window. Electron transport rate (ETR) measures were conducted simultaneously with gas exchange measurements using the leaf fluorometer chamber attached to the LI-6400XT (see above, gas exchange measurements). For ETR measurements specifically, each of the three replicate tufts was measured a single time, equating to three measurements per plant, per site.
Statistical analysis
Soil microclimate time series data (soil temperature and soil water potentials) at 30-cm depth were analysed using generalized additive models (GAMs) with the 'gam' function from the 'mgcv' package (Wood and Wood, 2015). For each microclimate variable, site was considered a fixed effect to quantify microclimatic differences over the whole study period using a spline-based cubic regression smoothing term for each predictor, followed by an F-test against a global GAM without site as a fixed effect. After fitting the GAM, the residuals of the spline fit were visually inspected, then compared against different model combinations, smoothing terms and a linear model using AIC, R² and RMSE (Wood and Wood, 2015; Haslbeck et al., 2021) via the 'compare_performance' function in the 'modelbased' package (Makowski et al., 2020).
All ecophysiological parameters (A, gs, Fv/Fm, ETR, Ψpd, Ψmd) were analysed by fitting generalized linear mixed effects models (GLMMs), using the 'glmer' function from the 'lme4' package (Bates, 2010; Bates et al., 2015) in the R statistical environment (R Core Team, 2021). For each ecophysiological parameter, we fixed species (A. quadrata and E. latrobei), site suitability (high and low) and the monitoring period (August 2021, October 2021, March 2022, May 2022, August 2022, October 2022), with A. quadrata, the high-suitability site and August 2021 set as the model intercepts. For parameters (Amax, gs, Ψpd) where we conducted multiple measurements across each plant, leaf replicate measurements were nested within plants for each monitoring period as the random effect. All main effects, as well as all possible two-way and three-way interactions, were fitted, followed by assessing model strength via marginal and conditional R² values (Schielzeth and Nakagawa, 2013). In addition, model assumptions (i.e. normality of residuals and random effects, linear relationship, homogeneity of variance and multicollinearity) for each ecophysiological parameter were assessed through graphical inspection with the help of the 'check_model' function from the 'performance' package (Lüdecke et al., 2021). When the data did not follow model assumptions, log- (for all parameters except Fv/Fm) or logit-transformations were conducted, followed by refitting and visual inspection of the GLMM. Following model fitting, we performed type II Wald tests using the 'Anova' function in the 'car' package to evaluate fixed and interaction effects (Fox et al., 2007).
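The log and logit transformations mentioned above are standard fixes when residuals violate GLMM assumptions: log for strictly positive traits (e.g. gas exchange rates) and logit for quantities bounded on (0, 1) such as Fv/Fm. A minimal sketch (values hypothetical, not study data):

```python
import math

def logit(p):
    """Map a proportion in (0, 1) to the unbounded real line."""
    return math.log(p / (1.0 - p))

def log_transform(x):
    """Log transform for strictly positive measurements."""
    return math.log(x)

# Hypothetical Fv/Fm values spanning stressed to healthy leaves.
fvfm = [0.30, 0.55, 0.80]
print([round(logit(p), 2) for p in fvfm])  # [-0.85, 0.2, 1.39]
```

After transforming, the GLMM is refitted and the residual diagnostics are inspected again, as described in the text.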
Species distribution modelling
The final species distribution model of A. quadrata was statistically robust, with a high AUC (0.935; Pearce and Ferrier, 2000). The average habitat suitability index (HSI) at known occurrence locations was 0.68 (range = 0.02-0.92). Over 60% of the known occurrence locations (∼27 700 individual plants) were modelled at habitat suitability >0.7. Only 11% of individuals were modelled to occur in habitat with an HSI <0.5. The strongest contributor to the modelled distribution was slope (56.2%), followed by elevation (13.1%) and bulk density (12.4%). High-suitability sites were associated with slopes of >15%, elevation between 425 and 445 m, an average soil bulk density of 1.41 g/cm³ and silt contents of <2%, whilst low-suitability sites were associated with slopes <10%, elevations >460 m or <420 m, soil bulk densities greater or less than 1.41 g/cm³ and silt contents of >2% (Supplementary Fig. S3). The northern fringes of the Hamersley Ranges were also predicted to have a high likelihood of occurrence, despite no known populations existing beyond the three populations identified along the southern extent of the range (Fig. 1). Additionally, the intervening area between the three extant populations is predicted to have a high likelihood (up to 98.2%) of supporting A. quadrata.
Soil microclimate variation
There were significant differences in in situ soil temperature (t-value = 36.69, P < 0.001, R² = 0.530) and soil water potential (t-value = −57.04, P < 0.001, R² = 0.576) between high- and low-suitability sites. On average, low-suitability sites were 0.58 times warmer and had 1.72 times drier conditions over the study period. The largest variation in temperatures for both sites was recorded during September 2021 to March 2022, coinciding with the periods leading up to summer rainfall, with minimum and maximum temperatures between 16 and 61°C (Fig. 2). During this period, median water potentials ranged between −85.2 and −16.4 MPa in the low-suitability site and between −41.0 and −5.3 MPa in the high-suitability site. Thereafter, soils rehydrated following summer rainfall in both sites, with median water potentials between −10.0 and −3.0 MPa in the low-suitability site and between −6.4 and −1.0 MPa in the high-suitability site for the months of January-March 2022 (Fig. 2). Late summer, autumn and winter rainfall events (between April and September 2022; Fig. 2 and Supplementary Fig. S3) further elevated median soil water potentials in both sites to between −1.0 and −0.2 MPa in the low-probability site and −0.9 and −0.2 MPa in the high-probability site.
Ecophysiological assessment
There were significant site-level differences for all ecophysiological parameters, except for Fv/Fm ratios (Table 2; all P < 0.029). Overall, plants in the high-suitability site had ecophysiological responses of up to 24% greater magnitude compared to those from the low-suitability site (Fig. 3). The most responsive parameters were associated with gas exchange (Amax: χ² = 15.10, gs: χ² = 56.74; both P < 0.001), driven by site differences in March, August and October 2022 (Fig. 3; all P < 0.001). Species-level differences were characterized by E. latrobei having a higher photosynthetic rate, Fv/Fm, ETR and predawn leaf water potentials (Fig. 3; all P < 0.011), but not stomatal conductance or midday leaf water potentials (Table 2).
There was strong seasonal variation in plant performance in both species between August 2021 and October 2022, as indicated by all ecophysiological parameters (Table 2 and Fig. 3). During periods of peak stress, photosynthetic rates were strongly reduced and stomatal conductance was <0.025 mol H₂O m⁻² s⁻¹ in both species (Fig. 3). Chlorophyll performance was reduced to maximum quantum yield measures of Fv/Fm < 0.3 and decreased ETR responses <50 μmol electrons m⁻² s⁻¹ (Fig. 3). As well, traits associated with plant water stress indicated low plant available water, with Ψpd as well as Ψmd < −8 MPa (Fig. 3).
Discussion
By integrating a high-resolution SDM with mechanistic measurements of seasonal variation in ecophysiological performance, we have demonstrated a strong association between modelled habitat suitability and ecophysiological performance in the SRE A. quadrata. Plants growing at sites with high modelled suitability, according to remotely sensed edaphic and geomorphological conditions, had higher rates of ecophysiological performance across most of the traits that we measured. We posit this as validation that modelled likelihood of occurrence is indicative of habitat suitability for A. quadrata.
In comparison with E. latrobei, we found that the SRE A. quadrata had decreased photosynthetic activity, chlorophyll fluorescence and predawn leaf water potentials, indicating species-level differences even when measured in the same environment. The knowledge generated from this study will help to better understand A. quadrata within its environment and lead to improved management and conservation of this species, and potentially other SRE species more broadly.
Patterns of modelled habitat suitability
By modelling the distribution, we identified locations varying in probability of occurrence based on the correlation of occurrence data with edaphic factors. Here, A. quadrata was modelled to occur predominantly on elevated, rocky slopes along the Hamersley Ranges. Of particular note, our SDM projected only 0.1% of the potential A. quadrata distribution with a likelihood of occurrence, inferred as habitat suitability (Guisan et al., 2017), >0.7. The modelled preference of A. quadrata for elevated, mesic habitats with high slope percentages and shallow, well-drained soils with low silt contents is characteristic of SRE species persisting on similar geological land forms (Gibson et al., 2012; Di Virgilio et al., 2018; Robinson et al., 2019; Tomlinson et al., 2020). Also consistent with modelled distributions of SRE species in similar geological land forms (Tomlinson et al., 2020), the SDM identified substantial areas of high suitability throughout the Pilbara (∼1132 km²) outside the known extent of A. quadrata. The modelling approach of this study is valuable in identifying these pockets of suitable habitat, both in proposing likely locations of unidentified populations of the species (White et al., 2020) and for guiding translocations (Guisan et al., 2017; Draper et al., 2019), especially where vacancy of such habitats is typically ascribed to stochastic extinction or failure of the species to disperse there naturally (Byrne et al., 2019). Although modelled likelihood of occurrence is often assumed to indicate habitat suitability (Gogol-Prokurat, 2011; Guisan et al., 2017), a common challenge for SDM projections is to extrapolate from this inferred suitability to verified species performance (Hereford et al., 2017). We found that inferred habitat suitability was strongly associated with differences in ecophysiological performance of A. quadrata, such that individuals in high-suitability sites had higher physiological performance compared to the low-suitability site. In addition to the spatial contrasts explored here, we found temporal variation as seasons transitioned between peak functioning after rainfall and stress periods between seasons (e.g. March-August 2022 and October 2022, respectively) and years (August and October, 2021 and 2022). These patterns demonstrated clear climate-driven underpinnings to habitat suitability at these sites, with plant activity and inactivity found in response to rainfall and drought, respectively. Nevertheless, differences in plant performance between these sites disappeared during seasons of increased water stress (e.g. August and October 2021), indicating that high average suitability does not preclude a site from imposing substantial challenges on the local population during high-stress periods; rather, climatically favourable seasons drive site differences. As such, although habitat suitability modelling trained on edaphic traits can provide more accurate projections at finer resolution than those trained on climatic data (Tomlinson et al., 2020), the greatest challenge for such modelling approaches is to infer climatic patterns and to project these to estimate the effect of changing climates and the increased likelihood of extreme climatic events on SRE plant populations.
Our approach was advocated on the basis that the standardized training layers allow directly comparable models to be developed for similar species anywhere in Australia (Tomlinson et al., 2020). The 25-m² spatial resolution does, however, lead to smoothing or averaging of microtopographic variation in edaphic factors within each modelled grid cell. Other studies modelling the distribution of SRE plants have used LiDAR technology to map microtopographic features at a 2-m resolution (Di Virgilio et al., 2018; Robinson et al., 2019), and downscaling may further help understand landscape variation at a local scale. The advantage of the edaphic layers that we used here is that they can be directly fed into biophysical models to downscale microclimatic conditions at each site (Kearney and Porter, 2009; Tomlinson et al., 2020). Here, we found that these microclimatic inferences closely correlated with plant physiological traits, which could theoretically be used to project likely performance under modelled future climates. Nevertheless, such projections will always represent inferences made on the basis of statistical correlations, and recent studies have employed mechanistic models informed by phenophysiological responses of species in relation to microclimatic niche gradients (Hereford et al., 2017; Schouten et al., 2020). These models can be particularly insightful, as they have the capacity to simulate environmental stressors across a plant life cycle (Schouten et al., 2020), potentially identifying critical stages that govern population growth, reproduction and persistence. By scaling these models across the distribution, at large management scales, it is theoretically possible to determine management triggers based on projected plant performance, but the nature of mechanistic models is to overestimate the realized niche by identifying climatically suitable space without reference to biotic filters (Peterson et al., 2015). A hurdled modelling approach (Ridout et al., 1998) may optimize the predictive potential of both techniques, where edaphically informed SDMs are used to identify a template of suitable habitats, and mechanistic models are then applied within this constrained space to estimate plant performance under changing conditions.
Ecophysiology of A. quadrata and comparisons with E. latrobei
Generally, the ecophysiological performance of A. quadrata correlated well with modelled habitat suitability. Nevertheless, there were seasonal patterns in plant performance that were not well represented in the modelling, especially in seasons of extreme physiological stress. As seasons transitioned from the wet into the dry (e.g. during October 2022), ecophysiological activity was characterized by downregulation of gas exchange and reductions in chlorophyll fluorescence, indicating changes from productive growth phases during the wet season to plant senescence in the dry season (Manzoni et al., 2011; Vico et al., 2015). Whilst these patterns present typical responses of plants to shifts in seasonal water availability (Chaves et al., 2003; Ogle and Reynolds, 2004), both species persisted through intense plant water stress conditions. For example, during the period of highest water stress (lowest measured water availability), when predawn leaf water potentials were −9.1 to −9.9 MPa and soil water potentials were <−2 MPa (e.g. October 2021), we found up to 65% reductions in chlorophyll fluorescence metrics (Fv/Fm < 0.3 and ETR < 50 μmol electrons m⁻² s⁻¹) and up to 95% reductions in photosynthetic activity and stomatal conductance in both species. Whilst for many plant species optimal Fv/Fm ratios typically vary between 0.75 and 0.83 (Maxwell and Johnson, 2000; Schönbeck et al., 2023), a reduction of Fv/Fm ratios to <50% efficiency is typically associated with very low plant health and an increased likelihood of mortality due to photoinhibition (Demmig-Adams and Adams III, 2006). In addition, previous research has reported that recovery of photosynthetic activity in several species is not possible if stomatal conductance responses fall below the severe drought threshold of 0.05 mol H₂O m⁻² s⁻¹ (Flexas et al., 2006). However, we did not observe mortality in any individual of either species over the study period, with plants recovering to Fv/Fm ratios >0.75 in the wet season. Therefore, in highly seasonal landscapes like the PIL and GAS region, the biogeographical filters that lead to short-range endemism may be dependent on seasonal or ephemeral conditions. Interestingly, whilst gas exchange measures presented site-level differences, Fv/Fm measures did not demonstrate the same level of variation. This could be explained by Fv/Fm representing the maximum potential efficiency of photosystem II (PSII), which reflects environmental variation in stressors over the seasonal window impacting physiological activity and morphological adjustments to leaves (Maxwell and Johnson, 2000), rather than instantaneous changes in the environment impacting photosynthetic activity. In addition, the edaphic differences between the contrasting sites may have been pronounced at the same level of plant water stress, but not to the extent of causing severe impairment of PSII. By contrast, ETR responses demonstrated stronger variation than Fv/Fm ratios, which is likely explained by this trait correlating more strongly with photosynthetic activity rates (Galmés et al., 2007). Nevertheless, the pattern of recovery from the dry October 2021 to the wet March 2022 confers adaptation of both species to their water-limited environment and the ability to withstand periods of severe drought stress.
Our study found species-level differences in physiological activity that were characterized by elevated photosynthetic activity in E. latrobei in contrast to A. quadrata, whilst presenting similar stomatal conductance responses over seasons and across sites. These responses typically indicate higher intrinsic water use efficiency (WUEi; the ratio between photosynthetic activity and stomatal conductance, A/gs; see Fig. 3 and Supplementary Fig. S5), which likely reflects increased water stress tolerance in E. latrobei (Atkin et al., 1998; Kimball et al., 2016; Valliere et al., 2021). In addition, higher WUEi and photosynthetic rates may also suggest increased growth rates and a competitive advantage for resources over A. quadrata in the same environment (Tezara et al., 2010; Tarin et al., 2020). Despite the differences in WUEi, both species displayed average decreases of up to −2.14 MPa in midday leaf water potentials relative to their predawn measures, as well as decreasing stomatal conductance at moderate to high leaf water potentials as seasons transitioned. At this scale, these responses suggest both species to be anisohydric, maintaining higher stomatal conductance rates in contrast to isohydric species and allowing leaf water potentials to decline with decreasing soil water potential (McDowell et al., 2008). Whilst our climate data show that plants can persist for at least 103 days between August and November 2021 without any rainfall (see Supplementary Fig. S3), in a landscape that was beginning to experience thermal extremes as seasons transitioned into the hot summer period, there is uncertainty about how long both species could continue to survive in a period of longer-term drought. Their ability to recover without mortality over such a period further supports that both species are highly adapted to their arid environment. However, further investigation under controlled environmental studies or field surveying is necessary to understand their drought survival capacity and threshold for mortality over sustained periods of severe water deficit. Additionally, whilst our study focused on reproductive, adult plants, it is likely that tolerance to seasonal stressors would vary between seedling, juvenile and adult states, and further research is necessary to identify ontogenetic sensitivities to abiotic stress (Lewandrowski et al., 2021; Gremer, 2023). These physiological data can work to optimize emerging mechanistic models (Schouten et al., 2020) and increase the capacity to explain and predict changing spatiotemporal patterns and population dynamics to guide conservation action for SRE plants.
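The intrinsic water use efficiency invoked above is simply the ratio A/gs, i.e. carbon gained per unit water-vapour conductance. A minimal sketch with hypothetical values (not measured values from the study) shows why equal stomatal conductance but higher photosynthesis implies higher WUEi:

```python
def wue_intrinsic(a_max, g_s):
    """WUEi = A/gs; A in umol CO2 m-2 s-1, gs in mol H2O m-2 s-1."""
    return a_max / g_s

# Two hypothetical plants with equal stomatal conductance: the one with the
# higher photosynthetic rate has the higher intrinsic water use efficiency.
print(round(wue_intrinsic(15.0, 0.15), 1))  # 100.0 umol CO2 / mol H2O
print(round(wue_intrinsic(10.0, 0.15), 1))  # 66.7 umol CO2 / mol H2O
```

This is the pattern reported for E. latrobei versus A. quadrata: similar gs across seasons and sites, but higher A, hence higher A/gs.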
Conservation implications
Recent studies have emphasized the importance of understanding the biogeographical (Draper et al., 2019) as well as ecophysiological (Madliger et al., 2018; Cooke et al., 2021) contexts of species for conservation. When combined, these approaches can provide strategic applications for plant conservation and ecological restoration (Madliger et al., 2018; Tomlinson et al., 2021; Valliere et al., 2021; Schönbeck et al., 2023). Given that high physiological activity is associated with increased productivity and reproductive success of individuals, highly suitable locations where the species is present should be considered for targeted conservation. Research rarely intensively ground-validates model predictions, but where this has been done, high-suitability habitats have been found to harbour previously unidentified populations of SRE species (White et al., 2020). Nevertheless, habitat with high modelled suitability can also be used to provide recipient locations for conservation translocations (Draper et al., 2019), given the high risk of stochastic losses of short-range endemic plant populations (Bartholomeus et al., 2011).
The modelled habitat suitabilities that we identified proved highly correlated with physiological traits governing species persistence. However, from an applied perspective, such spatiotemporal variation can lead to a high level of uncertainty, especially when ecophysiological measurements used to validate SDM outputs are conducted in a dry season, highlighting the importance of undertaking contrasting seasonal measurements in climatically stochastic landscapes (Grossman, 2023). Nevertheless, whilst our study only investigated spatiotemporal variation at contrasting sites, the next logical step for research is to account for greater variation in landscape ecotypes and maximize spatial variability. Many A. quadrata plants are distributed along drainage channels with varying degrees of slope angle, elevation and soil bulk density, which could further impact physiological activity. By evaluating the interactions of these edaphic factors, we will likely increase our understanding of the patterns and processes underpinning plant performance across the landscape, and deliver evidence-based insight into the ongoing management and conservation of this threatened SRE.
taken, and pay our deepest respects to elders past, present and emerging. We thank Rio Tinto Iron Ore for their kind and generous support of project funding, logistics and equipment for the duration of the project. We also would like to thank Caroline Gill (Rio Tinto Iron Ore) and Greg Cawthray (School of Biological Sciences, The University of Western Australia) for their support during various fieldwork trips. Finally, we thank staff and students based at Kings Park Science, Department of Biodiversity, Conservation and Attractions (DBCA), in particular Rebecca Campbell and Emma Dalziell, for administrative, safety and laboratory support throughout the research year, as well as regional support from the DBCA Pilbara region.
Figure 1 :
Figure 1: Map of the niche model indicating a) the distribution across three IBRA bioregions (PIL, LSD and GAS); b) the geographical extent of occurrence for known occurrences of A. quadrata, defined by three distinct populations; c) the extent of the Western Ranges population; and d) the locations of the two study sites within the study zone. Increasing intensity of colour (from blue to red) indicates a higher probability of occurrence from 0 to 1 HSI.
Figure 2 :
Figure 2: Soil microclimate variation for a) soil temperature and b) soil water potentials in the high- (red) and low- (blue) suitability sites. Microclimate parameters were measured in situ at 300-mm depth, recorded at 15-min intervals and fitted with a spline curve to smooth the overall trends.
Table 1 :
Test statistics from one-way analysis of variation examining the microclimatic correlates of the likelihood of occurrence for A. quadrata based on microclimatic conditions calculated at 1000 random point samples across the projected landscape
Pain Behavior of People with Intellectual and Developmental Disabilities Coded with the New PAIC-15 and Validation of Its Arabic Translation
Pain management necessitates assessment of pain; the gold standard being self-report. Among individuals with intellectual and developmental disabilities (IDD), self-report may be limited and therefore indirect methods for pain assessment are required. A new, internationally agreed upon and user-friendly observational tool was recently published—the Pain Assessment in Impaired Cognition (PAIC-15). The current study’s aims were: to test the use of the PAIC-15 in assessing pain among people with IDD and to translate the PAIC-15 into Arabic for dissemination among Arabic-speaking professionals. Pain behavior following experimental pressure stimuli was analyzed among 30 individuals with IDD and 15 typically developing controls (TDCs). Translation of the PAIC followed the forward–backward approach; and reliability between the two versions and between raters was calculated. Observational scores with the PAIC-15 exhibited a stimulus–response relationship with pressure stimulation. Those of the IDD group were greater than those of the TDC group. The overall agreement between the English and Arabic versions was high (ICC = 0.89); single items exhibited moderate to high agreement levels. Inter-rater reliability was high (ICC = 0.92). Both versions of the PAIC-15 are feasible and reliable tools to record pain behavior in individuals with IDD. Future studies using these tools in clinical settings are warranted.
Introduction
Individuals with intellectual and developmental disabilities (IDD), neurodevelopmental disorders characterized by intellectual difficulties and limitations in various aspects of living, are at an increased risk of acute and chronic pain compared to typically developing controls (TDCs) [1,2]. This risk stems from, among other factors, a possibly increased sensitivity to noxious stimuli [3,4], relatively high rates of injuries and falls [5,6], and secondary consequences of the IDD etiology related to painful diagnostic procedures, medical complications, use of assistive devices, etc. [7-11]. Consequently, the prevalence of pain among people with IDD is relatively high, as concluded from proxy reports, e.g., [12-14].
The limited cognitive and communicative abilities of individuals with IDD present a major obstacle in quantifying their pain, which renders pain assessment and hence pain management a significant challenge [11,15]. In order not to have to rely on self-report alone, dozens of behavioral scales have been developed over the last two decades that aim to assess pain among non-communicative individuals, e.g., [16][17][18]. Although these scales are valid and reliable, some of them may require special expertise, and other scales may not necessarily be applicable to populations with differing cognitive impairments or ages, e.g., [19,20], thus limiting a comparison between populations. Furthermore, not all the scales are applicable to both acute/experimental and chronic pain states, potentially limiting the research on pain management interventions.
A very promising scale in this respect was developed via a European-funded international initiative (COST action TD1005) that took place between 2011 and 2017. A group of international (16 countries) and interdisciplinary researchers empirically investigated which items from established observational pain scales allowed for a reliable and valid assessment of pain in individuals with cognitive impairment (626 individuals with cognitive impairment of various etiologies and 59 controls were evaluated). The final product of this team was an internationally agreed-upon tool for Pain Assessment in Impaired Cognition (PAIC-15) [21,22]. The PAIC-15 was introduced as a meta-tool that can be used for diverse populations of individuals with cognitive impairments [22]. It also includes atypical behaviors, such as freezing, which are frequent among individuals with IDD [11] but seldom appear in other scales.
However, to date, the use of the PAIC-15 has not been thoroughly examined among individuals with IDD. Therefore, one aim of this study was to compare pain behavior of individuals with IDD to that of TDCs, quantified by the PAIC-15. Based on previous studies, e.g., [3,4,23], we hypothesized that pain behavior of the former would be increased compared to TDCs. Due to the aforementioned advantages of the PAIC-15, it is important to disseminate its use worldwide. An imperative step in this direction is to translate the original English version into as many languages as possible. Thus far, the PAIC-15 has been translated into seven languages (https://paic15.com/, accessed on 21 September 2021). Therefore, another aim was to translate the PAIC-15 into Arabic and validate the translation.
Participants
This study included 45 adults: 30 individuals with IDD (IDD group, age 35.3 ± 6.2 years) and 15 typically developing controls (TDCs group, 31.3 ± 7.5 years). The rationale for the 2:1 allocation ratio was the much greater inter-subject variability among individuals with IDD in terms of IDD etiology and level. Individuals with IDD were recruited from day care centers for people with IDD (belonging to two organizations for people with disabilities: Alin and Elwyn). IDD was diagnosed according to clinical assessment and standardized testing of intelligence (including the Wechsler Intelligence Scale for Children-Revised and the Wechsler Preschool and Primary Scale of Intelligence) performed by a team from the Ministry of Social Affairs and Social Services, which supervises all services related to IDD. Individuals in this group had an estimated level of mild or moderate IDD and the ability to understand their mother tongue. TDCs were students and employees of Tel Aviv University or employees of the day care center for people with IDD. Exclusion criteria for all the participants were as follows: known acute or chronic pain, bruises or injuries in the testing regions (medical information on the participants with IDD was retrieved from their medical records by their legal guardian upon request, and additional information was also obtained from the primary caregiver).
This study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Ethics Committee of Tel Aviv University (3012/2012), the institutional review board of the Ministry of Social Affairs and Social Services (201323-01), and by the legal guardians of the participants with IDD. Prior to entering this study, written informed consent was obtained from all the TDCs and from the legal guardians of all the individuals with IDD, after they had received an explanation of this study's aims and protocols. In addition, the protocol was explained to the participants with IDD and their escorts upon their arrival at the lab, and each step of the protocol was carried out only after their oral consent was obtained.
Pressure Algometer
Pressure stimuli were delivered using a hand-held pressure algometer (Somedic Sales AB, Algometer type II, Hörby, Sweden). The algometer has a built-in pressure transducer, an electronic recording and display unit, a power supply, and a subject-activated push button connected via a cable to the instrument. It has an accuracy of ±3%, and its unit of measurement is the kilopascal (kPa). The algometer operates by exerting a constantly increasing rate of pressure that is monitored by a cursor presented on the display. The tip of the algometer pressed against the skin has an area of 1 cm². The algometer was calibrated before each measuring day.
PAIC-15
The behavioral response to pressure stimuli was analyzed using the PAIC-15 [22]. This tool consists of 15 items divided into three behavioral domains: five items for facial expressions, five items for body movements, and five items for vocalizations. Each item has a title and an explanation to ensure the understanding of each item's meaning. The items are scored on a 0-3 scale for magnitude of appearance: 0 = not at all, 1 = slight degree, 2 = moderate degree, and 3 = great degree. There is an additional option of "not scorable" for each item. The sum score of all the items is the final PAIC-15 score; the higher the sum score, the higher the probability that the person is in pain. The 15 items are: Frowning, Narrowing eyes, Raising upper lip, Opening mouth, Looking tense, Freezing, Guarding, Resisting care, Rubbing, Restlessness, Pain-related words, Shouting, Groaning, Mumbling, and Complaining. Note that one participant with IDD turned his head during noxious stimulation, which prevented us from scoring his face items; the sum score for this participant was calculated without these items.
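The scoring rule above (fifteen 0-3 items, an optional "not scorable" mark, and a sum computed over the scorable items only) can be sketched in a few lines of Python. This is an illustrative reimplementation, not the authors' code:

```python
# Minimal sketch of PAIC-15 scoring. "Not scorable" items are recorded
# as None and excluded from the sum, as was done for the participant
# whose face items could not be coded.

PAIC_ITEMS = [
    "Frowning", "Narrowing eyes", "Raising upper lip", "Opening mouth",
    "Looking tense",                                    # facial expressions
    "Freezing", "Guarding", "Resisting care", "Rubbing",
    "Restlessness",                                     # body movements
    "Pain-related words", "Shouting", "Groaning", "Mumbling",
    "Complaining",                                      # vocalizations
]

def paic15_sum(ratings):
    """Sum the scorable items of one observation.

    ratings: dict mapping item name -> score 0..3, or None for
    "not scorable". Returns (sum_score, n_scorable_items).
    """
    scores = []
    for item in PAIC_ITEMS:
        value = ratings.get(item)
        if value is None:
            continue                       # "not scorable": excluded
        if not 0 <= value <= 3:
            raise ValueError(f"{item}: score must be 0-3, got {value}")
        scores.append(value)
    return sum(scores), len(scores)
```

Reporting the number of scorable items alongside the sum makes it explicit when a participant's score was computed over fewer than 15 items.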
Stimulation and Recording Procedures
The experimental protocol was designed by the experimental pain working group of the European Cooperation in the Field of Scientific and Technical Research (COST), termed "Pain assessment in patients with impaired cognition, especially dementia" (action TD1005), of which this study's authors are members. The aims of this international group are to raise awareness of the subject of pain among individuals with cognitive impairment and to develop a pain assessment toolkit for this population. The protocol was first tested on TDCs prior to testing individuals with IDD in order to verify the intensity of the pressure stimuli and the ability to endure them for the required duration [23]. Prior to actual testing, all the participants underwent a training session in which they were familiarized with the sensation delivered via the pressure algometer and were trained in rating pain sensation with various scales. For training, the participants received pressure stimuli in the thigh region (which was not stimulated further during testing) at the same intensities used during the actual testing. In addition, the participants were instructed how to maintain their head so that their expressions would be best captured by the camera.
After a five-minute break, the experiment began. The experimental setup is described at length in our previous study [24]. The examiner stood behind the subject in order not to interfere with videotaping and to properly administer the stimuli. Each subject received three pressure stimuli of 50, 200, and 400 kPa, applied with the pressure algometer to the upper mid part of the trapezius muscle (halfway between the neck line and the shoulder line). These intensities were chosen based on a preliminary experiment conducted on TDCs, which aimed to identify one innocuous stimulus (for control), one mildly noxious stimulus, and one moderately noxious stimulus [23,25]. Each stimulus rose rapidly from a baseline of 0 kPa to the designated intensity and lasted seven seconds (a two-second increase from baseline and five seconds at the target intensity). The inter-stimulus interval was four minutes, in order to avoid carry-over effects between stimuli and to provide sufficient time for pain rating. In addition, the examiner moved the stimulation site by approximately 0.5 cm between stimuli.
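The timing described above (a two-second ramp, a five-second plateau, and four-minute inter-stimulus intervals) can be laid out as a simple schedule. The following sketch is hypothetical and only illustrates the arithmetic of the protocol; it is not the software used in the study:

```python
# Hypothetical timeline of the stimulation protocol: three pressure
# stimuli (50, 200, 400 kPa), each a 2-s ramp plus a 5-s plateau,
# separated by 4-min inter-stimulus intervals.

RAMP_S = 2          # rise from 0 kPa to the target intensity
PLATEAU_S = 5       # hold at the target intensity
ISI_S = 4 * 60      # inter-stimulus interval

def schedule(intensities_kpa=(50, 200, 400), start_s=0.0):
    """Return (onset_s, offset_s, kPa) tuples for each stimulus."""
    events, t = [], start_s
    for kpa in intensities_kpa:
        onset, offset = t, t + RAMP_S + PLATEAU_S
        events.append((onset, offset, kpa))
        t = offset + ISI_S   # wait 4 min before the next stimulus
    return events
```

For example, with the default parameters the second stimulus begins 247 s after the first one starts (7 s of stimulation plus a 240-s interval).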
The participants were videotaped throughout the entire protocol, and the behavioral responses were analyzed retrospectively, separately for each stimulation condition (four conditions in total: rest = no stimulation, 50, 200, and 400 kPa). At baseline, the subjects were not engaged in any specific activity; the analysis of the PAIC-15 was conducted for a random 15-s segment. During pressure stimulation, the analysis commenced immediately upon the examiner starting the stimulus, and it lasted the entire duration of stimulation. The coders for the original (English) version and the Arabic translation of the PAIC-15 worked separately. The coders for the original version had a background in neuroscience, and one of them had been working for several years as a sports coach for people with IDD. Both were fluent in English (one was a native English speaker). The two coders of the Arabic version had a background in the health professions, and one of them had worked, and currently works, as a physical therapist for people with IDD. Both were native Arabic speakers. For a subsample of 10 participants, an additional rater scored pain behavior in all four conditions (rest, 50, 200, and 400 kPa) with the Arabic version in order to calculate inter-rater reliability.
The Translation Process of the PAIC-15 from English to Arabic
In order to translate the PAIC-15 into Arabic, we followed the common forwardbackward approach [26]. First, a native Arabic speaker, who is also a health professional, translated the PAIC-15 from English into Arabic. The translator consulted with two additional native Arabic speakers who were fluent in English, and they all agreed upon the most adequate wording. The Arabic version was then translated back into English by two independent people who were native speakers of both Arabic and English. The re-translated English version was compared to the original English version, and in the case of disagreement between these versions, a meeting was initiated between the members of the team in order to investigate the reason for the disagreement and to discuss solutions (e.g., agreeing on one of the options or looking for another option). This meeting also included external advisers who were fluent in Arabic and English, for further consultation. Upon agreement on all the items by each team, the Arabic version was confirmed.
In the next step, we showed the Arabic version of the PAIC-15 as well as the original English version to a group of people, comprising health professionals and Arabic teachers who were all native speakers of Arabic and fluent in English. After a close inspection of the translated version and discussions among the group, it was decided that although the translated version was adequate for local Arabic speakers, it was not suitable for worldwide dissemination, the reason being that dialects of spoken Arabic can differ between countries and even between regions within a country.
Therefore, in order to accommodate as many Arabic-speaking countries as possible, it was decided to prepare another version adjusted to the literary register of Arabic, which is more uniform across countries. To achieve this purpose, the two PAIC-15 versions were analyzed by a native Arabic speaker who is an English literature professor with a degree in professional translation, who suggested substituting words for some of the items. This version was then analyzed by a second professional, an Arabic language professor who is an expert in dialects and writing and who made final adjustments. For example, the initial Arabic translation of the "freezing" item was replaced by a term that refers to people rather than to objects. Furthermore, the initial translation of the "jaw dropper" item was replaced by one that better reflects a movement occurring as a reaction. The back translation of this version was conducted by a professor of English literature and behavioral sciences, who is a native speaker of Arabic. The final Arabic translation was introduced to a group of health professionals (nurses, occupational therapists, and physical therapists) who were all native Arabic speakers, some of whom worked with people with IDD. Following an open discussion of all the items of this version, they found the tool universally understood, feasible, and adequate. This final tool (available at https://paic15.com/, accessed on 21 September 2021) was used to code the pain behavior of the participants following pressure stimulation.
Data Analysis
Data were analyzed with the IBM SPSS statistics software (version 25, IBM, New York, NY, USA). Whereas the sum scores of the 15 items of the PAIC-15 were considered continuous, single items were considered ordinal data. Thus, the effects of group type and stimulation condition on the sum score of the PAIC-15, as well as their interaction within each language, were analyzed with a repeated-measures ANOVA and corrected post hoc comparisons. The correlation between stimulation intensities and the PAIC-15 sum scores was calculated with Pearson's r, and a comparison between the original and translated versions of the PAIC-15 sum scores was calculated using t-tests. Agreement between the original and translated versions for all PAIC-15 items and for single items was calculated using the intraclass correlation coefficient (ICC), and comparison between versions was conducted via the Mann-Whitney U test. Given that the PAIC-15 is aimed to measure pain behavior, a reliability assessment of the total PAIC score as well as of each of the 15 items was performed for the 400 kPa (noxious) stimulation. The inter-rater reliability of the PAIC-15 Arabic version was evaluated as well. Internal consistency was evaluated with Cronbach's alpha. Two-tailed p-values are reported, and p < 0.05 was considered significant.

Results

Table 1 presents the participants. The IDD group did not differ from the TDC group in age (t-test: t = 1.81, p = 0.08) or sex distribution (Mann-Whitney U test: z = −0.09, p = 0.924). Among the IDD group, there were participants with cerebral palsy (CP), Down syndrome (DS), or an unspecified IDD (UIDD), and the majority of them had a mild impairment. Medication use among the IDD group included medication for hypothyroidism (seven participants/23.3%), psychotropic medications (6/20%), muscle relaxants (6/20%), and antiepileptics (2/6.7%).
PAIC-15 Original Version: Comparison between the IDD and TDC Groups
Figure 1 presents the sum of the PAIC-15 scores in response to pressure stimulation for the IDD and TDC groups. A repeated-measures ANOVA revealed a significant global effect of group type (F(1,35) = 13.99, p < 0.001) and of condition (F(3,105) = 13.1, p < 0.0001) on the sum PAIC-15 scores. The group type × condition interaction was also significant (F(3,105) = 3.51, p = 0.018), suggesting that the increase in PAIC-15 sum scores across the stimulus intensities was not uniform in the two groups.
Post hoc tests revealed a significant group effect within every stimulation condition: the scores for the IDD group were significantly higher in all four conditions, rest (t = 4.81, p < 0.0001), 50 kPa (t = 3.68, p < 0.001), 200 kPa (t = 2.99, p < 0.01), and 400 kPa (t = 3.47, p < 0.01) (Figure 1). Table 2 presents the frequency of occurrence of each PAIC item in the IDD compared to the TDC group during the 400 kPa stimulation. Overall, the IDD group was more behaviorally responsive during pain. The items "raising upper lip", "opening mouth", "looking tense", and "freezing" occurred significantly more frequently among the IDD group than among the TDC group. Several items appeared only in the IDD group, including guarding, shouting, groaning, and complaining. The remaining items occurred with similar frequency in both groups (Table 2).
For each PAIC item, the numbers in parentheses are % of the individuals in each group, and the asterisks mark the results of χ² tests comparing the frequencies between the IDD and TDC groups within each language (* p < 0.05; ** p < 0.01; *** p < 0.001). ^ marks the results of χ² tests comparing the total frequency (IDD and TDC) between languages (^ p < 0.05; ^^ p < 0.01). For the sum score, the numbers in parentheses are standard deviations and the asterisks mark the results of t-tests between the IDD and TDC groups within each language (* p < 0.05). The third column for each language is the average of the two groups. ICC = intraclass correlation coefficient between the languages.
PAIC-15 Original Version: Correlation between Its Scores and Stimulation Intensities within Groups
Among both the IDD and TDC groups, the PAIC-15 scores correlated moderately and significantly with pressure stimulation intensity (r = 0.55, p < 0.001 and r = 0.45, p < 0.001, respectively), suggesting a stimulus-response relation for the PAIC-15. Yet, the slopes of these stimulus-response functions differed between the groups, as can be seen in Figure 1: that of the IDD group was steeper than that of the TDC group (2.37, R² = 0.96 vs. 0.75, R² = 0.87), manifested also by the significant group × condition interaction.
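Slopes and R² values such as those reported above come from an ordinary least-squares fit of mean sum scores against stimulation condition. A minimal sketch with toy numbers (not the study data):

```python
# Ordinary least-squares line fit returning slope, intercept, and R^2.
# Illustrative only; the study's values were computed from the actual
# mean PAIC-15 scores per condition.

def fit_line(x, y):
    """Return (slope, intercept, r_squared) of the least-squares line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2
                 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1 - ss_res / ss_tot
```

A steeper slope for one group than another, at comparable R², indicates a stronger stimulus-response gradient rather than merely higher overall scores.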
The internal consistency of the original PAIC-15 version for the present sample was high (Cronbach's α = 0.93).

PAIC-15 Arabic Version: Comparison between the IDD and TDC Groups

Table 2 presents the frequency of occurrence of each PAIC item in the IDD compared to the TDC group during the 400 kPa stimulation. As found for the original PAIC version, the IDD group was more responsive during pain. The items "opening mouth" and "looking tense" were scored significantly more often among the IDD than among the TDC group. Approximately half of the items occurred only among people with IDD, and two items, shouting and mumbling, did not occur in either group (Table 2).
PAIC-15 Arabic Version: Correlation between Its Scores and Stimulation Intensities within Groups
Among both the IDD and TDC groups, the PAIC-15 scores correlated moderately and significantly with stimulation intensity (r = 0.50, p < 0.001 and r = 0.61, p < 0.0001, respectively), suggesting a stimulus-response relation for the PAIC-15. As with the original version, the slopes of these stimulus-response functions differed between the groups: that of the IDD group was steeper than that of the TDC group (1.88, R² = 0.97 vs. 0.76, R² = 0.81), as also manifested by the significant group × condition interaction (Figure 2).
The internal consistency of the translated version of PAIC-15 for the present sample was high (Cronbach's α = 0.89).
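Cronbach's alpha, used here for internal consistency, is computed from the individual item variances and the variance of the total score. A minimal sketch with made-up item scores (not the study data):

```python
# Cronbach's alpha: rows = observations, columns = items.
# alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)

def cronbach_alpha(rows):
    """rows: list of per-observation lists of item scores."""
    k = len(rows[0])

    def var(values):                     # sample variance (ddof = 1)
        m = sum(values) / len(values)
        return sum((v - m) ** 2 for v in values) / (len(values) - 1)

    item_vars = [var([row[j] for row in rows]) for j in range(k)]
    total_var = var([sum(row) for row in rows])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)
```

Values near 1 indicate that the items move together across observations, as reported for both language versions here.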
Comparison between the Original (English) and the Translated (Arabic) PAIC-15 Versions
When comparing the scores of the English and Arabic versions within groups, the scores for the IDD group (Figure 3) obtained with the Arabic version were significantly lower than those obtained with the English version, at rest (t = 4.4, p < 0.01) and at 50 kPa (t = 2.5, p = 0.049). However, the PAIC-15 scores of the IDD group during stronger stimulation were similar for the two languages, at 200 kPa (p = 0.132) and 400 kPa (p = 0.08), even though a tendency toward lower scores with the Arabic version was observed. The scores for the TDC group with the English and Arabic versions were similar (not shown).
The overall agreement between the English and the Arabic version was high (ICC = 0.89). Table 2 presents the comparison between the versions for each PAIC-15 item separately, for 400 kPa. Agreement between the two languages as measured with the ICC ranged from moderate to high, with one item showing weak agreement (looking tense). For two items, shouting and mumbling, it was impossible to calculate agreement because they appeared only when scoring was performed with the English version, and only in the IDD group at low frequency (13.3%). The frequency of appearance of most items was similar between the languages, with three exceptions: the item "raising upper lip" appeared significantly more often when scored with the English version than with the Arabic version (p < 0.01), and the items "shouting" and "mumbling" appeared only when scored with the English version.
Inter-Rater Reliability of the Arabic PAIC-15
The overall agreement between the two raters, as calculated with the ICC, was very high (ICC = 0.92). The agreement between the raters in each condition separately varied somewhat: it was high at rest (ICC = 1.0), at 200 kPa (ICC = 0.98), and at 400 kPa (ICC = 0.86). However, it was weak at 50 kPa (ICC = 0.35).
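ICC values like those above are conventionally derived from ANOVA mean squares. The sketch below computes a two-way random-effects, absolute-agreement, single-rater ICC(2,1) on toy ratings; the specific ICC variant used in the study is an assumption here, and the numbers are not the study data:

```python
# ICC(2,1): two-way random effects, absolute agreement, single rater.
# matrix: n subjects x k raters (list of lists of ratings).

def icc_2_1(matrix):
    n, k = len(matrix), len(matrix[0])
    grand = sum(map(sum, matrix)) / (n * k)
    row_means = [sum(row) / k for row in matrix]                  # subjects
    col_means = [sum(matrix[i][j] for i in range(n)) / n
                 for j in range(k)]                               # raters

    ssr = k * sum((m - grand) ** 2 for m in row_means)  # between subjects
    ssc = n * sum((m - grand) ** 2 for m in col_means)  # between raters
    sst = sum((x - grand) ** 2 for row in matrix for x in row)
    sse = sst - ssr - ssc                               # residual

    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Because this is an absolute-agreement index, a constant offset between raters lowers the ICC even when their rankings of subjects are identical.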
Discussion

The PAIC-15 as a Measure of Pain for IDD

The first aim of this study was to evaluate pain behavior among individuals with IDD using the PAIC-15. The PAIC-15 has been used to code pain behavior among older persons with dementia in both experimental and clinical settings [21,22,27]. The current study is the first in which the PAIC-15 was systematically used for the IDD population. The PAIC-15 was suitable for the task at hand: the coders could easily understand each item owing to the explanations provided on the scale, could decide whether or not a certain item appeared while they watched the videos, and could mark an item as "not scorable" when needed, an option the scale provides.
The analysis of the PAIC-15 scores showed that, as hypothesized, people with IDD exhibited increased pain behavior compared to TDCs during noxious pressure stimulation, but also during innocuous stimulation (50 kPa) and at rest. In other words, people with IDD are overall more active in the face and body compared to people with typical development. These results align with previous experimental studies in which pain behavior during experimental noxious stimulation was scored with various behavioral tools. For example, we previously analyzed the same participants using the Facial Action Coding System (FACS) and found them to exhibit more facial actions compared to the control group [23,28]. Barney et al. (2015) also reported increased pain behavior among children and adolescents (average age 14.8 years, range 8-22) with Neuronal Ceroid Lipofuscinosis compared to their siblings [29]. The same authors also reported increased reactivity to cutaneous stimuli among children with Global Developmental Delay compared to controls [30]. Increased and/or prolonged behavioral responses among children and adults with IDD have also been reported in clinical settings, for example during vaccination [31-34] and painful medical procedures [35-37]. Although newborns with Down syndrome were slower to express pain during such procedures, their behavioral and physiological responses persisted for longer durations, and some of the responses were enhanced compared to those of typically developing newborns [38].
The underlying reasons for the increased pain behavior in IDD are not fully understood. It is also not clear whether the increased pain behavior reflects an increased perception of pain among these individuals or whether it incorporates a mixture of mental experiences that include, but are not restricted to, pain, such as anticipation, anxiety, and apprehension. Nevertheless, several studies have reported decreased pain thresholds among individuals with IDD in the laboratory setting, which suggests increased sensitivity to noxious stimuli [3,4,39,40] and may explain the increased behavioral responses. Others have reported similar pain thresholds among individuals with IDD and controls [41,42], suggesting that further study is needed to resolve this issue. Yet, increased cortical responses to noxious stimuli in IDD, evident in imaging and evoked-potential studies [28,43], support the aforementioned notion that individuals with IDD may be more sensitive and/or vulnerable to pain than typically developing individuals. An explanation that is not mutually exclusive is that the TDCs had reduced responses compared to the IDD group because they were more self-aware of being videotaped. Nevertheless, any impact of self-awareness is likely to have diminished by the time of the actual experiment, owing to the long familiarization and training process in front of the camera.
Looking at the most painful stimulus (400 kPa), the items that were most frequent among the participants were three from the "facial expression" subscale (raising upper lip, opening mouth, and looking tense) and one from the "body movement" subscale (freezing). These occurred among 60-73% of the participants with IDD. The "raising upper lip" and "opening mouth" items resemble the FACS items "lips part" and "lip raiser", which have also been reported to appear frequently among individuals with IDD in experimental [24,44] and clinical settings [32][33][34]45]. Freezing has also been reported as a frequent, seemingly atypical behavior among individuals with IDD during painful insults [31,44,46], and its formal incorporation in the PAIC-15 is reinforced by the present results. The item "narrowing eyes" was also frequent among the majority of the IDD group; however, its frequency did not differ from that in the TDC group. Notably, the relatively low frequency of other PAIC-15 items such as "resisting care", "rubbing", and "shouting" was probably due to the administration of experimental pain stimuli of low to moderate intensity and to the preparation and training process the participants underwent.
The Arabic Translation of the PAIC-15
The translation from English into Arabic was performed via established procedures as detailed by Sousa et al. [26]. Given that we used video recordings of the pain responses, it was possible to have several observers code the same material using either the original English version or the newly translated Arabic version of the PAIC-15 scale. This allowed us to test the validity of the translation by way of four analyses. First, there was a moderate, significant correlation between stimulation intensity and the sum scores of the translated PAIC-15, which supports its criterion validity. Similar correlations were obtained for the original English version. Second, the internal consistency of the translated version was high (0.89), as was that of the original version, which suggests its high reliability as a scale. Third, the agreement between the two raters who used the translated version was very high for the noxious stimulation conditions (200 and 400 kPa), suggesting that the tool provides reliable scores for pain behavior. The agreement was low for the innocuous stimulus (50 kPa), perhaps because the tool is not meant to score such a condition. Alternatively, the 50 kPa stimulation may have evoked apprehension among some of the participants, the signs of which were inconsistently detected by the raters. Fourth, the overall agreement between the original and translated versions was high, and the agreement for single items ranged from moderate to high, with only one of the 15 items showing low agreement ("looking tense", ICC = 0.39). Thus, overall, the translated version demonstrated good reliability and agreement with its original version.
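Two of the psychometric quantities above, internal consistency (Cronbach's alpha, 0.89) and the criterion-validity correlation between sum scores and stimulus intensity, are straightforward to compute from raw item scores. The following is a minimal sketch with made-up scores (three items, five trials) — the actual study data and the software the authors used are not reproduced here:

```python
import statistics

def cronbach_alpha(item_scores):
    """Cronbach's alpha; item_scores holds one list per scale item,
    each containing the per-subject (or per-trial) scores."""
    k = len(item_scores)
    item_vars = sum(statistics.variance(item) for item in item_scores)
    totals = [sum(subject) for subject in zip(*item_scores)]
    return k / (k - 1) * (1 - item_vars / statistics.variance(totals))

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical PAIC-15-style item scores -- NOT the study data.
items = [
    [0, 1, 2, 3, 3],
    [0, 1, 2, 2, 3],
    [1, 1, 2, 3, 3],
]
alpha = cronbach_alpha(items)             # internal consistency
sum_scores = [sum(s) for s in zip(*items)]
intensity = [50, 50, 200, 400, 400]       # stimulus intensity per trial (kPa)
r = pearson_r(sum_scores, intensity)      # criterion validity
```

With items that rise together across trials, as here, both alpha and the intensity correlation come out high, mirroring the pattern reported for the scale.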
Interestingly, some subtle differences were observed between the English and Arabic versions. First, the item "raising upper lip" was scored significantly more often in the IDD group when using the English version compared to the Arabic version (60 vs. 16.6%). It seems unlikely that the professional or cultural background of the coders underlies this difference, although this possibility cannot be dismissed. Nevertheless, the "opening mouth" item, which reflects a somewhat related action around the mouth, exhibited no differences between the versions. Two items were scored only with the original PAIC-15: shouting and mumbling. However, these items appeared in only three to four of the 30 participants with IDD, and mumbling appeared in one participant from the TDC group. The difference in appearance between the English and the Arabic version for these items was not significant. Thus, it appears that the use of the English and Arabic versions of the PAIC-15 resulted in an overall similar identification of items that were more frequent than others, and of increased pain behavior in the IDD compared with the TDC group. The tendency toward somewhat lower scores in the Arabic version for the participants with IDD may have been coincidental, may be due to differences between the observers, or may have resulted from subtle variations in the interpretation of the items in each language. Nevertheless, the translated version led to conclusions similar to those of the English version with regard to pain behavior among individuals with IDD.
Limitations
Several limitations should be considered. First, larger groups would have allowed an evaluation of whether gender affects pain behavior as coded with the PAIC-15. Second, the current study was an experimental study; hence, the results apply to acute pain behavior. Future studies may wish to test the usability of the PAIC-15 among individuals with IDD who suffer from chronic pain or during clinical pain conditions. Third, although we intended to prepare a PAIC version that would apply to all Arabic-speaking people regardless of their country of origin (hence the translation into literary Arabic), we acknowledge that there might be some variations between countries. The comparison of PAIC-15 scores between different countries/regions may provide data that would help improve the accuracy of the translation.
Conclusions and Clinical Implications
Although people with IDD are exposed more frequently than others to painful conditions, they tend not to report pain, or their reports may be toned down as compared to those of typically developing individuals [47][48][49]. Thus, the use of tools to analyze indirect indices of pain such as pain behavior is imperative in order to tailor adequate pain management to these individuals. The PAIC-15 has been proven to be valid and reliable in measuring pain among people with cognitive impairment due to dementia, and the present results support its use among people with IDD as well. The use of the PAIC-15 for different populations with cognitive impairment enables the implementation of a standardized approach in the study of pain perception, in identifying etiology-related unique pain responses, and in studying pain management guidelines for this population. The tool is user friendly, does not require special professional training for the observer, and can detect pain in a dose-response manner as shown herein. Considering the advantages of the PAIC-15, its dissemination is called for. As Arabic is the official or co-official language in approximately 25 countries around the world, the translation of the PAIC-15 into Arabic is another step in the right direction, not only for people with IDD but also for people with other types of cognitive impairment. Hopefully, this scale will aid Arabic-speaking caregivers to identify pain, assess its magnitude, and provide proper treatment in order to prevent suffering among these individuals.
Predictors of Conversion from Radial Into Femoral Access in Cardiac Catheterization
Background Fewer bleeding complications and early ambulation make radial access a privileged route for cardiac catheterization. However, transradial (TR) approach is not always successful, requiring its conversion into femoral access. Objectives To evaluate the rate of conversion from radial into femoral access in cardiac catheterization and to identify its predictors. Methods Prospective dual-center registry, including 7632 consecutive patients undergoing catheterization via the radial access between Jan/2009 and Oct/2012. We evaluated the incidence of conversion into femoral access and its predictors by logistic regression analysis. Results The patients’ mean age was 66 ± 11 years, and 32% were women. A total of 2969 procedures (38.4%) were percutaneous coronary interventions (PCI), and the most used first intention arterial access was the right radial artery (97.6%). Radial access failure rate was 5.8%. Independent predictors of conversion from radial into femoral access were the use of short introducer sheaths (OR 3.047, CI: 2.380-3.902; p < 0.001), PCI (OR 1.729, CI: 1.375-2.173; p < 0.001), female sex (OR 1.569, CI: 1.234-1.996; p < 0.001), multivessel disease (OR 1.457, CI: 1.167-1.819; p = 0.001), body surface area (BSA) ≤ 1.938 (OR 1.448, CI: 1.120-1.871; p = 0.005) and age > 66 years (OR 1.354, CI: 1.088-1.684; p = 0.007). Conclusion Transradial approach for cardiac catheterization has a high success rate and the need for its conversion into femoral access in this cohort was low. Female sex, older age, smaller BSA, the use of short introducer sheaths, multivessel disease and PCI were independent predictors of conversion into femoral access.
Background
For the last few decades, the transfemoral approach has been the preferred access for invasive cardiac procedures. However, recent evidence from several observational and randomized trials favors the transradial approach. Radial artery access has been shown to decrease vascular complications, with fewer access-site bleeding complications, earlier patient ambulation, shorter length of hospital stay and lower hospital costs [1][2][3][4][5][6][7][8] . Recently, the large RIFLE study, in patients with ST-elevation myocardial infarction (STEMI), reported a statistically significant reduction in cardiac mortality with the radial approach 9 . Despite its proven clinical benefit, many interventional cardiologists perceive that the decrease in vascular complications is offset by technical difficulties and a longer learning curve, which might explain why the transradial approach is still underused 5,10 . On the other hand, when radial access fails, the most common alternative route is the femoral one 11,12 . In this study, we aimed to evaluate the rate of conversion from radial into femoral access in cardiac catheterization and to identify its clinical, demographic and procedural predictors.
Study design and patient population
In a prospective registry of 14750 consecutive patients from two centers, who underwent cardiac catheterization for diagnostic or interventional coronary procedures between January 2009 and October 2012, we selected for the purpose of this analysis all consecutive patients in whom the first intention was to use the radial artery (n = 7664). Of these patients, we excluded those in whom the radial access failed and the alternative choice was the contralateral radial (n = 26), the humeral (n = 4) or the cubital artery (n = 2) (Figure 1).
Baseline characteristics, indication for and type of the procedure performed, procedural devices, details of coronary intervention, need for access site crossover and chosen alternative access were prospectively recorded.
Written informed consent was obtained from all patients as per protocol.
Transradial technique
During the study period, in the two institutions involved in this study, there were nine invasive cardiologists with high experience (> 100 procedures/year) in radial artery catheterization and three fellows in training. The choice of the arterial access was left to each operator's discretion. Either Allen's test or oximetry/plethysmography (the Barbeau test) was used, as per protocol, in all patients to assess radial artery patency and the adequacy of dual hand blood supply 13,14 .
Using a dedicated arm board, with the patient's wrist slightly hyperextended, the right or left radial artery was cannulated with a short 20-gauge needle after administration of 2 to 3 mL of local anaesthetic. A straight 0.025-inch guide wire was then advanced into the radial arterial lumen through the needle, and a specific transradial 5F or 6F hydrophilic introducer sheath (Terumo Medical Corporation, Elkton, MD) was placed into the radial artery. Both long (25-cm) and short introducer (10-cm) sheaths were used at the operator's discretion.
An initial intra-arterial bolus of 5000 U of unfractionated heparin was administered to all patients. Monitoring of coagulation with activated clotting time (ACT) was used routinely during percutaneous coronary intervention (PCI) in the centers included in this registry. In case of ad hoc PCI, an additional bolus of unfractionated heparin was given to achieve an ACT > 250 seconds. The use of additional glycoprotein IIb/IIIa inhibitors was left to the operator's discretion. The radial sheath was removed immediately in the catheter laboratory following completion of the procedure, and hemostasis was achieved by application of an adjustable plastic clamp on the radial artery (TR Band™, Terumo Co., Tokyo, Japan). The clamp was gradually released over 2 to 3 hours, while monitoring for access site bleeding or hematoma, and removed after satisfactory access site hemostasis had been achieved.
As per our routine, all patients undergoing elective or ad hoc PCI were preloaded with clopidogrel before the procedure (75 mg in the case of chronic treatment with clopidogrel > 10 days, or 600 mg, if not).
Definitions and statistical analysis
Procedural success was defined as successful completion of the coronary procedure (diagnostic or interventional) via the initial radial access.
Categorical variables are expressed as absolute values and percentages, and continuous variables as mean ± SD or median (interquartile range). Continuous variables were tested for normal distribution using the Kolmogorov-Smirnov test and for equality of variances using the Levene test.
Baseline and procedural characteristics were compared using the Fisher exact test or chi-square test for categorical variables and the Student t test for continuous variables. Multivariate logistic regression was used to determine the independent predictors of conversion from radial into femoral access. The independent variables for entry into the multivariate model were selected according to their significance in univariate testing (those with p < 0.1 in univariate analysis were included). The final model was built by forward stepwise variable selection, with entry and exit criteria at the p = 0.05 and p = 0.1 levels, respectively. The goodness of fit of the model was evaluated by calculating the Hosmer-Lemeshow statistic.
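The Hosmer-Lemeshow statistic mentioned above is simple to compute once a model's predicted probabilities are in hand: subjects are sorted by predicted risk, split into (typically ten) groups, and observed versus expected event counts are compared per group. The sketch below uses simulated, deliberately well-calibrated data, not the registry:

```python
import random

def hosmer_lemeshow(probs, outcomes, groups=10):
    """Hosmer-Lemeshow chi-square statistic: sort subjects by predicted
    probability, split into risk groups, and sum (O - E)^2 / variance
    per group. Compare against chi-square with groups - 2 df."""
    paired = sorted(zip(probs, outcomes))
    n = len(paired)
    stat = 0.0
    for g in range(groups):
        chunk = paired[g * n // groups:(g + 1) * n // groups]
        if not chunk:
            continue
        ng = len(chunk)
        observed = sum(y for _, y in chunk)     # events seen in this group
        expected = sum(p for p, _ in chunk)     # events the model predicts
        variance = expected * (1 - expected / ng)
        if variance > 0:
            stat += (observed - expected) ** 2 / variance
    return stat

# Simulated, well-calibrated predictions: outcomes drawn with exactly the
# predicted probabilities, so the statistic should stay modest.
random.seed(0)
probs = [random.random() for _ in range(200)]
outcomes = [1 if random.random() < p else 0 for p in probs]
hl = hosmer_lemeshow(probs, outcomes)
```

A large statistic relative to the chi-square reference distribution signals poor calibration; a perfectly calibrated degenerate case (all probabilities 0 or 1, matched by the outcomes) yields a statistic of zero.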
A significance level of 0.05 with two-sided testing was used, and all analyses were done with the Statistical Package for the Social Sciences (SPSS) software, version 19.
Results
A total of 7632 patients were included in the study. The baseline clinical and procedural characteristics are described in Table 1. The mean age of the study population was 66 ± 11 years, and 32% were women. About one third were diabetic, 73.3% had hypertension, 62.7% had hypercholesterolemia and 41.9% had smoking habits. The incidence of prior PCI was 22.2%, whereas 1.7% had had prior coronary artery bypass grafting. Of the total, 2969 procedures (38.4%) were PCIs and the right radial access was the first choice in most patients (97.6%).
Conversion from the initial radial access into femoral access occurred in 5.8% of all patients. Univariate predictors of conversion from radial into femoral access are described in Table 1. Compared with the successful transradial access group, patients in the transradial access failure group were significantly older (mean age of 69 ± 12 years vs. 65 ± 11 years, p < 0.001), more likely to be women (46.7% vs. 30.7%, p < 0.001), to have chronic kidney disease (7.0% vs. 4.0%, p = 0.002) and to have a smaller body surface area (mean BSA of 1.82 ± 0.18 vs. 1.87 ± 0.19, p < 0.001). Conversion into femoral access was also more frequent when the procedure was a PCI (7.4% vs. 4.8% in diagnostic procedures, p < 0.001), in patients with multivessel disease (8.8% vs. 5.2%, p = 0.001) and when shorter introducers were used (8.0% vs. 3.6% with long introducers, p < 0.001).
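Univariate associations such as these are typically summarized as an odds ratio from a 2 × 2 table. A minimal sketch with the Woolf (log-scale) 95% confidence interval follows; the counts are only roughly reconstructed from the percentages reported above (46.7% women among the ~443 crossovers, 32% women overall, n = 7632) and are not the actual Table 1 cells:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf 95% CI for a 2x2 table:
    a = crossover, exposed;   b = no crossover, exposed
    c = crossover, unexposed; d = no crossover, unexposed"""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Illustrative counts for female sex vs. crossover, reconstructed
# approximately from the reported percentages.
or_, lo, hi = odds_ratio_ci(207, 2235, 236, 4954)
```

Because the interval excludes 1, the (illustrative) association would be judged significant; the multivariable ORs reported later adjust such crude estimates for the other predictors.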
After multivariable adjustment (Figure 2), independent predictors of conversion from radial into femoral access were female sex (OR 1.569, CI: 1.234-1.996; p < 0.001), the use of short introducer sheaths (OR 3.047, CI: 2.380-3.902; p < 0.001), PCI (OR 1.729, CI: 1.375-2.173; p < 0.001), multivessel disease (OR 1.457, CI: 1.167-1.819; p = 0.001), BSA ≤ 1.938 (OR 1.448, CI: 1.120-1.871; p = 0.005) and age > 66 years (OR 1.354, CI: 1.088-1.684; p = 0.007).
Discussion
In this study, we sought to identify possible predictors of conversion from radial into femoral access in cardiac catheterization.
Our main findings were: (1) a very low radial access failure rate (5.8%) in contemporary practice by intermediate (60-100 procedures/year) and high (> 100 procedures/year) volume transradial operators with standard radial sheaths and catheters; (2) the most common alternative access was the femoral artery; (3) independent predictors of radial access failure were the use of short introducers, PCI, female sex, multivessel disease, lower BSA and older age; and (4) both a smoking history and the use of larger sheaths (≥ 6F) were associated with radial access success.
Several aspects make radial access a privileged route. It is feasible, as the artery is superficial and easy to puncture and compress, and it causes fewer complications at the vascular access site than femoral access. Likewise, it offers superior comfort for the patient in the post-procedural period, with earlier ambulation and higher cost-effectiveness 15 . Recent studies have shown a mortality benefit in STEMI patients 1,9 . Nevertheless, potential procedural difficulties still intimidate some operators, and radial access success is highly dependent on the operator's experience and skills. Failure can be due to inability to gain radial artery access or inability to successfully engage the coronary arteries, owing to radial spasm, anatomic variations or severe tortuosity in the radial, brachial, or subclavian arteries 11,[16][17][18] .
Over the years, as expected, the use of radial access increased gradually, from 25% in 2009 to 76% in 2012 ( Figure 3A). Focusing on radial access failure rates, one could anticipate a decrease with greater experience. Nonetheless, the failure rate was higher in the last years, which could be explained by greater operator experience being offset by the widespread use of the technique, even in situations less favorable to the transradial approach ( Figure 3B).
Procedural failure lessens with experience, ultimately occurring with a frequency of less than 5% 19-21 . Our higher failure rate (5.8%) could be partly explained by the participation of fellows in training. Moreover, in the acute coronary syndrome setting, as in the RIVAL trial 1 , a higher radial access failure rate has been reported (about 7%). After a systematic review of 23 randomized studies published up to 2007, comparing radial with femoral access in diagnostic and/or therapeutic coronary procedures, Jolly et al. 3 reported a transradial approach failure rate of 5.9%. Our radial access failure rate (5.8%) was similar, although we must consider that, in that meta-analysis, 85.3% of the procedures were PCIs, whereas in our population the percentage was substantially lower (38.4%). In line with this remark, a non-randomized study performed in 2009 12 , including 2100 patients undergoing PCI in the acute coronary syndrome setting, reported a radial access failure rate of 4.6%.
Comparing with the Brazilian experience, the study by Andrade et al. 21 showed a very low failure rate (2.5%), but with a substantially reduced use of the radial access (< 15%), which implies a highly selected population in which this approach was used and may justify the high success rate.
The choice of the catheterization approach (femoral, radial or brachial) is usually a function of operator, institution, and patient preference. Despite some advantages related to radial access, the femoral approach is still widely used, since many operators were initially trained in this access and it also has several advantages, such as allowing the use of larger sheaths (useful for procedures requiring greater catheter support and/or bulkier devices). In addition, femoral access has been associated with less radiation time and contrast use [22][23][24][25][26] .
We found that the use of short introducers was linked to radial access failure. This could be explained by a potential selection bias (center preference concerning introducer choice) or by the fact that long sheaths, once inserted, protect almost the entire length of the radial artery from further manipulation. Nevertheless, in a previous study, no association was found between sheath length and radial artery spasm 27 . We also found that the need for PCI and the presence of multivessel disease were associated with radial access failure; both are surrogates for a more challenging procedure, with more catheter manipulation and exchanges, which would probably be more difficult with the transradial approach, as has also been found in other studies [28][29][30] .
Female sex, as well as smaller BSA and older age, were found to be independent predictors of transradial cardiac catheterization failure. This is likely related to the smaller size of the radial artery, increased subclavian tortuosity, small aortic roots and short ascending aortas, preventing stable guide catheter coronary cannulation during the procedure, as previously described by other authors 1,11,19 .
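The paper does not state which formula was used to derive the BSA values behind the 1.938 m² cutoff, but the DuBois and DuBois formula is the most common choice in cardiology. Purely as an illustration (the heights and weights below are invented), it shows how smaller patients fall below the cutoff:

```python
def bsa_dubois(height_cm, weight_kg):
    """DuBois & DuBois body surface area estimate (m^2):
    BSA = 0.007184 * height^0.725 * weight^0.425."""
    return 0.007184 * height_cm ** 0.725 * weight_kg ** 0.425

# Invented example patients, not study data.
bsa_small = bsa_dubois(165, 62)   # a smaller patient, below 1.938 m^2
bsa_large = bsa_dubois(180, 85)   # a larger patient, above it
```

A ~165 cm, 62 kg patient lands around 1.68 m², below the cutoff associated with crossover, while a 180 cm, 85 kg patient lands above it.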
One interesting finding in our results was the association between a smoking history and a lower radial access failure rate. In line with the smoking paradox for coronary artery disease, this could be explained by the younger age of the smoking patients undergoing catheterization (in our study, the mean age of the smoking patients was 61 ± 11 vs. 69 ± 11 years, p < 0.001), as has also been found in other studies 31 . Nevertheless, smoking remained an independent predictor of radial access success after multivariate analysis.
Finally, the association between larger sheaths (≥ 6F) and radial access success might be due to selection bias, because operators would select smaller-diameter sheaths for patients in whom a more difficult radial access procedure was anticipated, such as smaller and older patients, in line with international recommendations 27 .
Study limitations
The present study is a registry from two high-volume centers, with bias in the selection of patients for radial access. The procedures were performed by different operators, with variable degrees of experience, and it was not possible to evaluate the impact of the operator's experience on failure rate.
Our results reflect the radial access learning curve, since, in the first 2 years, less than 50% of the procedures were performed via radial access; thus, predictors of radial access failure in more experienced centers/operators might be different.
Conclusions
Transradial approach for cardiac catheterization was associated with a high success rate. The predictors of conversion into femoral access were female sex, older age, smaller BSA, multivessel disease, PCI and the use of short introducers.
These findings could contribute to improving patient selection and increasing radial access success.
Candidate inflammatory biomarkers display unique relationships with alpha-synuclein and correlate with measures of disease severity in subjects with Parkinson's disease
Background Efforts to identify fluid biomarkers of Parkinson's disease (PD) have intensified in the last decade. As the role of inflammation in PD pathophysiology becomes increasingly recognized, investigators aim to define inflammatory signatures to help elucidate underlying mechanisms of disease pathogenesis and aid in identification of patients with inflammatory endophenotypes that could benefit from immunomodulatory interventions. However, discordant results in the literature and a lack of information regarding the stability of inflammatory factors over a 24-h period have hampered progress. Methods Here, we measured inflammatory proteins in serum and CSF of a small cohort of PD (n = 12) and age-matched healthy control (HC) subjects (n = 6) at 11 time points across 24 h to (1) identify potential diurnal variation, (2) reveal differences in PD vs HC, and (3) correlate with CSF levels of amyloid β (Aβ) and α-synuclein in an effort to generate data-driven hypotheses regarding candidate biomarkers of PD. Results Despite significant variability in other factors, a repeated measures two-way analysis of variance by time and disease state for each analyte revealed that serum IFNγ, TNF, and neutrophil gelatinase-associated lipocalin (NGAL) were stable across 24 h and different between HC and PD. Regression analysis revealed that C-reactive protein (CRP) was the only factor with a strong linear relationship between CSF and serum. PD and HC subjects showed significantly different relationships between CSF Aβ proteins and α-synuclein and specific inflammatory factors, and CSF IFNγ and serum IL-8 positively correlated with clinical measures of PD. Finally, linear discriminant analysis revealed that serum TNF and CSF α-synuclein discriminated between PD and HC with a minimum of 82% sensitivity and 83% specificity.
Conclusions Our findings identify a panel of inflammatory factors in serum and CSF that can be reliably measured, distinguish between PD and HC, and monitor inflammation as disease progresses or in response to interventional therapies. This panel may aid in generating hypotheses and feasible experimental designs towards identifying biomarkers of neurodegenerative disease by focusing on analytes that remain stable regardless of time of sample collection. Electronic supplementary material The online version of this article (doi:10.1186/s12974-017-0935-1) contains supplementary material, which is available to authorized users.
Background
A growing body of literature supports a role for peripheral and central immune cells and inflammation in the pathogenesis and progression of neurodegenerative diseases, including Parkinson's disease (PD) [1][2][3][4][5]. Yet the identification of biomarkers for PD to date has focused almost exclusively on neuronal proteins (e.g., α-synuclein, tau, and β-amyloid), because these proteins are known to play a role in the fundamental pathophysiology of neurodegenerative diseases. In PD patients, levels of α-synuclein have been found to be decreased in the CSF compared to those of HC individuals [6], a phenotype that does not change significantly within the first months after diagnosis [7] and suggests these proteins are not being efficiently removed from the brain parenchyma. However, inflammatory markers of disease have gained interest as potential earlier indicators of neurodegenerative disease processes and may have predictive value. Age is the most common risk factor for the development of PD; like other immune cells, microglia display age-dependent changes in activation and regulation [8]. Post mortem analyses of CSF and brain tissue consistently indicate the presence of activated microglia, increased levels of cytokines likely to be microglia-derived, increased NFkB activation, and oxidative damage at autopsy [9][10][11][12]. Brain imaging of live subjects confirms increased inflammation in the pons, basal ganglia, striatum, frontal cortex, and temporal cortex in PD [13][14][15], and pre-clinical animal models [16][17][18][19][20][21][22][23][24] and clinical studies [9,10] demonstrate that inflammation-derived oxidative stress and cytokine-dependent toxicity contribute to nigrostriatal pathway degeneration [3,25,26].
These inflammatory factors are considered to derive from chronically activated microglia and invading immune cells responding to aggregation of toxic α-synuclein oligomers, early neuronal dysfunction [25], and dying neurons in later disease stages [2], and play a pivotal role in the initiation and propagation of illness.
Although there is no consensus on the role of inflammation as a primary, causative, or early factor in PD versus a secondary byproduct of disease, and the role of inflammation in PD is still a hypothesis, there is great interest in establishing which inflammatory markers can help stage disease and identify disease endophenotypes to inform immunomodulatory interventions, as has been effectively demonstrated in multiple sclerosis [27], for example. To this end, several studies have sampled biofluids for correlation with disease state. However, these data are conflicting and difficult to interpret. A recent meta-analysis [28] revealed that not all studies indicate elevation of key inflammatory factors, but most report significant differences between PD and healthy controls, suggesting that PD is accompanied by a dysregulated inflammatory response. Specifically, several studies reported increased inflammation in CSF [29][30][31][32] and serum [33] of PD subjects, but there is disagreement regarding the direction of change for several markers. For instance, serum IFNγ has been reported to be increased [33], decreased [34], and not different [28] in PD subjects at various stages compared with HCs. Similarly, serum TNF has been reported to be increased in PD subjects compared with age-matched HC subjects [33][34][35][36][37], but decreased serum TNF levels [38] have also been reported. These disparities may be due to differences in disease severity, other comorbidities, different operating protocols, individual variability, differences in sample processing, analytical methodologies, and almost certainly differences in sampling times coupled with diurnal fluctuations of inflammatory proteins.
Herein, we describe CSF and serum inflammation at 11 time points across a 24-h period (spanning, in effect, 26 h) in PD patients and age-matched HC subjects in an effort to identify a subset of readily detectable and stable inflammatory factors. The first part of this study evaluated the stability of key CSF biomarkers (α-synuclein, DJ-1, and Aβ1-42) in young healthy volunteers in a two-period study using the same sampling schedule as outlined here; a minimum of two weeks separated the repeat sampling of blood and CSF [43]. Given that endogenous cortisol levels peak in the morning [39] and can affect cytokine levels [40,41], we hypothesized that a subset of inflammatory factors in central and/or peripheral compartments display normal diurnal variations in HC subjects and that these may be disrupted in patients with PD. We further hypothesized that a subset of inflammatory markers does not display diurnal variability, may differ between HC and PD subjects, and may represent potential disease-specific inflammatory markers. Our goals were to identify the variability in potential candidate biomarkers of inflammation to power larger studies and generate hypotheses, to examine relationships between CSF and serum inflammation for each inflammatory factor, to examine associations between peripheral and central inflammatory factors and CSF levels of α-synuclein, β-amyloid 1-40 (Aβ 40 ), and 1-42 (Aβ 42 ), and to define an ideal set of inflammatory factors in serum and CSF to be used in conjunction with levels of CSF α-synuclein and Aβ proteins to distinguish PD from HC subjects with sensitivity and specificity.
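Distinguishing PD from HC with a small analyte panel, as described above, can be done with a linear discriminant: project each subject's two measurements onto a single axis and apply a threshold. The sketch below is a generic two-feature Fisher discriminant with invented (serum TNF, CSF α-synuclein)-style values; the authors' actual LDA procedure is not specified in this section, so this only illustrates the idea:

```python
import statistics

def fisher_lda_2d(controls, patients):
    """Fisher's linear discriminant for two features:
    w = S_pooled^-1 (mu_patients - mu_controls), with the decision
    threshold at the midpoint of the projected class means."""
    def mean2(pts):
        return (statistics.mean(p[0] for p in pts),
                statistics.mean(p[1] for p in pts))
    def scatter(pts, mu):
        sxx = sum((p[0] - mu[0]) ** 2 for p in pts)
        syy = sum((p[1] - mu[1]) ** 2 for p in pts)
        sxy = sum((p[0] - mu[0]) * (p[1] - mu[1]) for p in pts)
        return sxx, sxy, syy
    m0, m1 = mean2(controls), mean2(patients)
    s0, s1 = scatter(controls, m0), scatter(patients, m1)
    sxx, sxy, syy = s0[0] + s1[0], s0[1] + s1[1], s0[2] + s1[2]
    det = sxx * syy - sxy * sxy            # invert the 2x2 pooled scatter
    dx, dy = m1[0] - m0[0], m1[1] - m0[1]
    w = ((syy * dx - sxy * dy) / det, (sxx * dy - sxy * dx) / det)
    threshold = (w[0] * (m0[0] + m1[0]) + w[1] * (m0[1] + m1[1])) / 2
    return w, threshold

# Made-up (serum TNF, CSF alpha-synuclein) pairs -- NOT the study values;
# they just encode "higher TNF, lower alpha-synuclein" in patients.
hc = [(3.0, 1.9), (3.2, 2.1), (2.8, 2.0), (3.1, 1.8), (2.9, 2.2), (3.3, 2.0)]
pd = [(4.1, 1.2), (4.5, 1.0), (3.9, 1.3), (4.2, 1.1), (4.4, 0.9), (4.0, 1.4)]
w, t = fisher_lda_2d(hc, pd)
score = lambda p: w[0] * p[0] + w[1] * p[1]
sens = sum(score(p) > t for p in pd) / len(pd)    # true-positive rate
spec = sum(score(p) <= t for p in hc) / len(hc)   # true-negative rate
```

With cleanly separated toy classes the training-set sensitivity and specificity are perfect; on real, noisier data one would expect figures closer to the 82%/83% reported in the abstract, ideally estimated by cross-validation.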
Subject inclusion/exclusion criteria
A total of 18 PD and 8 HC subjects were screened; of these, 12 subjects with PD (as diagnosed by a movement disorder-trained physician based on widely employed criteria [42]) and 6 age-matched HC subjects were included. All study participants completed the study except one PD subject who dropped out of CSF collection due to minor discomfort. There were no significant differences in age, weight, or body mass index (BMI) between the PD and HC groups. PD subjects must have had at least two of the following symptoms: resting tremor, bradykinesia, or rigidity (at least one being resting tremor or bradykinesia), a diagnosis of PD for ≤ 10 years, and a Hoehn and Yahr (H&Y) stage of I-III. HC subjects with a current clinically significant neurological disorder and/or a first-degree relative with idiopathic PD were excluded. Twenty-eight days prior to sample collection and 1 day prior to sample collection, subjects underwent physical and neurological examination, Unified Parkinson's Disease Rating Scale (UPDRS) assessment, H&Y assessment, spinal x-ray (unless taken within the last 12 months), vital signs, medical and medication history, 12-lead electrocardiogram (ECG), safety laboratory assessments, coagulation screening, urine drug screening, urine ethanol screening, Hepatitis B and C screening, HIV testing, and serum pregnancy testing (females). Study participants were not taking prescription or non-prescription drugs within 7 days or five half-lives (whichever was longer) of sample collection, with the exception of PD subjects taking a stable dose (for 4 or more weeks) of PD medication (amantadine, dopamine agonists, L-DOPA, and/or MAO-B inhibitors) prior to sample collection.
Study design
Subjects were admitted to the clinic the day prior to sample collection for baseline assessments, as described above. Lumbar and venous catheters were inserted, and CSF and blood were collected concurrently over 26 h (i.e., at ~5:30 AM (time 0, within 30 min of catheterization) and at 1, 2, 4, 6, 10, 12, 16, 20, 24, and 26 h post catheterization). Vital signs were taken regularly, and subjects remained in the clinic for at least 24 h following sample collections for monitoring and a final neurological examination before discharge. Study procedures were safe and well tolerated with no serious adverse events related to the procedure [43].
Sample collection and handling
The samples were accessed through the 24 h Biofluids bank, a subset of a large Michael J. Fox Foundation biospecimen bank available to the community (https://www.michaeljfox.org/page.html?id=193&navid=databiospecimens). CSF samples were collected via intrathecal catheter connected to a peristaltic roller pump. Sample collection (6 ml) started approximately 12 min before each identified sampling time and was concluded approximately 6 min after. The catheter was cleared of any residual CSF before sample collections to ensure fresh sampling. CSF samples were centrifuged at 1600 × g for 15 min at 4°C, supernatants were transferred into six 0.5-mL aliquots and three 1.0-mL aliquots in polypropylene tubes, and stored at −80°C or on dry ice within 1 h of collection. CSF samples contaminated with blood were discarded. All CSF samples were visually colorless, and red blood cell (RBC) analysis from the 2-ml discard sample was conducted at each time point, within 1 h of sample collection. Whole blood (10 ml) was collected in red-top vacutainers and centrifuged at 1350 × g for 15 min at 4°C; serum was transferred into six 0.5-mL aliquots and three 1.0-mL aliquots in polypropylene cryotubes and stored at −80°C until shipment on dry ice. All subjects had CSF assessment for blood contamination (hemoglobin ELISA) as part of the safety assessments, typically twice during each catheterization period. In general, findings not noted in the clinical study report indicate a clinically insignificant RBC count, which was the case in ~98% of the samples.
Statistical analysis and data presentation
As an ad hoc means of determining stability over time, we performed linear regression of each analyte over time for each individual. Inflammatory factors with no significant association with time were classified as "stable," and factors with a significant association with time were classified as "positive" or "negative" according to the sign of the slope estimate (Additional file 1). A Mann-Whitney U rank-sum nonparametric test was used to compare levels of each analyte between HC and PD subjects at time 0 (Fig. 2 and Additional file 2). We used orthogonal polynomials to examine diurnal patterns, in particular the quadratic effect (Table 1). We applied repeated measures ANOVA to assess the effects of time and to examine differences between HC and PD participants (Fig. 2 and Additional file 3). We also investigated the extent to which levels of an inflammatory marker in serum correlated with its levels in the CSF (Additional file 4). We did not adjust for multiple comparisons given the small group sizes and the exploratory nature of this study, which was intended to power a larger study. We used repeated measures ANOVA to examine the relationship between serum and CSF inflammatory factor levels across time for each subject, as well as those between serum and CSF inflammatory markers with CSF α-synuclein and Aβ proteins across time (Additional file 5). We used ANCOVA to determine these relationships at baseline (Additional file 6, Fig. 3, and Additional file 7) as well as relationships with UPDRS and its components (Fig. 4, Additional file 8). Data are reported as the mean protein concentration (pg/mL) ± SEM. Finally, we used linear discriminant analysis (LDA) to determine the ability of biomarker groups to discriminate between HC and PD participants (Fig. 4 and Additional file 9). p ≤ 0.05 was considered significant.
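The per-subject trend classification described above can be sketched as follows (a minimal illustration, not the authors' code; the data values and the 0.05 threshold are assumptions for demonstration):

```python
# Minimal sketch of the stability classification: regress one subject's
# analyte levels on time and label the trend by slope sign and significance.
# The sampling times match the study schedule; the level values are made up.
import numpy as np
from scipy import stats

def classify_trend(times_h, levels_pg_ml, alpha=0.05):
    """Label a subject's time course as stable, positive, or negative."""
    result = stats.linregress(times_h, levels_pg_ml)
    if result.pvalue > alpha:          # no significant association with time
        return "stable"
    return "positive" if result.slope > 0 else "negative"

# 11 sampling times (hours post catheterization), as in the study design
times = np.array([0, 1, 2, 4, 6, 10, 12, 16, 20, 24, 26], dtype=float)

flat = np.array([2.1, 2.0, 2.2, 2.1, 1.9, 2.0, 2.2, 2.1, 2.0, 2.1, 2.0])
rising = 1.0 + 0.1 * times  # deterministic upward trend

print(classify_trend(times, flat), classify_trend(times, rising))
```

Applied per analyte and per subject, labels of this kind can then be tallied into the "stable in greater than two-thirds of subjects" summaries reported below.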
Results
While multiple groups have reported alterations in inflammatory markers in the blood and CSF of individuals suffering from neurodegenerative diseases like PD [1], results are discordant [28]. To investigate the extent to which variability between individuals, sampling times, or diurnal patterns in specific inflammatory proteins in the blood or CSF contribute to the lack of consensus, and to generate hypotheses, we pursued a set of specific questions that required a complex set of comparisons of intra-individual and inter-group values ( Fig. 1) to investigate the extent of association between inflammatory markers and other parameters of interest.
We measured a subset of inflammatory factors (i.e., IL-1β, IL-2, IL-6, IL-8, IL-4, IL-10, IL-12, IL-13, IFNγ, TNF, NGAL, and CRP) in the serum and CSF of PD and age-matched HC subjects. IL-1β, IL-2, IL-4, IL-10, IL-12, and IL-13 were not reliably above the lower limit of detection for the MSD assay and therefore were excluded from analyses. We also examined the relationship between the consistently detectable inflammatory factors and CSF levels of more established neurodegenerative disease-specific biomarkers, specifically α-synuclein, Aβ 40 , and Aβ 42 . Finally, we examined the relationship between serum and CSF factors and PD severity and duration.
Serum IFNγ, IL-8, NGAL, and TNF and CSF IL-8, NGAL, and TNF levels were relatively stable across the day in most PD and HC individuals
Clinical heterogeneity and inter-individual variability are significant challenges to interpreting clinical data and critical determinants of statistical power. Thus, we first determined individual variability in inflammatory factors across 24 h. Regression analyses revealed marked variability across time in multiple inflammatory factors in both HC and PD subjects (Additional file 1). The most stable analytes across time in the majority of HC and PD subjects were serum IFNγ, IL-8, NGAL, and TNF and CSF IL-8, NGAL, and TNF. These inflammatory factors were stable in greater than two-thirds of subjects (Additional file 1). Serum and CSF NGAL were stable in 50% or more of both PD and HC subjects, indicating relative stability. While CSF CRP varied across time in slightly more than half of PD subjects (i.e., CRP levels increased over time in 58.3% of PD subjects), CSF CRP showed minimal variability in the majority of individuals in the HC population (i.e., CRP levels increased over time in only 33.3% of HC subjects). Serum IL-6, CSF IFNγ, and CSF IL-6 displayed an increase over the 24-h period in greater than two-thirds of HC subjects, whereas in PD subjects these analytes were mostly stable across time. [Displaced caption: An analysis of linear and quadratic trends indicated that levels of CSF inflammation rise and fall across the day in PD subjects more than HC. TNF, CRP, IL-8, α-synuclein, and Aβ 42 levels in the CSF were best fit with a parabolic (not straight) line in PD, but not HC subjects. CSF IL-6 levels were best fit with a parabolic line in HC, but not PD subjects.]
Serum TNF, NGAL, and IFNγ are different at baseline in PD versus HC and remain relatively stable across the day
Despite finding significant within-group variability and a relatively small sample size, we found significant differences between serum TNF and serum NGAL levels in PD versus HC at baseline (time 0). [Displaced Fig. 1 caption: Key questions: inflammatory markers and neurodegeneration biomarkers in PD and HC subjects across time. Our analysis addressed the following a priori specified questions aimed at establishing candidate inflammatory biomarkers to focus a larger cross-sectional or longitudinal study: the extent to which each analyte varied over time in serum and CSF, the extent of correlation between serum and CSF inflammatory factors, the extent of correlation between levels of known markers of neurodegeneration and serum and CSF inflammatory factors, and identification of analytes capable of discriminating between HC and PD subjects with high sensitivity and specificity.]
Inflammatory proteins in CSF display greater fluctuation in PD versus HC across the day
We next determined whether inflammatory analytes, α-synuclein, and/or Aβ 40 and Aβ 42 levels fluctuate across the day by examining quadratic trends for each analyte across time (Table 1). The pattern of CSF TNF (p = 0.02), CRP (p = 0.03), IL-8 (p = 0.02), and Aβ 42 (p = 0.006) levels across the day were best fit by a parabolic (not linear) relationship in PD subjects, but not in HCs (p = 0.07, p = 0.25, p = 0.31, p = 0.06, respectively), indicating that the levels of these CNS proteins rise and fall across the day more in PD subjects versus HC. CSF IL-6 levels rise and fall across the day in HC subjects (p = 0.01), but not in PD (p = 0.12), and CSF Aβ 40 levels across the day were best fit by a quadratic trend in both PD (p = 0.02) and HC (p = 0.004) subjects.
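A quadratic ("rise and fall") trend of this kind can be tested by fitting level ~ 1 + t + t² and t-testing the quadratic coefficient. The sketch below uses ordinary polynomials rather than the orthogonal polynomials used in the paper, and synthetic values (assumptions for illustration, not study data):

```python
# Sketch: detect a parabolic diurnal pattern via the quadratic term of an
# OLS fit. Synthetic data; coefficients are invented for illustration.
import numpy as np
from scipy import stats

def quadratic_trend_p(times_h, levels):
    """Two-sided p-value for the quadratic term in level ~ 1 + t + t^2."""
    t = np.asarray(times_h, float)
    y = np.asarray(levels, float)
    X = np.column_stack([np.ones_like(t), t, t**2])
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = len(y) - X.shape[1]
    sigma2 = resid @ resid / dof                    # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)           # coefficient covariance
    t_stat = beta[2] / np.sqrt(cov[2, 2])
    return 2 * stats.t.sf(abs(t_stat), dof)

times = np.array([0, 1, 2, 4, 6, 10, 12, 16, 20, 24, 26], dtype=float)
# A profile that peaks mid-day, plus a small fixed perturbation
parabolic = 1.0 + 0.3 * times - 0.011 * times**2
parabolic += np.array([.02, -.01, .03, -.02, .01, .02, -.03, .01, -.01, .02, -.02])

print(quadratic_trend_p(times, parabolic) < 0.05)
```

With unevenly spaced sampling times like these, orthogonal polynomials (as used in the paper) decorrelate the linear and quadratic terms; the plain-polynomial t-test above is the simpler equivalent for a single analyte.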
PD and HC subjects displayed different relationships between serum and CSF inflammation across the day
Next, we investigated the extent to which levels of an inflammatory marker in serum correlated with its levels in the CSF (Additional file 4). Serum CRP and CSF CRP significantly co-varied across time in one group but not the other. Serum IL-6 and CSF IL-6 were significantly related in both groups, and the relationships were different in PD and HC subjects. Serum NGAL and CSF NGAL displayed a significant association that was not different between PD and HC subjects. Serum IFNγ, serum IL-8, and serum TNF did not covary with CSF analytes but were different between the two groups across time (see Additional file 4 for statistics). These data indicate that CSF and serum levels of IL-6 and CRP display significant correlations across a 24-h period and that they do so in a unique way in HC and PD subjects. [Displaced figure legend: see Table 2 for all data. *Indicates significant change over time, and + indicates significant difference between PD and HC. *** and +++ indicate p < 0.0001, ** and ++ indicate p < 0.01, * and + indicate p ≤ 0.05, and ^ indicates a significant difference between PD and HC at time 0 (p < 0.05). Superscript numbers indicate the sampling hour where significant changes occur.]
CSF and serum CRP positively covary in PD and HC subjects at baseline
Next, we examined the relationship between serum and CSF inflammatory factors using the samples from the first collection period (time 0). We found no significant correlation between serum and CSF levels of any factor except CRP (Additional file 6a). Next, we examined the relationship between CSF α-synuclein, Aβ 40 , and Aβ 42 levels with serum and CSF inflammatory factors analyzed at baseline (time 0). At time 0, CSF NGAL, CSF IFNγ, CSF CRP, and serum CRP co-varied with α-synuclein, Aβ 40 , and Aβ 42 in a disease-dependent manner (Fig. 3): PD subjects demonstrated a relationship between CSF NGAL and CSF α-synuclein (Fig. 3a) as well as CSF NGAL and CSF Aβ 40 (b), while HC subjects did not. HC subjects demonstrated a relationship between CSF IFNγ and CSF α-synuclein (c), CSF CRP and CSF Aβ 40 (d), and serum CRP and CSF Aβ 40 (e), while PD subjects did not (Fig. 3; Additional file 7 for statistics).
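Baseline covariation between compartments of this kind can be sketched with a Pearson test on log-transformed levels at a single time point. The data below are synthetic, and the effect sizes (including the roughly tenfold serum-to-CSF dilution) are assumptions for illustration, not study values:

```python
# Sketch: does a factor's serum level track its CSF level across subjects
# at one sampling time? Synthetic cohort; all parameters are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_subjects = 18
serum_crp = rng.lognormal(mean=7.0, sigma=0.6, size=n_subjects)   # pg/mL
# CSF level ~10x lower than serum, with multiplicative measurement noise
csf_crp = 0.1 * serum_crp * rng.lognormal(0.0, 0.2, n_subjects)

r, p = stats.pearsonr(np.log(serum_crp), np.log(csf_crp))
print(f"r = {r:.2f}, p = {p:.3g}")
```

Log transformation is a common choice for concentration data because biomarker levels are typically right-skewed and relationships between compartments tend to be multiplicative.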
CSF IFNγ and serum IL-8 positively correlate with some clinical measures in PD subjects, and serum TNF and CSF α-synuclein correctly categorize individuals into PD or HC groups with high specificity and sensitivity
Next, we analyzed relationships between serum and CSF inflammation and clinical measures of disease state as determined by the UPDRS, its components (mentation, behavior, and mood score, activities of daily living score, motor examination score, and complication of therapy score), and disease duration (years since PD diagnosis; Fig. 4 and Additional file 8 for statistics). Interestingly, levels of CSF IFNγ increased as the UPDRS increased (Fig. 4a). When UPDRS component scores were examined separately, CSF IFNγ levels increased as the activities of daily living (ADL) score (Fig. 4b) and the complication of therapy score (Fig. 4c) increased, indicating that the highest levels of CSF IFNγ are found in PD subjects with the most disruption of daily living activities and the most complication experienced due to therapeutic treatment. Serum IL-8 levels demonstrated a significant positive relationship with the ADL score (Fig. 4d), such that subjects with the highest levels of serum IL-8 have the most disruption in their daily activities due to PD. There were no other significant correlations noted between PD severity or disease duration and inflammation in the serum or CSF. There were no relationships noted between serum or CSF inflammatory markers and years with PD or disease severity or duration and CSF neurodegenerative markers (i.e., α-synuclein, Aβ 40 , and Aβ 42 ; data not shown).
Given that inflammatory status results from a convergence of multiple variables and factors, we used linear discriminant analysis (LDA) to determine if any particular analyte or a particular set of analytes allowed for correct assignment of group membership (with all sampling time points considered). LDA revealed that inflammation and a neurodegenerative disease marker correctly assign individuals to HC or PD groups. Indeed, serum TNF alone misclassified only 17% of PD subjects into the HC group and only 17% of HC subjects into the PD group (Fig. 4e). Serum TNF with CSF α-synuclein together correctly categorized > 75% of both PD and HC groups across all time points (Fig. 4f; 82% sensitivity and 83% specificity), which was the most accurate of all parameter combinations tested.
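The LDA classification step can be sketched as below. The two features stand in for serum TNF and CSF α-synuclein, the group separations are invented for illustration, and only training-set sensitivity/specificity are computed (a simplification relative to a validated classifier):

```python
# Sketch: two-feature LDA separating PD from HC, with sensitivity and
# specificity read off the confusion matrix. Fully synthetic data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
n = 60
# Hypothetical pattern: PD has lower serum TNF and higher CSF alpha-synuclein
hc = np.column_stack([rng.normal(3.0, 0.5, n), rng.normal(1.0, 0.3, n)])
pd_grp = np.column_stack([rng.normal(2.0, 0.5, n), rng.normal(1.8, 0.3, n)])
X = np.vstack([hc, pd_grp])
y = np.array([0] * n + [1] * n)  # 0 = HC, 1 = PD

lda = LinearDiscriminantAnalysis().fit(X, y)
tn, fp, fn, tp = confusion_matrix(y, lda.predict(X)).ravel()
sensitivity = tp / (tp + fn)  # PD correctly called PD
specificity = tn / (tn + fp)  # HC correctly called HC
print(round(sensitivity, 2), round(specificity, 2))
```

With sample sizes as small as those in this study, cross-validation (e.g., leave-one-subject-out) would be needed to avoid overstating the classifier's accuracy.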
Serum and CSF IL-6 alone incorrectly assign 100% of HC individuals into the PD group (Additional file 9). While serum and CSF levels of IL-6 correctly categorized at least 75% of individuals into the PD group across time (Additional file 9a), these two factors were not sufficient to discriminate between HC and age-matched PD subjects. Similar results were obtained when only serum IL-6 was considered (Additional file 9b for all data).
Discussion
Several studies have investigated the extent to which Parkinson's disease (PD) pathophysiology is associated with increased inflammatory status. However, due to conflicting results [28], there is no agreement on which inflammatory proteins hold promise as potential biomarkers to stage or monitor disease progression, and a clear picture of biofluid inflammation in PD is stymied by a complete lack of knowledge of the extent of diurnal fluctuations in inflammatory proteins in peripheral (serum, plasma) and central (CSF) compartments in both healthy control (HC) subjects and PD patients.
Here, we analyzed the levels of six inflammatory proteins (i.e., IL-6, IL-8, IFNγ, TNF, NGAL, and CRP) concurrently across a 26-h sampling period in the serum and CSF of PD (n = 12) and age-matched HC subjects (n = 6). The key findings are summarized in Table 2.
We stress that the sample size of this study is very small and designed to generate new hypotheses and inform a larger study. While age and BMI were controlled for (Additional file 10), and inclusion/exclusion criteria were strictly adhered to for recruitment of subjects (Additional file 11), some factors that may influence cytokine profiles were not considered. For example, PD subjects taking a stable dose of PD medication (amantadine, dopamine agonists, L-DOPA, and/or MAO-B inhibitors) for 4 weeks prior were included, and drug dose and drug type were not controlled. Dopaminergic drugs increase inflammation in PD and PD models [45][46][47][48], and increased inflammation after long-term L-DOPA treatment may contribute to the development of dyskinesia [46,47]. However, there is some evidence that DA drugs decrease inflammation [49] or do not change circulating inflammation at all [50]. Our concerns about the potential confounding effects of dopaminergic therapy are lessened given that dopaminergic drugs did not skew inflammatory levels up or down across the board. Despite the fact that not all patients were on the same kinds of medication, which is usually the case in the clinic, we were able to find PD-specific attributes to some of the analytes. These and other factors could influence inflammation [51,52] and should be taken into account in future studies. [Displaced Fig. 4 legend fragment: There were no other significant correlations between inflammation and disease severity or number of years with disease. Serum TNF alone (e) and serum TNF with CSF α-synuclein (f) demonstrated ascending accuracy and precision in discriminating between HC and PD subjects when analyzed with linear discriminant analyses (LDA; see Additional file 9: Figure S2 for all LDA factors considered).]
While we found significant temporal variability in both PD and HC groups across the day in many serum and CSF inflammatory factors, serum IFNγ displayed stability across the day in 83% of HC individuals and 92% of PD individuals, and both serum and CSF TNF displayed stability across the day in 100% of HCs and 92% of PD subjects. These data suggest NGAL and particularly TNF in the serum may be good candidate biomarkers to pursue as indicators of inflammatory state in cross-sectional studies where it may not be possible to adhere to an exact sampling time. Differences were noted between PD and HC groups in serum TNF and NGAL at time 0 with TNF being significantly decreased, and NGAL being significantly increased in PD. While serum IFNγ levels are more variable within groups than serum TNF or NGAL, significant differences in serum IFNγ were noted between HC and PD groups when all time points are considered, with IFNγ being significantly lower in PD subjects as compared with HC subjects. However, differences in serum IFNγ between PD and HC subjects were not noted at baseline (time 0), indicating that multiple sampling points may be required. However, repeated sampling protocols increase intra-subject variability of CSF Aβ protein levels [53], highlighting the importance of choosing a relatively stable biomarker.
Identification of TNF and IFNγ as potential candidate biomarkers is noteworthy given that both of these cytokines have been strongly implicated in degeneration of nigral dopaminergic neurons and basal ganglia pathologies in pre-clinical and post mortem studies [54][55][56][57][58][59][60][61][62][63]. Levels of serum TNF have been reported to be increased in PD subjects compared with age-matched HC subjects [33][34][35]37], but decreased serum TNF levels [38] as well as no difference in serum TNF levels [64] have been reported in PD versus HC subjects. As our data indicate stability in the levels of serum TNF across the day both individually, and within groups, we likely rule out that diurnal variability accounts for this disparity in the TNF literature. In a recent meta-analysis of inflammatory serum levels in PD subjects and HCs, Qin and colleagues [28] determined that of the 25 studies considered in the analysis, 9 demonstrated an increase of serum TNF levels in PD. However, age was found to be a confounding factor. Like serum TNF, serum IFNγ has also been demonstrated to be increased [33] and decreased [34] in PD subjects compared with HCs. Here, we demonstrate that when all sampling time points are considered, there is a discernable decrease in serum IFNγ in PD subjects compared with HCs. These data suggest that sampling time (and/or other circumstances of collection) could account for IFNγ variability reported in the literature, as we found no difference between HC and PD subjects in serum IFNγ at time 0, and a moderate, though insignificant, degree of variability across the 24-h period in both HC and PD groups. Standardizing sampling paradigms (e.g., number of samples, volume of each sample, total sample volume) and differences between CSF collection methods (e.g., gravity drip, syringe draw, peristaltic pump in a closed system as used here) would likely reconcile some inconsistencies in the literature [53].
Together, our results strongly suggest that serum TNF and serum NGAL are the most promising candidate inflammatory biomarkers because they remain relatively invariant throughout a 24-h period in the majority of subjects and because their levels are significantly different between PD and HC groups both at time 0 and across the day. To our knowledge, this is the first study to investigate NGAL protein levels in blood or CSF of PD subjects. NGAL (also known as lipocalin 2, 24p3, uterocalin, and siderocalin) is an acute phase protein [65] involved in innate host defense against bacteria [66] that is both upstream and downstream of TNF signaling, and sensitizes cortical neurons to β-amyloid toxicity [67]. Several studies have identified increases in NGAL in subjects with familial amyloid polyneuropathy [68] and more recently in patients with late-life depression [69][70][71], Down's syndrome with dementia [72], and Alzheimer's disease with depression [73]. Finally, CSF NGAL holds great potential as a novel companion inflammatory biomarker in PD because it is easy to measure, it is stable across time, and it is part of a signaling network with TNF [67].
We next demonstrated that CSF TNF, CRP, IL-8, and Aβ 42 levels fit a quadratic equation indicating a parabola shape when graphed across the day (i.e., significantly changed from time 0 levels and then significantly changed back toward time 0 levels) in PD subjects but not HCs. CSF IL-6 levels rise and fall across the day in HC subjects, but not PD, and CSF Aβ 40 levels across the day were best fit by a quadratic trend in both PD and HC subjects. CSF TNF has the clearest pattern of change across the day, increasing from below 1 pg/mL at 5:30 AM to around 2 pg/mL at around 11:30 AM and finally back down to below 1 pg/mL at 5:30 AM the next day. However, there was no difference in CSF TNF levels between PD and HC subjects at any time point, and the lack of a significant quadratic trend in HC CSF TNF is likely due to a small sample size. Although few studies have sampled biofluids across the day to determine diurnal inflammatory patterns, one study in adult insomnia patients and age-matched HCs (~27-31 years old) found disturbed rhythms in plasma IL-6 and increased plasma TNF (but no diurnal rhythm) in insomniacs, when measured every 30 min across 24 h [74]. Although we have no information on the extent of sleep disruption in subjects participating in this study, we noted a decrease in serum TNF with disease; we similarly noted stable TNF levels across the day in PD and HC subjects and found similar levels of IL-6 and TNF in our ~50-year-old subjects (~3-6 pg/ml and ~2-3 pg/ml, respectively). While we did not find significant rhythmicity in serum IL-6 across the day, this could be due to our small sample size, as the plasma IL-6 pattern noted by Vgontzas and colleagues [74] bears striking resemblance to what we noted for serum IL-6, with a more robust increase and then apparent decrease across the day in PD subjects and insomniacs versus HC subjects, who displayed a relatively flat profile across the day in both studies [74].
Additionally, our data confirm previous reports demonstrating rhythmicity in CSF Aβ 40 and Aβ 42 across the day [75]. Together, these data suggest that there is more variability in central inflammation across the day in PD subjects as compared with HCs. Cytokines and other proteins (including cortisol) are significantly increased after knee surgery, and rise and fall more in the CSF than in the serum, indicating that increased fluctuation of CSF proteins as compared to serum levels is not uncommon [76]. One reason could be due to the relatively low dilution of cytokines by CSF versus the much higher volume of the blood circulatory system, and Bromander et al. speculate that the greater fluctuation seen in CSF versus serum could be because the inflammatory systems of the brain and the periphery are regulated separately, and suggest that CSF cytokine fluctuation may indicate BBB disruption, a characteristic known to be associated with neurodegenerative disease [77].
Though serum and CSF inflammatory factors have been suggested to reflect one another, there are conflicting reports [28]. Therefore, we investigated the relationship between serum and CSF levels of all detectable analytes in HC and PD subjects across the day and found that levels of IL-6 and CRP covary in serum and CSF across the day, and they do so in a disease-dependent manner. Surprisingly, we found that of all analytes evaluated at time 0, CRP is the one inflammatory factor in serum that reflects levels in CSF (although concentration ranges are an order of magnitude lower in CSF). While we demonstrated no difference in serum or CSF CRP between HC and PD subjects, CRP has been reported to be associated with increased risk of death and indicative of life expectancy in PD subjects [78]. These discrepancies could be accounted for by individual variability, as there was substantial within-subject variability in serum and CSF CRP across the 24-h sampling period. Interestingly, while CRP levels were variable across the day in the serum and CSF of the majority of PD subjects, CSF CRP levels were stable in the majority of HC subjects, indicating that diurnal patterns of CSF CRP may be disrupted in association with PD. These important findings suggest that blood analysis of CRP could feasibly be used to probe neuroinflammation to inform inflamm-aging [79][80][81] or disease status without a need for the more invasive lumbar catheter puncture for CSF collection. Additionally, mechanism-based hypotheses about neuroinflammatory status may be gleaned from analyzing existing data on serum CRP levels in PD subjects.
Alpha (α)-synuclein, Amyloid-beta-40 (Aβ 40 ), and Aβ 42 are proteins currently under intense investigation as potential biomarkers of neurodegenerative disease. Levels of inflammatory factors in the brain are likely to be changing prior to frank neuronal death because microglia, the innate immune cells in the brain, produce cytokines when activated in response to aggregated proteins [1,82]. A wealth of evidence indicates that toxic oligomers of α-synuclein and Aβ peptides trigger inflammatory responses in vitro and in vivo and compromise neuronal health and survival [83][84][85] and that brain inflammation, in turn, increases aggregation of those oligomers [25,[86][87][88]. Our data demonstrate that, across time, serum IFNγ, serum CRP, CSF TNF, and CSF CRP covary with all three biomarkers of neurodegenerative disease (α-synuclein, Aβ 40 , and Aβ 42 ) in a disease-dependent manner. These exciting and novel data suggest that serum and CSF inflammation may be associated with abnormalities in CSF levels of α-synuclein, Aβ 40 , and Aβ 42 . Interestingly, all significant relationships between CSF and serum inflammation and CSF toxic peptide levels at time 0 are positive, such that as inflammation in the CSF or periphery increases, so do CSF α-synuclein and Aβ peptides, providing additional evidence that inflammation and toxic oligomer species may be part of a feed-forward cycle of protein aggregation and neuroinflammation.
The Unified Parkinson's Disease Rating Scale (UPDRS) is comprised of several components including the mentation, behavior, and mood score, the activities of daily living score (ADL), the motor evaluation score, and the complications of therapy score. Our data indicate that serum IL-8 levels were significantly and positively associated with the ADL scores component of the UPDRS.
As serum IL-8 was stable across the day in the majority of PD individuals, this factor may be of interest in future longitudinal studies to determine whether it is associated with disease severity irrespective of disease duration or time of day. Consistent with this idea, the association of IL-8 with clinical severity was also reported in a recent analysis of human serum in a multi-center cohort of 142 subjects with familial PD arising from leucine rich repeat kinase 2 (LRRK2) mutations where high levels of IL-8, MCP-1, and CCL4 were associated with the presence of a specific clinical subtype that is characterized by a broad and more severely affected spectrum of motor and nonmotor symptoms [89]. With regards to inflammatory markers in the CSF, IFNγ levels were higher in PD individuals with higher UPDRS total scores, higher activities of daily living (ADL) scores, and in PD individuals that have more complications from therapeutic treatment, suggesting that IFNγ levels may be of particular interest in disease staging and monitoring. IFNγ regulates the expression of major histocompatibility complex II (MHCII) on monocytes, microglia, and macrophages [90]. A single nucleotide polymorphism located in the first intron of the MHCII Human Leukocyte Antigen (HLA)-DRA gene was found to be significantly associated with sporadic PD in a recent genome wide association study [91], indicating that IFNγ may contribute to disease severity by affecting antigen presentation and the resulting inflammatory response. Finally, increased expression of IFNγ in the CNS driven by a viral vector in mouse brain resulted in basal ganglia calcification and nigrostriatal degeneration, reminiscent of human idiopathic basal ganglia calcification (IBGC) [63].
Given that PD is an extremely heterogeneous disease with differing rates of progression [35,92], our results demonstrating that inflammation did not increase with years since PD diagnosis are not entirely surprising. Indeed, these findings are in line with imaging data demonstrating increased inflammation in the pons, basal ganglia, striatum, and cortex of PD subjects irrespective of disease duration [13], lending credence to the hypothesis that changes in inflammation likely occur early in disease and remain present throughout the course of disease. One additional potential reason for the finding that these inflammatory markers positively correlate with some measures of PD severity but not duration is the fact that the biofluids analyzed in this study were taken from an MJFF repository originating from an experimental medicine study that selected subjects able to tolerate the continuous CSF procedures. It is therefore a subset of the PD population (usually younger), with a shorter duration of illness. However, severity of PD in this population might well be related to individuals with more aggressive deterioration and therefore linked to inflammatory biomarkers. The potential value of inflammatory biomarker assessment is in its predictive ability in identifying who is likely to have a faster rate of decline despite years with disease.
The key to resolving this issue is to replicate the findings in a larger cohort of subjects.
We used linear discriminant analysis (LDA) to determine if any particular analyte or set of analytes allowed for correct assignment of any one sample to PD or HC group membership across time. To our surprise, there was a relatively minimal misclassification when only serum TNF levels were considered; serum TNF alone misclassified, at most, only 25% of PD subjects (3 of 12) into the HC group, and 33% of HC (2 of 6) subjects into the PD group across all time points. When serum TNF and CSF α-synuclein levels are considered together, ≥ 82% sensitivity and 83% specificity were achieved in both HC and PD groups across all time points; CSF tau and Aβ demonstrate comparable sensitivity (but less specificity) as diagnostic tools for Alzheimer's disease [93]. Together, these analyses indicate that, at a minimum, TNF measured in the serum and α-synuclein measured in the CSF have high potential utility for sensitive and specific detection of PD state.
Conclusions
In summary, our data indicate that serum and CSF NGAL, TNF, IFNγ, and CRP, and serum IFNγ are promising candidate biomarkers of inflammation. Importantly, we emphasize that the sample size in our study was very small and confirmatory studies will be needed in a larger cohort of subjects to validate these findings using current biorepositories with banked serum and CSF, details regarding the employed collection protocol, and good clinical history data for each subject (such as in the PPMI, Precept/PROBE and HBS cohorts [94] as well as in the DeNoPa cohort [44]), including information about autoimmune and chronic systemic disease (diabetes, obesity, cardiovascular disease, etc.) and other comorbidities that could influence the levels of inflammatory proteins in their biofluids. In the future, we propose that a similar 24-h collection study should be performed in subjects experiencing prodromal pre-motor symptoms of PD to investigate the extent to which the factors we identified could be used to aid in earlier diagnosis of individuals at risk and to monitor their disease trajectory, and hope that the data reported in this small study will generate hypotheses regarding potential inflammatory profiles in patients with neurodegenerative disease.
Primary Pericardial Mesothelioma: A Rare but Serious Consideration
Primary pericardial mesothelioma (PPM) is an extremely rare malignancy with a very poor prognosis. It poses a diagnostic challenge given its often late and non-specific presentation. This report describes a 74-year-old man who presented with central pleuritic chest pain and mild breathlessness. The patient was febrile and mildly tachycardic with crepitations in the right lung base. Blood tests revealed raised inflammatory markers and chest X-ray showed no acute pathology. Following admission, CT pulmonary angiogram showed a large left-sided mediastinal mass (approximately 110 x 70 x 85 mm) centered on the pericardium. Further post venous phase CT imaging identified possible myocardial invasion alongside suspicious liver nodules. Later, outpatient fluorodeoxyglucose (FDG) positron emission tomography (PET) imaging highlighted further FDG-avid pleural and liver lesions. CT-guided biopsy of the pericardial lesion was undertaken, with histology and immunohistochemistry indicating epithelioid-type mesothelioma. A significant malignant pericardial effusion was also identified, which ultimately required pericardial window formation. Immunotherapy was commenced utilizing dual nivolumab and ipilimumab, a novel regimen for the treatment of mesothelioma. Palliative radiotherapy to the pericardial lesion will also be performed. Here, we demonstrate the diagnostic challenge of this vanishingly rare condition, which is usually diagnosed upon the development of associated complications. Early recognition gives the best chance of improved survival; however, diagnosis requires a high index of clinical suspicion alongside prompt investigation, primarily involving cross-sectional imaging.
Introduction
This case report describes a 74-year-old man who presented to the emergency department (ED) with nonspecific chest pain. Initially diagnosed as a lower respiratory tract infection, further imaging and investigation in fact revealed primary pericardial mesothelioma, an extremely rare condition. This case highlights the non-specific and occult nature of presentation for this rare but serious condition, along with associated complications and management.
Case Presentation

Presentation
A 74-year-old retired boilermaker presented to ED complaining of central pleuritic chest pain and mild breathlessness. He reported lethargy but denied unexplained weight loss or night sweats. The patient had a background of stage IIIc sigmoid carcinoma, which was in remission, alongside pleural plaques and paroxysmal atrial fibrillation.
On examination, he appeared slightly pale. He exhibited a moderately increased work of breathing but was saturating normally on room air. The patient was febrile and tachycardic on the initial review. Coarse crackles were noted at the right base on auscultation and he was clinically euvolemic. Physical examination was otherwise unremarkable.
Initial Investigations
A bedside ECG showed sinus rhythm with no ischaemic changes. Serial ECGs showed no dynamic changes. Blood tests obtained in ED revealed microcytic anemia, normal white cell count, normal renal function, mild hyponatremia (130 mEq/L), mild hypokalaemia (3.4 mmol/L), and a C-reactive protein of 150 mg/L. Initial troponin blood test was 16 ng/mL followed by a repeat at four hours of 14 ng/mL.
Imaging
On initial assessment, a plain chest radiograph showed diffuse bilateral pleural plaques, which were stable when compared with previous imaging, and no other sign of significant pathology. Due to persistent pleuritic chest pain on a background of prior malignancy, a CT pulmonary angiogram (CTPA) was undertaken to rule out pulmonary embolus. A large left-sided mediastinal mass centered on the pericardium (measuring approximately 110 x 70 x 85 mm) was found. There was peripheral contrast enhancement with areas of hypoattenuation centrally.
Enlarged para-aortic and sub-carinal lymph nodes increased suspicion for malignancy. In addition, a new hypodense lesion adjacent to the gallbladder fossa was noted, suggestive of metastatic disease. A moderate pericardial effusion was found, along with stable bilateral calcified pleural plaques. Arterial phase CT chest and portal venous phase CT abdomen and pelvis images were obtained to further characterize and stage the mass (Figures 1-3). In the abdomen, further FDG-avid masses were seen in segments five and seven of the liver, as well as FDG-avid nodular thickening of both adrenal glands (Figure 5). Multiple suspicious lymph nodes in the abdomen, mediastinum, and supraclavicular regions were noted. No evidence of local recurrence of the previously treated colorectal carcinoma was seen.
Differential diagnosis
Initial diagnosis in ED was that of a right-sided pneumonia due to right lung crepitations alongside markedly raised inflammatory markers. Atypical acute coronary syndrome was also considered but ruled out following serial negative troponins alongside benign-appearing ECGs.
Following admission, initial diagnosis and management centered on community-acquired pneumonia.
Later, due to ongoing chest pain, a CTPA was performed to rule out pulmonary embolus, which subsequently revealed a large mediastinal mass as the main significant finding. The radiological appearance of the mediastinal mass favored a primary thoracic malignancy. However, given the history of prior bowel malignancy, a metastatic process was still a differential. Possible primary malignancies included mesothelioma, lymphoma, or thymoma. Of note, there was a history of asbestos exposure alongside known asbestos-related lung plaques. CT-guided biopsy of a subpleural lingular nodule obtained three cores showing predominantly lesional tissue. Histopathology and immunohistochemistry findings were in keeping with epithelioid-type mesothelioma. The tumor cells were found to be positive for CK7, calretinin, D2-40, CK5/6, and WT1. After analysis of the imaging and histopathology, it was thought that the primary lesion was pericardial mesothelioma.
Treatment
The patient was commenced on first-line therapy for primary mesothelioma in the form of immunotherapy agents, nivolumab and ipilimumab. Radiotherapy to the primary lesion was also being planned at the time of writing. Both immunotherapy and radiotherapy were undertaken with palliative intent.
PET-CT imaging showed a large malignant pericardial effusion, which was significantly increased in size from previous imaging. The patient was re-admitted to the hospital and drainage was achieved by way of pericardiocentesis and pericardial window without major complication.
Discussion
Primary cardiac tumors are rare phenomena, with primary pericardial mesothelioma (PPM) being exceptionally rare overall. A large necropsy study of 500,000 cases found an incidence of <0.0022% [1]. These tumors are diagnostically challenging, with patients frequently presenting with non-specific symptoms. Onset is insidious with symptoms typically due to tumor-related sequelae. These include pericardial effusion with or without cardiac tamponade, constrictive pericarditis, and heart failure secondary to the neoplastic invasion of the myocardium. Approximately 200 cases have been reported in the literature, with only a quarter as antemortem diagnoses [2]. Appearances on CT and MRI are variable with potential for both cystic and solid appearances, pericardial effusion or pericardial thickening [3]. Here, the lesion demonstrated peripheral enhancement with central hypoattenuation in keeping with cystic changes, alongside a pericardial effusion. Additionally, there was a loss of flat plane appreciation between the mass and left ventricle in keeping with possible myocardial invasion.
Asbestos exposure is a well-known risk factor for pleural and peritoneal mesothelioma; for PPM, this link is more ambiguous. One study analyzed PPM cases between 1993 and 2008 and found only three out of 14 total cases had previous asbestos exposure [4]. PPM tumors are known to present diffusely or as a localized mass. Three main histological types have been described: spindle cell, epithelioid, and mixed [5]. In this case, the histology almost certainly represents epithelioid-type disease.
Prognosis is generally very poor with median survival reportedly between two and six months [6,7]. This is likely because patients develop symptoms late in the disease course, by which time complete surgical resection is not possible. Moreover, PPM tends to respond poorly to both chemotherapy and radiotherapy.
There is no current standard treatment for PPM, with treatment for localized disease focusing on surgical debulking of the primary tumor. Treatment options are understandably limited in unresectable disease. Indeed, it has been noted that PPM patients are less likely to receive chemotherapy and have worse overall survival when compared with pleural mesothelioma patients [6]. However, a recent phase 3 study named CheckMate 743 demonstrated the efficacy of a combination of two monoclonal antibodies (nivolumab and ipilimumab) in the treatment of unresectable pleural malignant mesothelioma [8]. This combination of checkpoint inhibitors was most effective in treating epithelioid subtypes with a durable improvement in overall survival seen. This was most notable at two years with 41% overall survival in the ipilimumab and nivolumab arm, compared with 27% for the standard platinum-based chemotherapy arm. It is important to note that the study excluded patients with primary pericardial presentations and those with a prior malignancy without a three-year disease-free remission. Therefore, direct applicability to this case is difficult.
Conclusions
Due to a paucity of cases, especially pre-mortem, studying this disease with clinically applicable findings is difficult. The late-stage at diagnosis and aggressive nature of mesothelioma mean that prognosis is poor, further hampering effective study. Here, we show how the diagnosis is commonly due to pathological sequelae, in this case, pleuritic chest pain likely secondary to a mediastinal mass. Although some improvements have emerged in the management of pleural mesothelioma, it remains to be seen how transferable these benefits will be to the treatment of PPM.
Additional Information Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Cognitive Training Deep Dive: The Impact of Child, Training Behavior and Environmental Factors within a Controlled Trial of Cogmed for Fragile X Syndrome
Children with fragile X syndrome (FXS) exhibit deficits in a variety of cognitive processes within the executive function domain. As working memory (WM) is known to support a wide range of cognitive, learning and adaptive functions, WM computer-based training programs have the potential to benefit people with FXS and other forms of intellectual and developmental disability (IDD). However, research on the effectiveness of WM training has been mixed. The current study is a follow-up “deep dive” into the data collected during a randomized controlled trial of Cogmed (Stockholm, Sweden) WM training in children with FXS. Analyses characterized the training data, identified training quality metrics, and identified subgroups of participants with similar training patterns. Child, parent, home environment and training quality metrics were explored in relation to the clinical outcomes during the WM training intervention. Baseline cognitive level and training behavior metrics were linked to gains in WM performance-based assessments and also to reductions in inattention and other behaviors related to executive functioning during the intervention. The results also support a recommendation that future cognitive intervention trials with individuals with IDD such as FXS include additional screening of participants to determine not only baseline feasibility, but also capacity for training progress over a short period prior to inclusion and randomization. This practice may also better identify individuals with IDD who are more likely to benefit from cognitive training in clinical and educational settings.
Introduction
Fragile X syndrome (FXS) is a genetic condition associated with the full mutation of the fragile X mental retardation 1 (FMR1) gene. FXS occurs in an estimated 1 of every 4000 to 11,000 live births and is the most common inherited cause of intellectual disability [1]. Males tend to be more severely affected, with over 90% of males but only 30-50% of females with the full mutation having IQ scores in the intellectually disabled range (IQ < 70; [2]). Extensive research using both neuropsychological testing and functional magnetic resonance imaging (fMRI) studies has demonstrated the significant deficits in executive function (EF) associated with the condition. These deficits include problems with working memory (WM), inhibitory control, cognitive flexibility/perseveration and selective and divided attention [3][4][5][6]. While there has been extensive preclinical research and human clinical trials focused on potential disease-modifying pharmacological treatment, primarily focused on improving behavior, mood and anxiety, there has been limited research targeting cognitive function in FXS.
Cogmed is a computer-based WM training program that has been the subject of over 80 peer-reviewed publications. Randomized, double-blind, placebo-controlled studies have documented that Cogmed and other WM training procedures may improve WM and academic achievement, reduce symptoms in children with ADHD, increase auditory attention and WM in preschool children, and improve inattention in daily life [7][8][9][10]. While some research has supported these claims, the benefit of WM training programs remains controversial with other researchers arguing that improvements in training are not generalizable beyond the trained tasks [8]. However, as children with FXS have specifically demonstrated WM deficits, this cognitive intervention was seen to have the potential to ameliorate some of the EF problems in this population.
After a small noncontrolled trial demonstrated feasibility of Cogmed for children with FXS [11], we conducted a randomized controlled trial (RCT) of Cogmed training in 100 children and adolescents with FXS, and targeted WM, EF and behaviors associated with EF (attention, hyperactivity/impulsivity) as outcomes of interest [12]. Participants were randomized 1:1 to either the standard Cogmed program that adapts difficulty (memory span) according to performance (adaptive condition) or a control condition utilizing an identical version of Cogmed that does not adapt to performance, with each trial fixed at 2-span items (nonadaptive condition). Within adaptive and nonadaptive versions, participants received either Cogmed JM (generally for younger and/or lower functioning participants) or Cogmed RM (for older and/or higher functioning participants). Participants completed 5-6 weeks of training, totaling 20-25 (mean = 24.2) days at home supported by a parent training aide and supportive coaching by phone. At the group level, children with FXS in the adaptive condition were able to progress by gradually, though modestly, expanding their memory span while using the Cogmed games. However, considerable variability was observed across participants. Nonadaptive training was selected as the comparison condition, rather than a wait-list or treatment-as-usual condition, in order to control for potentially beneficial factors such as parent and coach input and attention to the child, expectation of treatment benefits and placebo response, and any general effects that may be associated with engaging in a computer task or game. The primary result of the trial showed that both the adaptive and the nonadaptive groups improved WM after the Cogmed training, but there was no difference in degree of improvement between groups. 
The intervention was feasible, and the full sample demonstrated significant improvements in WM and EF objective measures, as well as parent-and teacher-reported attention and EF. For full results, see [12]. One explanation for the gains in both groups and the lack of separation between adaptive and nonadaptive control conditions may be that a substantial number of children with FXS experience the nonadaptive condition as quite challenging and potentially beneficial. However, factors other than the training itself may have contributed to gains in both groups such as placebo or practice effects. Given the results of this study, with both groups experiencing improvement, we determined to conduct further analyses in an exploratory "deep dive" of this rich data set to better understand what factors were associated with improvements.
Cogmed is a well-researched cognitive training program. However, most studies have attempted to understand the significance of clinical outcomes by contrasting experimental groups [13,14]. Only a few studies have examined individual variability or any subcomponents of the training itself, and those have focused on the different types of games within the Cogmed program [15,16], with interest in identifying the specific aspects of WM that are targeted (e.g., verbal vs. visual spatial aspects). Only one study examined predictors of WM training in individuals with IDD [17]; this study showed that females and participants with an IDD but no additional diagnosis, on average, had more progress during training. Additional studies have evaluated other computerized cognitive training programs, such as Lumosity, but these have mainly evaluated patterns of performance by age [18,19]. To our knowledge, no published studies in any population have attempted to link training behavior parameters to outcomes, and no published study has examined potential effects of variation in the training environment on outcomes.
The current study used the detailed training behavior data from the FXS Cogmed RCT [12] and had three primary aims: (1) to characterize the training data, identify training quality metrics, and identify subgroups of participants with similar training patterns; (2) to identify predictors of training efficacy; and (3) to determine which child, training behavior, or home/environmental factors were associated with clinical outcomes during the Cogmed intervention.
Participants
Participants were 98 children with FXS who participated in the RCT of Cogmed; 2 of the original 100 participants were missing the detailed training data. Participants were between the ages of 8 and 18 years, with an average IQ of 64. They were 63% male, all with normal or corrected to normal vision and hearing and residing in various locations throughout the U.S. and Canada. For all relevant information on participants, see the original study [12].
Measures
The same primary and secondary outcomes from the original FXS Cogmed trial were used as the clinical outcomes of interest in the present study. These consisted of the Leiter-Revised [20] Spatial Memory subtest, the Stanford Binet 5 (SB-5; [21]) Block Span subtest; the Wechsler Intelligence Scale for Children, Fourth Edition (WISC-IV; [22]) Digit Span subtest; and the parent versions of the Conners Third Edition (Conners 3; [23]) and the Behavior Rating of Executive Function (BRIEF; [24]). The WM composite, comprised of the Spatial Memory and Block Span subtests, was the trial's primary outcome measure. Teacher-reported behavior from the Conners and BRIEF and the Kiddie Test of Attentional Performance (KiTAP; [25]) were collected in the trial but not included in the present study due to insufficient sample size and limited power (only approximately 50% of the participants had teacher ratings).
In addition to the demographic, primary outcome, and secondary outcome measures reported in the original study, the following measures not previously reported were collected during the visits to further explore factors related to training success and clinical outcomes. The Home Observation for Measurement of Environment or HOME Inventory is an instrument designed to provide a systematic measurement of the family environment. The disability adapted Middle Childhood HOME Developmental Delay [26] was administered at the baseline assessment and consists of 59 questions generated from examiner observation and parent interview. The HOME Inventory covers the following domains: Responsivity, Encouragement of Maturity, Emotional Climate, Learning Materials and Opportunities, Enrichment, Family Companionship, Family Integration, and Physical Environment. The total score was used in the present study. The Symptom Checklist-90-Revised (SCL-90-R [27]) is a standardized self-report measure of psychological symptoms and was completed by the parent acting as a training aide as a self-report of parental mental health. Ninety questions are clustered into the following symptom dimensions: somatization, obsessive-compulsive, interpersonal sensitivity, depression, anxiety, hostility, phobic anxiety, paranoid ideation, and psychoticism. We examined the Global Severity and Depression scores for this study. The Parenting Stress Index-4 (PSI-4 [28]), also completed by the parent training aide, measures stress in the parent-child system based on parent's perceptions of child characteristics, personal characteristics, and interactions between the child and parent. We focused on the parental distress and dysfunctional parent-child interaction scores.
To characterize training quality, the detailed Cogmed data from the 5-6 week training period for each participant were obtained from Cogmed. These data include summaries for each game at the level of each training day and trial-by-trial performance for each game played on each day.
Statistical Analyses
The first aim of the present study was focused on characterizing the training data and identifying clusters of participants with similar training behavior patterns. Four metrics were explored: (1) maximum trial difficulty achieved for each game, each day (adaptive group only, as the nonadaptive group had a fixed level); (2) response time on each trial for each game for each day of play; (3) standard deviation of response time for each game for each day of play (response time variability; see [29]); and (4) percentage of correct trials for each game for each day of play (accuracy). Trials for a particular game in which the response time was either negative (indicating a response before the end of the trial presentation) or greater than the 99th percentile (extreme delay in response; ranging from 3 seconds (s) to over 200 s) across all days the game was played by participants were considered invalid trials and removed from analyses, including computation of the standard deviation of response time. A sensitivity analysis was conducted removing trials with times greater than the 90th percentile and results were similar. Repeated-measures, random-effects models were used to assess general patterns over time and whether differences existed in those patterns between the adaptive and nonadaptive training groups. The standard deviation of response time was transformed using the natural logarithm prior to analysis to meet the assumptions of the models [30,31]. Time, in days since the first day of training, was used as the time scale for all models. Trial-level outcomes further included trial number as a factor. Models included random intercepts and slopes to account for variability in starting place and change over time not explained by the fixed effects.
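The trial-screening rule described above (dropping trials with a negative response time or one above the game's 99th percentile, then computing per-game variability on a log scale) could be sketched as follows. This is a hypothetical illustration: the column names and values are invented, not taken from the Cogmed export format.

```python
# Sketch of the described trial-exclusion rule on toy trial-level data
# (column names and values are illustrative).
import numpy as np
import pandas as pd

trials = pd.DataFrame({
    "game": ["g1"] * 6 + ["g2"] * 6,
    "rt_ms": [350, 420, -15, 510, 99000, 460,
              620, 700, 650, -40, 680, 120000],
})

# 99th percentile of response time per game, across all days/trials
p99 = trials.groupby("game")["rt_ms"].transform(lambda s: s.quantile(0.99))

# Keep trials that are neither anticipatory (negative) nor extreme delays
valid = trials[(trials["rt_ms"] >= 0) & (trials["rt_ms"] <= p99)]

# Per-game response-time variability, log-transformed as in the analysis
log_sd = np.log(valid.groupby("game")["rt_ms"].std())
print(valid.shape[0], log_sd.round(2).to_dict())
```

On this toy input, the two negative trials and the two extreme-delay trials are excluded, leaving 8 valid trials.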
Semiparametric mixture models were fit to the repeated measures at either the daily game level or the trial-by-trial level to identify clusters of training patterns for each outcome, separately for each game and for each training group [32]. Separate models were fit for the first three weeks of training (early training period) and the last three weeks of training (late training period), especially for trial-level data, due to software limitations. Bayesian information criterion (BIC) was used to select models and identify the number of subgroups present in the data; models with two, three, or four subgroups were considered. From the best models for each game, the likely subgroup for each participant for that game was also determined. Graphical illustrations for the subgroups suggested similar training patterns across games. Therefore, a single subgroup classification was assigned per participant as the most common identified subgroup across games.
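The BIC-based choice among two-, three-, and four-subgroup solutions can be illustrated with a simplified analogue. The study fit semiparametric group-based trajectory models to repeated measures; the sketch below instead fits a Gaussian mixture to per-participant summary features on synthetic data, which shows only the model-selection step, not the trajectory modeling itself.

```python
# Simplified analogue of BIC-based subgroup selection (synthetic data).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Two synthetic subgroups: (mean accuracy, slope of difficulty over days)
grp_a = rng.normal([0.85, 0.10], 0.05, size=(40, 2))
grp_b = rng.normal([0.55, 0.00], 0.05, size=(40, 2))
X = np.vstack([grp_a, grp_b])

# Fit candidate models with 2, 3, or 4 components and record BIC
bics = {}
for k in (2, 3, 4):
    gm = GaussianMixture(n_components=k, random_state=0).fit(X)
    bics[k] = gm.bic(X)

# Lower BIC is better; the winning k gives the number of subgroups
best_k = min(bics, key=bics.get)
print(best_k, {k: round(v, 1) for k, v in bics.items()})
```

With well-separated clusters like these, the two-component solution typically wins, mirroring the two-cluster result reported below; `predict` on the fitted model would then assign each participant a likely subgroup.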
For each of the four training metrics, trial difficulty level (adaptive group only), response time, standard deviation of response time, and accuracy, training behavior patterns were identified. Individuals that fell into one behavior group based on a training metric did not necessarily fall into the same group for another training metric, so individual participants were not classified into the same training behavior group across all training metrics. Instead, each training metric was evaluated separately to assess differences on child, parent/training aide and home characteristics using two-sample t tests.
The last aim of this study was to determine which child, training, or home environment factors related to clinical outcomes (improvements in scores) reported in the trial. For demographic, parent, and home environment predictors, Time 2 (post-training) assessment was used as the outcome, with the Time 1 (pre-training) assessment as a covariate in an analysis of covariance (ANCOVA) model. Models further included total training time and treatment condition (nonadaptive vs. adaptive) as covariates. Separate models were run using each demographic, parent, and home environment factor as a predictor; interactions between the predictor and treatment condition were also considered. Similar models were fit to assess whether there were differences in clinical outcomes by training behavior groups. Secondary analyses considered training groups as predictors of clinical outcomes in each treatment condition separately.
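The ANCOVA structure described above (post-training score as outcome, pre-training score, total training time, and treatment condition as covariates, one predictor at a time, with an optional predictor-by-condition interaction) can be sketched with a formula-based model. The variable names and simulated data below are invented for illustration, not the trial's dataset.

```python
# Hedged sketch of the ANCOVA: Time 2 score on a predictor (here IQ),
# adjusting for Time 1 score, training time, and condition.
# All variable names and data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 98  # sample size from the study
df = pd.DataFrame({
    "wm_pre": rng.normal(85, 10, n),      # Time 1 WM composite
    "iq": rng.normal(64, 12, n),          # baseline predictor
    "train_min": rng.normal(600, 60, n),  # total training time
    "adaptive": rng.integers(0, 2, n),    # 1 = adaptive condition
})
# Simulate a Time 2 outcome that depends on baseline score and IQ
df["wm_post"] = (5 + 0.9 * df["wm_pre"] + 0.2 * df["iq"]
                 + rng.normal(0, 5, n))

# iq * C(adaptive) expands to main effects plus the interaction term;
# drop the interaction if it is non-significant, as in the paper.
model = smf.ols(
    "wm_post ~ wm_pre + iq * C(adaptive) + train_min", data=df
).fit()
print(model.params["iq"].round(2))
```

Separate models of this form, one per demographic, parent, or home-environment predictor, reproduce the analysis plan; `model.summary()` would show the coefficient and p-value examined for each predictor.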
Training Metrics: Cogmed Adaptive and Nonadaptive Groups
Accuracy was lower in the adaptive group than the nonadaptive group in RM games but not significantly different in JM games at the first day of play, with the nonadaptive group increasing over time and the adaptive group remaining stable across training days (Table 1). Over most games, as expected, trial difficulty (adaptive group only) increased over time (Table 2). There were no differences between groups or changes with time in the standard deviation in response time (data not shown). For the trial-level data, the average response time decreased across the training days for most games (data not shown). Rate of decrease in response time did not differ by group (adaptive vs. nonadaptive).
Training Patterns
The optimal number of subgroups was identified using BIC based on data from the training period. Most games and outcomes indicated two subgroups, suggesting a two-cluster solution was appropriate. For examples of identified subgroups for Game 9 in the adaptive group, see Figure 1; patterns were similar for other games as well as for the nonadaptive group. The first two plots, with trial difficulty on the y axis (within the adaptive group only), showed one group that had an essentially flat profile during both the early and late training periods (blue curve, no improvement) and another group that increased trial-level difficulty during the early training period and then stabilized or showed minimal decreases during the later training period (Figure 1). For response time (y axis), both groups had a slight decline in the early period and became more stable in the late training periods, but one subgroup (in blue) tended to have faster responses than the other subgroup (red). For response time variability, one group had smaller standard deviations (blue) than the other (red) in the early training period, where smaller standard deviation indicated more consistent response times across trials. In the later training period, the group with less variability initially (blue) increased over time, while the second group (red) showed a decline in response variability. One group (red) had much higher accuracy than the other group (blue) across the entire training period. Because patterns in the early and late periods generally reflected positive training behaviors (e.g., faster response time, lower variability in response time, better accuracy, and higher difficulty) and less positive training behaviors, we categorized individuals according to whether they remained in the "positive training behavior" group during both training periods or not for future analyses.
Demographics and Family Characteristics
For the trial difficulty metric, defined only for the adaptive group, those in the greater difficulty group (n = 14) had significantly higher IQ [mean of 74.
Predictors of Clinical Outcomes
For the WM composite (Leiter Spatial Span and Block Span) and Digit Span, there were no interactions between child, parent, or home environment variables and treatment condition, so results are presented for models containing no interaction. Higher baseline IQ was associated with greater gains in each of these outcome measures during the training period (p < 0.02). Higher mental age was also associated with greater gains on the outcomes (p < 0.01), except for Block Span which approached significance (p = 0.06). There were no significant predictors of change on the Parent BRIEF WM or Global Executive Composite (GEC) or the Connors scores. No other child, parent or home environment variables were significantly related to gains in outcome measures. See Table 3 for full results. In the total sample (see Table 4), those with consistently faster response times had larger increases in Digit Span scores during the intervention (1 point greater), on average, than the remaining participants (β = 1.0; SE = 0.4; p = 0.02); this difference was significant after adjusting for IQ (β = 0.9; SE = 0.4; p = 0.03) but not after adjusting for mental age (β = 0.6; SE = 0.4; p = 0.1). Conners Inattention (β = −2.1; SE = 0.9; p = 0.03) scores decreased more in the faster responding group compared to other participants and remained significant after adjusting for IQ (raw: β = −2.1; SE = 0.9; p = 0.03; T: β = −3.9; SE = 1.9; p = 0.04) or mental age (raw: β = −2.1; SE = 1.0; p = 0.03; T: β = −3.8; SE = 1.9; p = 0.048). In the total sample, those in the low standard deviation in response time (those with consistently lower standard deviations) had WM gains that were 2.0 points higher, on average, than remaining participants (β = 2.0; SE = 1.0; p = 0.04), but not after accounting for IQ (β = 1.6; SE = 1.0; p = 0.1) or mental age (β = 1.4; SE = 1.0; p = 0.2). 
However, BRIEF GEC scores declined more in this lower standard deviation group (β = −6.7; SE = 3.0; p = 0.03) compared to the others, and remained significant after accounting for IQ (β = −6.8; SE = 3.2; p = 0.04) or mental age (β = −6.9; SE = 3.2; p = 0.04). There were no differences in gains in any of the clinical outcomes between the higher accuracy group and the remaining participants. Table 4 contains full results. Follow-up identical analyses were conducted for adaptive and nonadaptive groups separately. These results revealed that the links between training behavior and outcomes were predominantly driven by significant associations in the adaptive, but not the nonadaptive group (see Tables 5 and 6). For example, in the adaptive group only, the group defined by higher trial difficulty (those who showed progress in difficulty level over time with increasing span lengths) had WM composite score gains that were 3.9 points higher, on average, than the rest of the adaptive group (β = 3.9; SE = 1.6; p = 0.02; Table 6); this difference remained significant after accounting for IQ (3.5 points higher; SE = 1.6, p = 0.03) and mental age (3.1 points higher, SE = 1.6, p = 0.05).
Discussion
Whether computer-based cognitive training contributes to meaningful improvements in child functioning and quality of life remains a topic of considerable debate. Children with FXS are especially impacted by their cognitive deficits but have access to very few validated treatments, making the search for effective interventions to alleviate disability especially critical. Furthermore, a growing number of putative targeted pharmacological treatments for the disorder that might normalize brain function could be paired with structured cognitive therapy paradigms to examine whether these medications accelerate learning and cognitive growth. Our previously published study of the efficacy of WM training for children and adolescents with FXS, the first controlled trial of a cognitive intervention for the disorder, found that participants in both the adaptive and nonadaptive conditions demonstrated WM improvements on clinical assessment. Specifically, children in the nonadaptive condition, those who completed an identical intervention that did not adapt in difficulty according to performance, demonstrated gains and clinical improvement during the course of the trial that was similar to the adaptive group. This raised questions as to whether both interventions benefitted participants, or whether other factors may have explained improvements in the children. In the present study, we revealed numerous details regarding variability in the training behavior of participants, characteristics of their training environment and parent training aides, and the association of these variables with trial outcomes in order to provide greater insight into child individual differences in performance and outcomes, to clarify the factors contributing to gains in each intervention group, and to inform future studies.
The results of the present study demonstrate that baseline child characteristics as well as cognitive training behavior are associated with clinical changes during the intervention period. Training behavior metrics were linked not only to gains in WM performance-based assessments, but also to reductions in inattentive and other behaviors related to EF reported by caregivers. These patterns of association were stronger in the adaptive (experimental) training group. It should be emphasized that these analyses cannot confirm causal links between training behavior and clinical gains during training. However, the results do suggest that subgroups of children with FXS who can progress and expand memory capacity over time have better outcomes, perhaps better response to the intervention, than those who are unable to progress. Level of intelligence does not explain these effects fully, as several associations between training behavior and outcomes survived adjustment for baseline IQ. Therefore, future cognitive intervention trials with individuals with IDD should include additional screening of participants to determine not only baseline feasibility, but also capacity for training progress over a short period prior to inclusion and randomization. This practice would reduce the proportion of eligible participants but likely contribute to greater sensitivity to the efficacy of interventions and generalizability of results. In terms of baseline child characteristics, only IQ and mental age were related to clinical outcomes with higher mental age and IQ being linked to greater gains in WM. This is similar to results reported by Söderqvist et al., who found that higher baseline ability was associated with greater working memory training gains in children with IDD [17]. 
As pointed out in that paper, the results run counter to literature in typically developing children which often reports that the individuals with lower baseline ability show the most improvement when provided with targeted training. It may be that children with greater cognitive impairment (those within the IDD range) need additional exposure to training (i.e., longer or more frequent) or they may need training in more than one cognitive domain to experience clinically meaningful benefits. Aside from IQ and mental age, no other child, parent training aide, or home environment variables were related to gains in the clinical outcomes of the study. To our knowledge, this is the first examination of potentially moderating home and parent factors on treatment outcomes in FXS. Although child outcomes were independent of these factors, it is worth noting that the majority of families had fairly high scores on the HOME inventory, suggesting that most home environments were positive and conducive for learning. Similarly, the majority of parent training aides had SCL-90-R and PSI scores in the average range. Therefore, children with FXS in more adverse home environments and with parents struggling with serious mental health issues or high levels of parenting stress were not adequately represented in this sample. Nevertheless, it may be useful for investigators and clinicians to be aware that these important parent and environmental metrics do not appear to have substantial impacts on child training outcomes within this study.
The analyses of training level data show that training behavior can reliably identify participant subgroups in several dimensions. Four metrics (difficulty, accuracy, response time, and response time variability) were used to quantify training quality. Difficulty is a metric of advancement in training for the adaptive group, with some children progressing in difficulty over the course of training while others remained relatively flat with no appreciable gains in performance. This stark difference was not appreciable in our prior group-level comparisons of the primary trial results, which suggested modest gains overall [12]. Children who displayed positive training behavior defined by advancement in difficulty had better clinical outcomes than those who did not, even after accounting for differences in baseline IQ. Therefore, it is unlikely that the clinical improvement seen in this subgroup is explained by their higher functional status. Thus, it is likely that the Cogmed training program is most appropriate and has the greatest potential utility for individuals with FXS who are capable of increasing their WM span capacity. As noted previously, we utilized an inclusion criterion characterized by the ability to perform at least some 3-span items at baseline, reflecting an increased probability that the children had the potential to make gains beyond the nonadaptive level of 2 span. Given the results reported here, the demonstration of at least some short-term gains during an early exposure to the program may be the best indicator of potential benefit from sustained training during intervention. Similar results were found for the two other metrics of training quality, response time and variability in response time. These metrics are thought to be a measure of how attentive and engaged participants were in the games. 
Participants who were more attentive and engaged (faster response times and more consistency in response) were also more likely to show clinical gains after the completion of the training. These results are promising, as they show that quality of engagement with the training procedure may be a driving force behind clinical gains.
One of the important questions raised by the primary trial results is why the nonadaptive Cogmed group improved over the course of training. The findings of the present analysis do not establish consistent links between training behavior and outcomes in the nonadaptive group. One explanation for the lack of significant association may be decreased variability in training metrics in the nonadaptive group, as these participants were less challenged and had fixed trial difficulty. The nonadaptive group also had less variability in the primary outcome measures than the adaptive group (see Table 2 in [12]), perhaps making it more difficult to detect potentially meaningful correlations.
We considered the analyses reported here to be exploratory and as such, we did not adjust for multiple comparisons or tests. However, we note that all of the significant patterns of association between training behavior and outcomes (11/11 significant results) were in the expected direction (e.g., lower standard deviation of response time associated with gains in WM score or a reduction in BRIEF GEC score from baseline to end of intervention), suggesting a low likelihood of chance association. As we noted previously [12], we elected to use the nonadaptive Cogmed training as the control condition for this trial to determine efficacy. We did not include an additional comparison group (for example, treatment-as-usual or another contrast group), which is a limitation of the study. Cogmed is predominantly focused on WM as its target; although this is an important area in need of remediation for FXS patients, their deficits span a broad range of executive dysfunction. Interventions addressing multiple domains of function and perhaps in multiple contexts may be needed to produce robust effects that translate to improvements in quality of life. Although the original trial was powered to detect differences between treatment conditions, it was not powered to detect differences in associations between child/parent/home factors and clinical outcomes by treatment condition. Similarly, the training behavior groups detected in the analyses were relatively small in size, indicating limited power to assess differences in associations between these training behavior groups and clinical outcomes by treatment condition. Finally, the training behavior groups are data driven and may differ in other studies.
Conclusions
In summary, the present analysis afforded an opportunity to examine details of the cognitive training process and individual differences that are typically omitted from standard clinical trial reports. While the efficacy of Cogmed or other cognitive training programs for FXS and individuals with IDD remains an open question, the high-resolution training data we report allowed for identification of more-vs. less-responsive participants and further highlight possibilities to integrate cognitive training paradigms in treatment research for this population. Future cognitive training trial designs for FXS and IDDs should carefully consider the type of control condition utilized to examine training efficacy and perhaps more rigorous screening to ensure that participants are capable not only of performing cognitive tasks but also demonstrate capacity for making gains. The latter criterion may be analogous to determination that a drug of interest can engage its target in the population of interest before starting a trial. The results presented here lay the groundwork for a subsequent Cogmed trial that may require both a wait-list control group and a comparison condition that entails equivalent participant and caregiver contact and computer exposure but entails no cognitive training. Another potential design would be to capitalize on the demonstration that a subset of children with FXS expand their WM span over time, and pair the training with a targeted pharmacological intervention vs. placebo. In this scenario, the investigator may compare the slopes (degree of memory expansion or Cogmed indices of improvement) in those treated with medication vs. placebo to determine whether learning is enhanced by the drug. | 2020-10-06T13:34:06.448Z | 2020-09-25T00:00:00.000 | {
"year": 2020,
"sha1": "cc22c8c7bcf951f52535943b35492b583288ecbb",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3425/10/10/671/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9e4ba33c89df99a8d2ba96b291d77d0f6952827c",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
266132754 | pes2o/s2orc | v3-fos-license | Protocol for producing an adeno-associated virus vector by controlling capsid expression timing
Summary Conventional adeno-associated virus (AAV) production systems generate vast numbers of empty capsids, which should be eliminated before clinical use. Here, we present a protocol for efficient AAV vector production. We describe steps for separating replicase and capsid genes from the plasmid and controlling capsid expression until sufficient AAV vector genome replication is achieved. This protocol can produce AAV vectors in various serotypes. For complete details on the use and execution of this protocol, please refer to Ohba et al.1
2. Clone the AAV2 replicase gene downstream of the CMV promoter in a plasmid.
3. Clone each capsid gene from various serotypes downstream of a tetracycline (Tet)-regulated promoter (TetP) into plasmids.

Note: The optimal length of the additional sequence after the capsid gene depends on the AAV serotype. Either TetP configuration (TetP and repressor on two separate plasmids, or one plasmid containing both TetP and repressor) can be used.
Note: Any E. coli strain can be used for plasmid amplification. If some plasmids are not amplified efficiently, special competent cells such as NEB Stable and Stbl3 can improve plasmid yield.

5. Culture E. coli in an optimal medium volume at 37°C for 12-20 h.

Note: Amplifying some plasmids may be challenging. In this case, the culture volume and time can be increased.

6. Harvest and purify plasmid DNA from E. coli, aliquot the plasmid DNA into tubes, and store at −20°C before use.
Establishing stable cell lines (Optional)
Timing: > 2 weeks

7. Prepare HEK293 cells to 60%-70% confluency.
8. Transfect a plasmid containing the Tet repressor gene, such as pcDNA6/TR, and culture the cells for 24-48 h.
9. Change the medium to fresh culture medium containing antibiotics (Zeocin for pcDNA6/TR) and culture the cells at 37°C in a CO2 incubator for 24-72 h.
Note: Single colonies can be isolated at this time.
10. Repeat the passage and culture the cells for 2-3 weeks, then store the cells at −80°C before use.

Note: To obtain HEK293 cells with the Tet repressor gene in the genome, you must culture the cells in an antibiotic-containing medium for at least 2 weeks.
Protocol
Note: Sterilize 1× PBS-MK using a bottle-top filter (0.22 µm) and store it at room temperature (20°C-25°C) before use (maximum storage time is one year at RT).

Note: Sterilize 2 M NaCl/1× PBS-MK using a bottle-top filter (0.22 µm) and store it at room temperature (20°C-25°C) before use (maximum storage time is one year at RT).

Note: Prepare the buffer on the day of use. Add 45 µL of phenol red per 10 mL of 54% OptiPrep buffer for the 54% gradient.

Note: Prepare the buffer on the day of use.

Note: Prepare the buffer on the day of use. Add 30 µL of phenol red per 12 mL of 25% OptiPrep buffer for the 25% gradient.

Note: Prepare the buffer on the day of use.

Alternatives: Different transfection reagents will likely not impact AAV production; other preferred reagents can be used in this protocol. Cesium chloride (CsCl) gradients can be used instead of iodixanol for the ultracentrifugation purification step of AAV.
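The 54% and 25% gradient buffers above are prepared by diluting a concentrated iodixanol stock; the C1V1 = C2V2 arithmetic can be sketched with a small helper. Note that the 60% stock concentration in the example is an assumption (commercial iodixanol stock is commonly sold at 60% w/v) and is not stated in this protocol, and the function name is ours.

```python
def dilution_volumes(stock_pct, target_pct, final_ml):
    """C1*V1 = C2*V2: return (stock volume, diluent volume) in mL
    needed to prepare `final_ml` of buffer at `target_pct`."""
    v_stock = final_ml * target_pct / stock_pct
    return v_stock, final_ml - v_stock

# Assuming a 60% iodixanol stock (assumption, not from this protocol):
print(dilution_volumes(60, 54, 10))  # (9.0, 1.0): 9 mL stock + 1 mL diluent
print(dilution_volumes(60, 25, 12))  # (5.0, 7.0)
```

The same helper works for any target percentage below the stock concentration.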
STEP-BY-STEP METHOD DETAILS
Preparing HEK293 cells - Day 0

Timing: 12-24 h

1. Plate HEK293 cells in Minimum Essential Medium (MEM) containing 10% fetal bovine serum (FBS) and non-essential amino acids (NEAA). (If possible, it is better not to add antibiotics to the culture medium at this time.)

Note: The cell seeding number depends on the culture scale.

Note: Cells should reach 80%-90% confluency, as observed under a microscope, at the time of transfection.

Note: Cell condition is one of the crucial factors for AAV vector production. If the condition is poor, including abnormal cell morphology, use a new cell stock to properly produce AAV vectors.
Transfection of AAV plasmids - Day 1-5

Timing: 1 h (for step 3)
Timing: 1 h (for steps 4 to 8)

Four plasmids containing the AAV genome, adenovirus helper genes, Tet-Cap, and CMV-Rep2 are co-transfected into HEK293 cells for AAV vector production. Plasmids for conventional AAV vector transfection can be prepared as a control. For comparison, the plasmid transfection volume must be adjusted similarly by adding an empty plasmid. Any transfection reagent can be used for this protocol. The following protocol is for transfection using a Cell Factory (culture medium: 400-500 mL).

i. Prepare 50 mL (1/10 volume of the culture medium) of 150 mM NaCl solution in a 225 mL tube.
j. Add plasmids to the tube in the following ratio:
k. Add 6,825 µg of PEI-MAX (3.5-fold the total DNA amount in µg) to each tube.

Note: The volume of PEI-MAX needs to be optimized for your cell condition (e.g., transfection efficiency and cytotoxicity of the DNA and reagents) to obtain the best results.

l. Mix the samples well by inverting and incubate them for 20 min at RT.
m. Take approximately 100 mL of culture medium from the culture flask and add it to a new tube.
n. Add the plasmid mixture to the 100 mL of culture medium and mix well (the total volume is approximately 150 mL).
o. Return the mixture to the flask to transfect the plasmids into the cells and mix the medium gently.
p. Incubate the cells at 37°C for 12 h in a CO2 incubator.

Note: Cells must be adherent during this transfection step to obtain maximum AAV vector production. If some cells float during transfection, the efficacy of AAV vector production decreases.
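The PEI-MAX amount in step k scales linearly with the total plasmid DNA at the stated 3.5:1 (w/w) PEI:DNA ratio, so it can be computed with a one-line, unit-agnostic helper (the function name is ours; only the ratio comes from this protocol):

```python
def pei_max_amount(total_dna, ratio=3.5):
    """PEI-MAX amount for a given total plasmid DNA amount, using the
    3.5:1 PEI:DNA (w/w) ratio from this protocol. Unit-agnostic: pass
    DNA in ug to get PEI-MAX in ug."""
    return total_dna * ratio

# The Cell Factory amount above implies 6825 / 3.5 = 1950 units of total DNA:
print(pei_max_amount(1950))  # 6825.0
```

When you re-optimize the ratio for your own cells (as the note above suggests), pass it via the `ratio` argument.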
4. At 12 h post-transfection, prepare fresh, warmed complete culture medium.
5. Add doxycycline (Dox) to the fresh medium at a final concentration of 2 µg/mL.
6. Discard the culture medium from the flask.
7. Add the fresh complete medium containing Dox to the flask.
8. Incubate the cells at 37°C in a CO2 incubator for 3 days (small scale) or 4 days (large scale).

Note: Generally, adding Dox at 10-15 h post-transfection is advantageous for efficient AAV vector production. However, vector production efficiency may depend on the cell condition and experiment.
Harvest cells and supernatants - Day 5

Timing: 1-2 h

9. Prepare the AAV lysis buffer, 1× phosphate-buffered saline (PBS), chilled 5 mM EDTA/PBS solution on ice, and 40% PEG solution.

Note: Only the supernatant is needed to harvest the AAV vector in the culture medium at this point. Detached cells may exist in the supernatant at this step. These cells can be separated from the supernatant by centrifugation, and the cell pellet can be mixed with the sample at step b-5.

g. Carefully wash the cells using 1× PBS while avoiding detaching the cells, and discard the PBS.

Note: If many detached cells are present at this step, collect the samples in a tube and mix them according to step b-5.

h. After the PBS wash, add 100 mL of ice-chilled 5 mM EDTA/PBS solution to the cell flask, and incubate the cells at RT for 10-15 min to detach the cells.
i. Collect the solutions containing cells in a tube, centrifuge at 2,000-3,000 × g for 10 min, and then discard the supernatant.
j. Wash the flask with 100 mL PBS and collect the cells in tubes.
k. Repeat step b-5 twice to collect the cells from the flask completely.
l. Centrifuge the samples at 2,000-3,000 × g for 10 min, and then discard the supernatant.

Note: If the entire volume of supernatant cannot be collected in one tube, centrifugation can be performed separately for each wash sample.

AAV vector extraction and purification - Day 5-6

Timing: 1-2 days (for steps 11 to 34)

For AAV vector extraction and purification, you can use globally standard methods.10,11 The ultracentrifugation steps to purify AAV vectors require the items described in Figure 1A and the key resources table.
Extract AAV vectors

11. Prepare a 37°C water bath and liquid nitrogen (or cold ethanol with dry ice).
12. Incubate the frozen samples in the 37°C water bath for 10-15 min until they have thawed entirely.
13. Mix the samples well by vortexing for 10-20 s.
14. Then place the samples in liquid nitrogen (or cold ethanol with dry ice) for 20 min until completely frozen.
15. Repeat steps 12-14 three more times (four times in total).
16. Centrifuge the samples at 5,000-8,000 × g for 10 min to pellet the cell debris.
17. Transfer the supernatants to new tubes (the supernatants comprise the crude AAV vector solutions).

Note: For small-scale preparations, you can filter the supernatants using a 0.22 µm syringe-top filter to eliminate cell debris after centrifugation (step b-16).

Note: Normally, centrifugation is performed for 12-15 h (overnight).
24. Check the layers after ultracentrifugation and collect the entire second layer from the bottom (Figures 1B and 1C), which contains full and empty AAV vectors, with a syringe.

Note: The second layer contains full (containing the AAV genome) and empty AAV particles. To select the portion containing full AAV vectors, fractionation of the second layer can be performed.

Note: If further purification is needed, re-ultracentrifugation can be performed using the samples from this step.
25. Add 1 volume of AAV lysis buffer to dilute the sample.
26. Prepare an Amicon Ultra 100 kDa MWCO column (4 mL or 15 mL, Merck) by adding 4-15 mL of AAV lysis buffer to pre-wash the membrane, and then centrifuging the column at 3,000 × g for 5 min.
27. Discard the flow-through, and immediately add the sample (from step 25) to the pre-washed column.
28. Centrifuge the sample at 3,500-4,000 × g for 40-60 min.

Note: Samples are sticky and may not easily pass through the column. In this case, increase the centrifugation time until most of the sample has passed through the column.

Note: Adjust the optimal remaining volume of solution in the column via the centrifugation time.

31. (Optional) Wash the column by repeating steps 29-30.
32. Collect the remaining solution above the column membrane, which contains the AAV vectors.
33. Filter the sample further using a 0.45 µm syringe filter.
34. Aliquot the AAV vector solutions into tubes and keep them at −20°C or −80°C before use.

Note: Because AAV vectors are usually stable, they can be stored at 4°C for short periods.

Pause point: You can stop the experiment after finishing this step.
Timing: 3-4 h

To use the produced AAV vector in various assays, you need to determine the AAV vector titer using quantitative polymerase chain reaction (qPCR) or other methods.

35. Prepare the qPCR reagent, primer sets targeting the inverted terminal repeat (ITR), a standard (quantified AAV vector, plasmid containing the AAV genome, or ITR fragment), and the samples.
36. Prepare 10^7, 10^6, 10^5, 10^4, 10^3, and 10^2 vg/mL standard samples and a negative control.
37. Place the standard and samples in separate wells of a PCR plate in duplicate or triplicate, and seal the plate.
38. Incubate the plate at 95°C for 10 min to denature the AAV vectors.

Note: Instead of DNA plasmids or fragments, we recommend using viral particles for the standard and denaturing them together with the samples in the same plate, because the efficiency of AAV genome release from particles and capsid proteins in the reaction may affect the qPCR result.

39. Prepare the qPCR master mix, including primers, according to the manufacturer's instructions. (Example: TB Green Premix Ex Taq II, https://www.takarabio.com/documents/User%20Manual/RR82LR/RR82LR_UM.pdf)

Note: The concentrations of the samples in this protocol are a 1/10 dilution of the original samples.

40. Perform the qPCR reaction according to the manufacturer's instructions.

Note: The reaction conditions depend on the reagents and instrument. Fast and conventional PCR cycles are both applicable.
Analyze AAV vector titer in samples.
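Titer analysis from the qPCR run typically fits a standard curve of Ct versus log10 copies of the 10^7-10^2 standards and then back-calculates each sample, correcting for the 1/10 sample dilution noted above. A minimal sketch; the function names and the example Ct values are illustrative assumptions, not values from this protocol:

```python
def fit_standard_curve(log10_copies, ct_values):
    """Least-squares fit of Ct = slope * log10(copies) + intercept."""
    n = len(log10_copies)
    mx = sum(log10_copies) / n
    my = sum(ct_values) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(log10_copies, ct_values))
    sxx = sum((x - mx) ** 2 for x in log10_copies)
    slope = sxy / sxx
    return slope, my - slope * mx

def titer_vg_per_ml(ct, slope, intercept, dilution_factor=10):
    """Back-calculate the original sample titer from its Ct, treating the
    fitted copies as vg/mL of the diluted sample and applying the 1/10
    sample dilution factor noted in this protocol."""
    return 10 ** ((ct - intercept) / slope) * dilution_factor

# Illustrative standards: a perfect curve with slope -3.32 (~100% efficiency)
logs = [7, 6, 5, 4, 3, 2]
cts = [15.00, 18.32, 21.64, 24.96, 28.28, 31.60]
slope, intercept = fit_standard_curve(logs, cts)
print(titer_vg_per_ml(18.32, slope, intercept))  # ~1e7 (10**6 measured, x10)
```

Duplicate or triplicate wells (step 37) would be averaged per sample before back-calculation.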
Note: After determining the AAV vector titer, these vectors can be used for various assays such as enzyme-linked immunosorbent assay (ELISA), western blots, immunoprecipitation, infection studies on cells, and injection into mouse models.

STAR Protocols 4, 102542, December 15, 2023

Note: After aliquoting, the AAV vectors can be kept at −20°C or −80°C for long-term storage, or at 4°C for short-term storage (1-2 weeks), before use.
EXPECTED OUTCOMES
We show a detailed step-by-step protocol to efficiently produce AAV vectors by controlling capsid (Cap) expression timing. Compared with the conventional system, we have successfully produced AAV vectors of high quantity and quality using the tetracycline promoter-controlled Cap expression (Tet-Cap) system. Furthermore, our protocol applies to most AAV serotypes. AAV vectors produced by the Tet-Cap system have cell infectivity and tissue distribution in mice similar to those from the conventional system,1 indicating that the change of protocol does not affect AAV vector function (Table 1).

Many studies have attempted to generate Cap-mutated AAV vectors to modify tissue tropism. However, introducing mutant viral components, including capsid genes, sometimes reduces viral fitness, including a low viral yield. In contrast, our protocol can increase vector quantity (Figures 2A and 2B) and quality (Figure 2C), even for Cap mutants such as PHP.eB and PHP.S,12,13 indicating that this method is potentially applicable to various AAV mutants to improve AAV vector production. However, using AAV vectors for gene therapy has a high cost due to the large numbers of empty capsids generated during vector production, which require multiple purification steps before clinical use.14,15

Table 1. Comparison summary between the present protocol and conventional methods

Summary of results:
- Increase in total AAV vector yield:1 2-10-fold
- Full (AAV genome-incorporating)/empty vector ratio: 10%-50% improvement

Our protocol can efficiently produce AAV vectors, increase the total yield by approximately 2-10-fold, and reduce the empty capsid ratio. Our system could be used as a bioreactor system to increase the production scale.16 Thus, we expect that our system can help reduce the cost of gene therapy in the future.
LIMITATIONS
Our protocol can increase the total AAV vector yield and improve its quality compared to conventional AAV production systems. However, because the improvement in AAV vector production is based on comparing the Tet-Cap system with a commercialized AAV vector production system, it is unclear whether the Tet-Cap system has an advantage over modified AAV vector production systems. Additionally, the cell condition in each laboratory may differ, for example in the number of passages and the cell culture medium. Because cell condition is a crucial factor for viral production, the improvement in AAV vector production will vary between laboratories. Moreover, the different techniques used to purify AAV vectors in each laboratory may affect the AAV vector yield results.
TROUBLESHOOTING

Problem 1
Cells detach from the plate after transfection for AAV vector production (refer to steps 3 to 8).

Potential solution
Plasmid DNA and the proteins expressed from plasmids are sometimes toxic to cells. Therefore, optimizing the total plasmid amount for transfection to suit your laboratory's cells could reduce cell detachment.

In addition, the characteristics of HEK293 cells may diverge from the original cells after long-term culturing or after changing culture conditions, such as using Dulbecco's Modified Eagle Medium (DMEM). Therefore, this protocol recommends using appropriate culture conditions, such as MEM supplemented with 10% FBS and NEAA, the original culture medium for HEK293 cells. In some cases, you may obtain results similar to our paper1 if the cells used for AAV vector production are cultured with a different medium, such as DMEM supplemented with 10% FBS. However, if you cannot obtain a similar result, using the original culture conditions for HEK293 cells is recommended for this protocol. Additionally, long-term culturing in different media results in different cell characteristics, which may not recover to the original characteristics. In this case, using original cells and proper culture conditions as far as possible is strongly recommended for this protocol.
Problem 2
No improvement in the total yield and full/empty particle (E/F) ratio of AAV vectors is observed (refer to step 40 and expected outcomes).

Potential solution
Usually, HEK293 cells are adherent and must be in an active state for AAV vector production. As in the solution for problem 1, transfection conditions need to be optimized for efficient AAV vector production. Additionally, because viruses are generated in cells, the cellular machinery for viral production must be intact. Therefore, cell condition is crucial for producing AAV vectors efficiently. In addition to culture conditions, cells must remain healthy and adherent during AAV vector production for as long as possible. If the cell condition is poor, preparing healthy cells from original cell stocks or obtaining cells from companies or cell banks would improve the results.

HEK293 cells grown in suspension can be used for this protocol. However, because the AAV vector production system using suspension HEK293 cells is already more efficient than the conventional system, the improvement in total yield and E/F ratio using this protocol may be small.
m. Discard the supernatant and keep the pellets on ice or at −80°C before use.
n. Centrifuge the supernatant-PEG mix (after the 4-h 4°C incubation) from step b-2 at 2,000-3,000 × g for 20 min.
o. Discard the supernatant, and dissolve the supernatant pellet in 8-10 mL of AAV lysis buffer.
p. Add the dissolved samples (step b-11) to each tube containing a cell pellet from step b-9, and store the samples at −80°C before use.

Pause point: You can stop the experiment after freezing the samples.

18. Digest free DNA in solution with a DNase treatment. [Small scale]
a. Prepare a DNase solution, such as TURBO DNase, according to the manufacturer's instructions.
b. Take 4 µL of sample and digest free DNA in a 20 µL reaction volume. Example: TURBO DNase or TURBO DNA-free kit (Thermo Fisher).
c. Incubate the samples at 37°C for at least 4 h.
d. After the reaction, inactivate the DNase according to the manufacturer's instructions. (For example, inactivate the DNase by adding 1 µL of inactivation reagent from the TURBO DNA-free kit and mixing the samples well. Centrifuge the samples at 8,000-10,000 × g for 5 min and then collect the supernatants into a new tube to eliminate the DNase.)
e. Add 1 volume of water to dilute the DNase-treated samples.

Note: Samples are diluted to 1/10 of the initial concentration.
Figure 1. Ultracentrifugation for AAV vector purification. (A) Items used for ultracentrifugation to purify AAV vectors. (B) Diagram of the layers containing AAV vectors before and after ultracentrifugation. (C) The position of the needle used to collect AAV vectors after centrifugation.
a The differences depend on serotypes. b AAV9 distribution in mice.
Figure 2. The present protocol applies to capsid mutants. (A and B) AAV vector yield for the PHP.eB and PHP.S capsid mutants using the Tet-Cap system: the fold difference in AAV vector yield for AAV PHP.eB (A) and AAV PHP.S (B) with the medium change and doxycycline (Dox) stimulation at 12 h post-transfection. The data were normalized to the qPCR value of the normal samples (RC; conventional system). Graphs and statistical analyses were produced using GraphPad v8 software from three independent experiments. The asterisks indicate ** = p < 0.01, representing statistical significance by t-test with Welch's correction. Error bars indicate the standard error of the mean (SEM). (C) Western blot of Cap proteins after immunoprecipitation of AAV PHP.eB and PHP.S. The same titer of AAV vector (5 × 10^8 vg/sample for PHP.eB and 2 × 10^8 vg/sample for PHP.S), calculated using qPCR, was subjected to immunoprecipitation using ADK8/9 antibodies and protein A/G magnetic beads before western blotting. The panel shows a representative western blot image.
All plasmids are added in a tube containing 150 mM NaCl solution. c. Collect the cells and supernatant in a tube. d. Store samples at −80°C before use. Large scale. e. Prepare the tubes for the samples. f. Collect the culture supernatants in tubes and add 1/4 volume of 40% PEG solution, and store the mixture at 4°C for at least 4 h after mixing well, until AAV vector collection from the supernatants in subsequent steps.
10. Harvest cells and supernatants. Small scale. a. Prepare the tubes for the samples. b. Pipette the culture medium to detach all cells from the plate.
Because the AAV vector yield at small scale is low, samples taken before DNase treatment (step 17) can be used for various assays. However, the DNase treatment step (step 18) is required to check the AAV vector titer in solution (Quantification; steps 34-40).
Association of electronic learning devices and online learning properties with work-related musculoskeletal disorders (WMSDs): A cross-sectional study among Thai undergraduate students
Computers and mobile devices are becoming the primary instruments used by students worldwide in all facets of their working and learning activities. This study aimed to investigate the relationship between the use of electronic devices, the characteristics of learning properties, and the potential predictors of work-related musculoskeletal disorders (WMSDs) among Thai undergraduate students. In this cross-sectional study, data were collected using Microsoft Forms with an online self-administered scale. The research instrument comprised four categories: demographic and health history characteristics, online learning properties, psychological health, and perceived WMSDs. Using multistage sampling, 4,618 samples were collected from 18 schools nationwide. A total of 3,705 respondents were eligible for the analysis. Descriptive statistics, chi-square, and binary logistic regression analyses were used for the data analysis. The results showed that the majority of the respondents had online learning only in some semesters/subjects (67.3%), used mobile phones for learning (43.3%), had an appropriate desk workstation (66.1%), used non-office chairs (76.0%), spent prolonged periods sitting (91.6%), had a bent posture while sitting (78.2%), had a private working space/room (92.4%), had proper lighting (85.4%), and experienced normal levels of stress (81.1%). Overall, 42.1% of Thai university students experienced WMSDs in any area of the body in the prior 6 months. Six significant predictors (p < 0.05) of WMSDs were obtained from the multivariate analysis, including stress, use of electronic devices, bent posture, prolonged sitting, year of study, and online learning classes (the adjusted odds ratios ranged from 1.43 to 3.67). High-risk students who mostly used mobile learning devices should be prescribed interventions to reduce stress, develop postural awareness and skills, emphasize positioning solutions, and reduce extended sitting time.
The results indicate that preventive measures are warranted, since the identified risk predictors are preventable.
Introduction
The global trend of online learning in higher education institutions has gained momentum, especially with recent advancements in technology and the COVID-19 pandemic. However, this shift towards online education has raised concerns regarding its potential impact on the musculoskeletal health of students. Prolonged sitting and improper ergonomics while using computers or digital devices have been associated with musculoskeletal disorders (MSDs) such as neck and shoulder pain, backache, and wrist strain [1][2][3]. A previous study revealed that the majority of computer users who spent at least 2 h per day sitting (97%), sat with the back bent (40.8%), performed activities in a fixed position (52.1%), and took no work breaks (95.1%) suffered from MSDs in a single region of the body (77.6%) [4]. The lack of an appropriate workspace setup, inappropriate chairs, and students' limited awareness of ergonomic principles contribute to an increased risk of developing MSDs [5].
Musculoskeletal problems are the most common type of illness associated with work and are the primary cause of work absences or disabilities [6]. MSDs are soft tissue injuries of the muscles and tendons of the musculoskeletal system that can occur suddenly or gradually due to force, repetitive motion, vibration, and awkward posture [7]. They may affect various body parts and cover all types of illness, from minor, transient conditions to irreversible, incapacitating injuries, with substantial costs and impacts on the quality of life [3,6,7]. Working environments are widely acknowledged to significantly contribute to the onset and persistence of MSDs, which have been shown to have multiple etiologies [3,6]. As such, students who participate in online learning, like other kinds of computer and mobile device users, are exposed to work risk factors daily, which can be related to some adverse physical and psychological health problems [8][9][10].
For undergraduate students in Thailand, classes are typically delivered in person rather than online [11]. However, before the current easing of the COVID-19 situation, Thai undergraduate students faced a year of online learning [12]. Online learning requires that students spend a considerable amount of time on digital devices. Previous studies have highlighted that prolonged online learning can contribute to the development of poor physical posture among students, characterized by hunched back and poor neck postures. According to Yaseen and Salah, university students use laptops or tablets for learning for an average of 6 h per day [13]. Moreover, other studies found that almost 80% of undergraduate students experienced MSDs during the COVID-19 pandemic [13,14]. They stated that taking proper precautions with postural balance might have helped the exposed students to prevent a considerable percentage of MSDs [6].
Studies have shown that students from disadvantaged backgrounds faced major postural challenges due to the unavailability of suitable space, furniture, internet connectivity, a separate room, and convenient technological devices, which compelled them to utilize electronic gadgets in poor body positions or on the floor, increasing the risk of musculoskeletal problems in the younger population [5,15]. However, the physical learning environment of learners participating in online learning activities has rarely been investigated [16]. Additionally, previous studies have predominantly focused on identifying physical factors as primary risk factors for MSDs, neglecting to include questions pertaining to psychosocial factors [2,17]. Furthermore, previous studies on these problems have commonly employed a standardized Nordic questionnaire that focused on identifying the presence of musculoskeletal pain or discomfort without incorporating an additional assessment of pain intensity [2,14]. The assessment of outcomes should incorporate the characterization of pain levels and risk perception to ensure a comprehensive approach to capturing work-related MSDs (WMSDs) [18,19]. These gaps in the existing literature represent a significant limitation, particularly considering the crucial association between specific risk factors and the development of WMSDs among undergraduate students [13]. Hence, the primary objective of this study was to determine the prevalence rate of WMSDs among Thai undergraduate students. Additionally, we aimed to examine the comprehensive association between students' postural imbalance, electronic device usage, working environment, and psychological stress, as well as potential predictors of the specific risk factors for WMSDs. Preventing WMSDs among students is crucial because many of these disorders can be avoided. An effective prevention strategy, particularly targeting conditions that contribute to pain onset in this specific population, can be developed from the results of this study.
Ethical approval
The study protocol adhered to the ethical guidelines and regulations of the Declaration of Helsinki. The primary ethical approval for the study protocol was obtained from the Walailak University Institutional Review Board (Ref. No. WUEC-22-007-01). The Khon Kaen University Center for Ethics in Human Research (Ref. No. HE652094) also gave its approval.
Study design
This study was a component of a larger research project titled "Effects of e-learning during the COVID-19 pandemic on the prevalence and factors associated with musculoskeletal disorders (MSDs) among Thai, Indonesian, Vietnamese, and Laos faculty members and students." Since cross-sectional studies analyze data from a population at a point in time, this study was designed to be conducted from April to June 2022, when countrywide lockdowns and social gathering prohibitions were enacted due to the coronavirus outbreak. The study was conducted at 18 educational institutes providing bachelor's degrees in nursing, accredited by the Thailand Nursing and Midwifery Council.
Population and sample size determination
The study population included Thai undergraduate nursing students nationwide. An infinite population proportion formula was used to obtain the sample size [20] (p = 0.70 [21], d = 0.02, and z = 1.96), giving a required sample size of at least 2,017. To allow for a low and inconsistent response rate, 4,618 samples were targeted [5]. According to the simplest cases-to-IVs rule for the logistic analysis planned for use, the number of cases should be greater than 50 + 8m, where "m" represents the number of independent variables (IVs) [22]. Thirteen IVs were used in this study; hence, 4,618 cases exceeded the threshold of 154.
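The sample-size arithmetic above can be checked directly: n = z²·p·(1−p)/d² with the stated parameters, plus the 50 + 8m rule for the logistic model. A quick sketch using only values reported in the text (the function names are our own):

```python
import math

def cochran_n(p, d, z):
    """Infinite-population proportion formula: n = z^2 * p * (1 - p) / d^2."""
    return math.ceil(z ** 2 * p * (1 - p) / d ** 2)

def min_cases_for_logistic(m):
    """Rule of thumb for logistic regression: cases should exceed 50 + 8m."""
    return 50 + 8 * m

n = cochran_n(p=0.70, d=0.02, z=1.96)
print(n)                           # 2017, matching the reported minimum
print(min_cases_for_logistic(13))  # 154; the 4,618 cases comfortably exceed this
```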
Inclusion criteria for the sample. Undergraduate nursing students, both female and male, aged 17-25 years, enrolled for at least 6 months, and willing to participate in this study were included as suitable participants. However, female participants prevailed over male participants because of the enrollment status of females in undergraduate nursing courses in Thailand.
Exclusion criteria for the sample.The exclusion criteria were pregnant women; women within a year postpartum; and those with a history of kidney disease, spinal deformities, gout, rheumatoid arthritis, deformities, and back surgery.
Sampling technique.
Based on the 2021 Thailand Nursing and Midwifery Council database [23], 96 nursing institutes were distributed across five regions. Using multistage sampling, two of the five regions were selected in the first step, namely the southern and northeastern regions. In these regions, there are 37 nursing institutes with three affiliations, including the Ministry of Education, the Ministry of Public Health, and the Private Sector. Fifteen nursing institutions were selected using a non-proportional stratified sampling technique. In addition, three nursing faculties were conveniently sampled from the central region, reaching a total of 18 faculties. The 18 institutions sampled had 8,534 students. Research partners at the 18 institutions were able to provide an information sheet and a method for providing an online voluntary consent form by clicking on a voluntary response box for 5,395 students. Of the 5,395 students, 5,136 1st-4th-year students indicated a willingness to voluntarily participate in this study. Code numbers were created to safeguard students' privacy. Of the 5,136 students, simple random sampling was performed to sample 4,618 1st-4th-year undergraduate students (Fig 1).
Only the authors of this study had access to individual students' data.
Measurement and data collection
Data collection through Microsoft Forms. The research instruments used in this study were developed based on previously validated instruments [24][25][26][27]. These instruments were adapted and designed as an online self-administered scale using Microsoft Forms (MS Forms). The scale comprised four sections: demographic and health history characteristics, online learning properties, psychological health, and perceived WMSDs. The research instrument was considered content valid using the index of item objective congruence (IOC) by three experts, whose IOCs ranged from 0.67 to 1.00.
Data collection through demographic and health history approach. For demographic information, questions on sex (male or female), age (in years), study year (1st, 2nd, 3rd, or 4th year), body weight, and height were asked. Using height and weight, the body mass index (BMI) was determined and classified into three groups: underweight (<18.5 kg/m²), normal (18.5-22.9 kg/m²), and overweight/obese (≥23.0 kg/m²) [28]. In terms of health history, the questionnaire included inquiries about the following conditions: pregnancy status (yes or no); being one year postpartum (yes or no); and history of operation, deformity, or disease (yes or no). Participants with a history of illness were asked to identify the specific conditions they had experienced. These conditions included kidney disease, spinal deformities, gout, rheumatoid arthritis, other deformities, and prior back surgeries.
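The BMI grouping above uses the cut-offs stated in the text (per [28]); encoding it is straightforward. A minimal sketch, with the function name and example values being our own illustrations:

```python
def classify_bmi(weight_kg, height_m):
    """Classify BMI using the cut-offs stated in the text:
    underweight < 18.5, normal 18.5-22.9, overweight/obese >= 23.0 (kg/m^2)."""
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        return "underweight"
    elif bmi < 23.0:
        return "normal"
    return "overweight/obese"

print(classify_bmi(55, 1.60))  # 55 / 2.56 = 21.5 kg/m^2 -> "normal"
```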
Data collection through online learning properties. The following variables related to online learning properties were assessed in this study:
1. Online learning classes: The participants were categorized based on whether they engaged in online learning for only a few semesters/subjects or the entire academic year.
2. Types of most frequently used electronic devices: The participants identified the electronic devices they most frequently used for learning activities, including mobile phones, iPads/tablets, notebooks, or personal desktop computers.
3. Appropriateness of the desk workstation: The participants assessed the suitability of their most frequently used desk workstation in terms of width, depth, and height compared to their own body, categorizing it as either appropriate or inappropriate.
4. Use of a non-office chair: The participants specified whether they used a non-office chair (e.g., backrest chair, stools, or floors) or an office chair (e.g., chair with backrest and armrests) for their learning activities.
5. Prolonged sitting: The participants indicated whether they engaged in continuous sitting for 2 h or more per day by selecting either yes or no.
6. Bent posture: The participants reported whether they maintained a bent posture continuously for 2 h or more per day, choosing either yes or no.
7. Use of a working space: The participants disclosed whether they had a designated working space, categorizing it as a private working space/room or none.
8. Perceived proper lighting: Participants assessed the adequacy of lighting in their learning environment, classifying it as either proper or improper.
Data collection through psychological health.
Respondents' stress levels were assessed using the Depression Anxiety Stress Scale (DASS-21). The psychometric properties of this scale have been validated across cultures [25]. To complete the scale, the participants were required to identify the symptoms they had experienced in the preceding week. Each item on the stress scale was rated on a scale of 0 (did not apply to me at all over the past week) to 3 (applied to me very much or most of the time over the past week). The stress scale consisted of seven items, yielding a total score ranging from 0 to 21. The stress levels were categorized into five groups based on their scores: "normal" (0-7), "mild" (8-9), "moderate" (10-12), "severe" (13-16), and "extremely severe" (17+) [26]. The internal consistency of the stress scale was assessed using Cronbach's alpha coefficient. In the pilot testing phase of this study, conducted with a similar sample of 30 undergraduate students, Cronbach's alpha coefficient for the stress scale was 0.91.
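The scoring rule above (seven items rated 0-3, summed, then banded) can be sketched as follows; the function name is illustrative, and the cut-offs are exactly those quoted from [26]:

```python
def stress_category(item_scores):
    """Sum seven DASS-21 stress items (each rated 0-3) and band the total
    using the cut-offs quoted in the text."""
    if len(item_scores) != 7 or not all(0 <= s <= 3 for s in item_scores):
        raise ValueError("expected seven item scores, each in the range 0-3")
    total = sum(item_scores)
    for upper, label in [(7, "normal"), (9, "mild"), (12, "moderate"), (16, "severe")]:
        if total <= upper:
            return label
    return "extremely severe"

print(stress_category([1, 1, 0, 2, 1, 0, 1]))  # total 6 -> "normal"
```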
Data collection using the Nordic Musculoskeletal Questionnaire. The Nordic Musculoskeletal Questionnaire (NMQ), which normally records MSDs in nine different body regions (i.e., feet/ankles, knees, buttocks/hips/thighs, lower back, upper back, hands/wrists, elbows, shoulders, and neck) with prevalence in the previous 7 days or 12 months [27,29], was employed to collect data. In this study, the NMQ was expanded to encompass four additional body regions: the upper arm, forearm, fingers, and lower leg. Participants were requested to report any MSDs in any of the mentioned body regions in the past 6 months. The NMQ is a popular, accurate, and trustworthy instrument for musculoskeletal surveillance and exposure evaluation [30]. The pain level was determined using a numerical rating scale (NRS). The NRS normally consists of a list of numbers with verbal anchors ranging from 0 to 10 and indicates the full conceivable range of pain intensity [31]. The respondents were also asked whether their working or learning activities were related to musculoskeletal pain or discomfort (yes/work-related or no). Moreover, the respondents reported WMSDs when they had MSDs with a pain level of at least 4/10 at any site and acknowledged that these pains were related to work or learning.
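The WMSD case definition above combines two conditions: an NRS pain rating of at least 4/10 at any of the 13 body regions, and the respondent attributing that pain to work or learning. A hedged sketch of that rule (the data structures are our own, not part of the study instrument):

```python
def is_wmsd_case(region_pain, work_related):
    """region_pain: dict mapping body region -> NRS pain score (0-10).
    work_related: whether the respondent attributed the pain to work/learning.
    A case requires pain >= 4 at any site AND a work/learning attribution."""
    return work_related and any(score >= 4 for score in region_pain.values())

pain = {"neck": 5, "shoulder": 3, "lower back": 0}
print(is_wmsd_case(pain, work_related=True))   # True: neck pain is 5/10
print(is_wmsd_case(pain, work_related=False))  # False: no work attribution
```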
Recruitment of participants
The principal researcher obtained formal authorization from the deans or directors of each nursing institute through a letter addressed to them. Additionally, the researchers established informal contacts with research collaborators responsible for data collection at each institute. The dean or director of each institute received a list of research partners associated with their respective institutes. To ensure effective communication and collaboration, the researchers conducted telephone or online meetings with 41 research colleagues from these institutes. These meetings served as a platform for discussing various aspects of the research project, including the information sheet, online informed consent process, and data collection methods. The recruitment process for student participants was conducted by research partners from the participating universities, starting from various dates between April and June 2022, which coincided with the last semester of the 2021 academic year. A total of 8,534 enrolled undergraduate students who had been studying at the participating universities for a minimum of 6 months were invited to participate in the study. The recruitment of 5,395 students was successfully carried out by representatives from each institute utilizing social media platforms, including Facebook, Line, and Zoom meetings. These channels served as effective means to reach out to potential participants and communicate the study's objectives and requirements. Students were given access to the participant information sheet and consent form through a link or QR code. All students who consented to participate in the study checked the "I accept to participate" checkbox online before the survey began. Of the 5,136 students who completed the online consent form, a simple random selection was conducted, resulting in a final sample of 4,618 students.
Statistical analysis
Data cleaning was used to minimize statistical analysis errors (n = 4,618 respondents). This technique involved checking the response consistency and excluding cases not meeting the eligibility criteria. Descriptive statistics, such as frequencies and percentages, were used to provide an overview of the sample and variables in the study. Quantitative data were expressed as mean ± standard deviation. To examine bivariate associations, a chi-square test was used. The degree of the relationship was assessed using crude odds ratios (CORs) and 95% confidence intervals (CIs). The adjusted odds ratios (AORs) and 95% CIs for multivariate variables were determined using binary logistic regression analysis. Model fitting was evaluated using the Hosmer-Lemeshow goodness-of-fit test. The assumptions of the chi-squared and binary logistic regression analyses were carefully examined. All statistical analyses were performed using the Statistical Package for the Social Sciences (SPSS, Version 23.0, IBM Corporation, Chicago, IL, USA).
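For readers outside SPSS, the crude odds ratio and its Wald 95% CI for a 2×2 exposure-by-outcome table can be computed by hand: OR = ad/bc and CI = exp(ln OR ± z·√(1/a + 1/b + 1/c + 1/d)). A minimal sketch with hypothetical counts (not the study's data):

```python
import math

def crude_or(a, b, c, d, z=1.96):
    """2x2 table: a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases.
    Returns (OR, lower 95% CI, upper 95% CI) via the Wald method."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# hypothetical table: 20/100 exposed vs 10/100 unexposed have the outcome
or_, lo, hi = crude_or(20, 80, 10, 90)
print(round(or_, 2))  # 2.25
```

An interval spanning 1 (as in this hypothetical example) corresponds to a non-significant association at the 0.05 level, which is how the non-significant predictors in Table 2 would be read.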
Characteristics of participants
This study enrolled 3,705 participants. A large proportion of participants were female (94.2%). Of the participants, 35.4% were first-year nursing students. A significant proportion of the participants were aged 20 years or older (69.3%). Approximately 51.4% of the participants had a normal BMI, with a mean BMI of 21.3 ± 3.9 kg/m². These data have been partially presented in our previous publication [32].
Online learning risk factors and WMSDs regarding the participants (n = 3,705)
In Thailand's 2021 academic year, most respondents engaged in online learning only in some semesters or subjects (67.3%), while 43.3% used a mobile phone and 41.2% used an iPad or tablet. Some respondents reported that they had an appropriate desk workstation (66.1%) and used non-office chairs (76.0%). Most of the respondents spent prolonged periods sitting (91.6%), had a bent posture while sitting (78.2%), had private working spaces or rooms (92.4%), had proper lighting (85.4%), and experienced normal stress levels (81.1%). Over the past 6 months, 42.1% of the Thai university students experienced WMSDs in any region of their body. Two-thirds of the undergraduate students reported WMSDs in the neck (69.1%) and shoulder (62%) regions. Approximately half of the students claimed to have WMSDs, with 55.9% and 52.6% reporting that they experienced the disorders in the lower and the upper back regions, respectively (Table 1).
Association between the related risk factors and WMSD prevalence
The effects of the variables on the association of WMSDs are summarized in Table 2. The relationship between the risk factors and the occurrence of musculoskeletal issues was examined using a chi-square test. Logistic regression analysis was performed to determine the association between the two variables for the CORs. The results showed that age, year of study, online learning classes, electronic devices, desk workstations, prolonged sitting, bent posture, lighting, and stress were strongly associated with WMSDs among Thai undergraduate students. The CORs of the nine positive predictors ranged from 1.27 to 4.57. The most substantial prevalence of subcategories in each significant variable associated with WMSDs was in the respondent's subgroup, including being aged between 18-19 years (49.6%), studying in the first year (49.4%), enrolling in an online learning class at 100% (51.1%), using an iPad or tablet (46.0%), adopting an inappropriate workstation (46.1%), adopting prolonged sitting periods (43.6%), sitting with a bent posture (46.9%), having improper lighting (49.6%), and experiencing extremely severe levels of stress (73.3%). Four variables showed non-significant associations with WMSDs: sex, BMI, type of chair, and working space. The findings revealed that female and male students with WMSDs comprised 42.5% and 36.3% of the study participants, respectively. Moreover, we found that university students with WMSDs who were overweight or obese, underweight, or had a normal BMI comprised 43.3%, 43.0%, and 41.1% of the study population, respectively. The university students who had WMSDs and sat on office and non-office chairs comprised 44.4% and 41.4% of the study population, respectively. The participants who had WMSDs and worked in non-private and private offices comprised 45.6% and 41.9% of the study population, respectively (Table 2).
Predictors of WMSDs
The predictive factors of the different subgroups of WMSDs and their predictive values in the logistic model are presented in Table 3. The predictive risk factors for WMSDs among nine significant independent variables, including stress, electronic devices, bent posture, prolonged sitting, year of study, age, online learning class, desk workstation, and lighting, were evaluated after assessing multicollinearity and multivariate variables using binary logistic regression. In the logistic regression analysis, dummy variables were created for polychotomous variables, including the study year, type of electronic device, and stress. The results showed that six predictors, including stress, electronic devices, bent posture, prolonged sitting, study year, and online learning class, were associated with WMSDs. However, three predictors, including age, desk workstation, and lighting, were not included in the logistic model. The AORs of the six positive predictors ranged from 1.43 to 3.67. The participants in the extremely severe, severe, moderate, and mild stress groups were 3.67, 2.86, 2.56, and 2.11 times more likely to experience WMSDs, respectively, than those who experienced normal levels of stress. The respondents who used iPads/tablets, mobile phones, and notebooks for e-learning were 2.84, 2.47, and 2.14 times more vulnerable to developing WMSDs, respectively, than personal desktop computer users, as was evident from their AOR values. Similarly, participants who adopted a bent posture and prolonged sitting were 2.32 and 1.56 times more likely to develop WMSDs, respectively, than those who adopted non-bent and non-prolonged postures. Fourth-year students were 1.55 times less likely to develop WMSDs than first-year students (AOR = 1.55, 95% CI 1.25-1.91). We found that WMSDs were more prevalent when learning was conducted entirely online than when it was conducted partially online. The findings (constant of -2.776) showed that the data fit the logistic model (p = 0.164), and the classification accuracy of the risk prediction model was 64.7%.
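As a sanity check on the reported model constant: a binary logistic model predicts p = 1/(1 + e^(−(b₀ + Σbᵢxᵢ))), where each coefficient bᵢ equals ln(AORᵢ). With all predictors at their reference levels, the constant of −2.776 alone implies a baseline WMSD probability of roughly 6%. A small sketch (only the constant is taken from the text; the function and example are our own):

```python
import math

def predicted_probability(constant, aors=(), x=()):
    """Logistic model: p = 1 / (1 + exp(-(b0 + sum(b_i * x_i)))),
    where each b_i = ln(AOR_i) for the corresponding predictor."""
    logit = constant + sum(math.log(aor) * xi for aor, xi in zip(aors, x))
    return 1 / (1 + math.exp(-logit))

baseline = predicted_probability(-2.776)
print(round(baseline, 3))  # ~0.059 with all predictors at reference levels
```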
Discussion
The rapid growth of online learning has led to a significant increase in the number of students participating in online educational activities. Understanding the specific risks and challenges faced by online students with regard to WMSDs is essential for the development of effective preventive measures and interventions. However, the incidence and prevalence of WMSDs among Thai university students are unknown. Therefore, this research investigated the primary concern of exploring the data on WMSDs in academia. Research on physical learning environments, psychosocial factors, pain intensity assessment, risk perception, and WMSDs among undergraduate students is limited in the existing literature.
According to the research objectives, the findings showed that 42.1% of the participants experienced WMSDs over 6 months. The sudden change in posture during the COVID-19 pandemic, together with the linked factors, resulted in the emergence of physiological abnormalities, indicating a high risk of developing WMSDs. This is in line with what has been observed in other countries, such as Slovenia, where 39.6% [33] of university students had MSDs, and the USA, where 41% [15] of university students had MSDs. Another study conducted among Chiang Mai's smartphone-addicted students found that 30% developed MSDs [17]. This study focused on the site of pain development; the highest percentage of pain was observed in the neck, while remarkable shoulder, lower back, and upper back pain were also noted. Except for the lower back pain variation, the findings are consistent with another study on musculoskeletal disorders among students who utilize smartphones at Khon Kaen University in Thailand [34]. This variation in the percentage of musculoskeletal problems and the highest areas of complaints may be due to different predisposing factors, assessment tools, and populations.
As mentioned previously, the etiology of WMSDs is multifactorial [6,8,9]. The second theme this study hoped to address was the significance of online learning-related risk factors regarding WMSDs; however, causal relationships could not be identified due to the cross-sectional nature of the study. Moreover, the results of the bivariate analysis showed that sex was not significantly associated with WMSDs in the 6-month prevalence period. Due to disparities in biological and anthropometric characteristics between the sexes, the prevalence of WMSDs varies by sex [7,21]. A similar study on musculoskeletal disorders among students has been conducted previously [14,21].
In this study, we observed that respondents who were 18-19 years old had a 1.55 times higher possibility of developing WMSDs than those aged 20 or older. This result is consistent with a prior study that discovered the development of MSDs during the lockdown and showed statistically significant (p < 0.05) age variability [35]. Additionally, first-year students had a 1.71 times higher possibility of developing WMSDs than fourth-year students. This may be due to their lack of familiarity with university students' working/learning activities, leading to unnecessary stress that increases fatigue and decreases their body's ability to recover properly, as is evident in poor work practices [7]. These findings are consistent with a prior study conducted by Felemban et al. [36], who discovered variations in the frequency of MSDs based on the academic year, which may have been caused by various workloads.
In our study, the WMSD prevalence during the 6 months was found to be non-significantly correlated with BMI. This non-significant association between higher BMI and musculoskeletal discomfort is similar to that reported in previous studies [14,37]. One explanation could be that the majority of the participants in the current study (51.4%) had a normal BMI. Their BMIs were not risk factors for acquiring WMSDs; therefore, more participants with higher BMIs are needed to further investigate this association. There is controversy surrounding the relationship between BMI and MSDs because, contrary to our findings, a cross-sectional study conducted in Portugal linked BMI and reported shoulder, wrist, and hand symptoms to musculoskeletal discomfort [38].
Respondents who took a course entirely online during the current academic year had a 1.72 times higher risk of developing WMSDs than those who took a course partially online. This aligns with a previous study that linked the duration and degree of discomfort to the amount of time spent learning online [13]. This may also be linked to the long-term use of electronic devices during the extended period of e-learning.
Additionally, the findings indicated that respondents who used iPads/tablets and mobile phones for online learning had and 2.17 times higher rates of developing WMSDs, respectively, than those who used personal desktop computers. This was consistent with a previous study among students that reported an association between increased MSDs and the use of desktops, laptops, or tablet computers [13]. Our investigation highlighted the highest WMSD prevalence in iPad and tablet users, which was due to the impact of long-term e-learning during the COVID-19 pandemic, eventually leading to sedentary habits in students. The findings of this study are consistent with those of earlier research on Shanghai adolescents in terms of the association between MSDs and the use of digital devices. In that study, laptop and desktop users were less likely to have MSDs [39] because a personal desktop computer allows for a more flexible placement of its components (such as the screen, keyboard, or mouse) and a more natural, comfortable posture, decreasing the likelihood of pain. In contrast, tablet users not only adopt a reader's posture but also frequently use one hand to touch their screens. Improper tablet use may result in inaccurate bilateral force asymmetry, leading to uneven bilateral shoulder levels. However, a tablet can be used in the same manner as smartphones [39]. The electronic devices used were consistent with a previous study, which showed that 35.1% of the participants used desktops, laptops, or tablets to study [13].
The likelihood of WMSD development correlated with an inappropriate workstation in our study, which is consistent with a previous study showing that an inappropriate workplace width was associated with a higher risk of MSD development [24]. Therefore, an individual's physical needs should be sensibly matched to befitting workstations and postures, which may be covered by an ergonomic education process.
Our results showed that chair type was not significantly associated with WMSDs. Our findings are inconsistent with those of Parvez et al. [40], who found a significant relationship between university furniture and MSD development in students. This might be because our study participants used uncomfortable non-office chairs (e.g., backrest chairs, floors, and stools) for online learning at home, which could have caused them to alter their posture frequently.
Respondents who sat for prolonged periods had a 2.19 times greater probability of developing WMSDs than those who did not. Sitting with the back bent is especially likely to cause MSDs because the adverse posture overloads the muscles and tendons surrounding the affected joints and applies excessive force to the joints. A joint performs optimally when it moves close to its mid-range of motion; the risk of MSDs increases when joints work outside this mid-range regularly or for prolonged periods without adequate recovery time [7]. This result is consistent with research from Ethiopia, which indicated that people who sat with their backs bent were four times more likely to acquire WMSDs than those who sat with their backs straight [4], because poor posture can lead to stiffness and compression throughout the skeletal and muscular areas, causing discomfort and pain in numerous body parts. In accordance with another study, whether using tablets or desktop/laptop computers, students typically slouched forward when seated in chairs [13]; poor sitting posture throughout study activities therefore caused them to experience body aches [40].
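Odds ratios like the 2.19-fold figure above are derived from 2×2 exposure-by-outcome tables. The sketch below uses hypothetical counts (not the study's actual data) and Woolf's method for the confidence interval:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    # Woolf's method: SE of log(OR) from the four cell counts
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts: prolonged sitting vs. WMSD occurrence
or_, lo, hi = odds_ratio_ci(a=120, b=80, c=60, d=110)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
# → OR = 2.75 (95% CI 1.80-4.20)
```

An OR above 1 with a confidence interval excluding 1, as here, indicates a statistically significant positive association between the exposure and the outcome.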
The results showed that working space was not associated with WMSDs. This may be explained by the fact that most participants in this study (92.4%) had private workspaces or rooms, so their working environment posed no additional risk of WMSD occurrence. This differs from the findings of Aschenberger et al. [16], who found that a dedicated study space was beneficial for students' motivation, focus, learning performance, and overall well-being in the classroom.
Respondents who had improper lighting had a 1.43 times greater chance of developing WMSDs than those who did not. These results are consistent with a previous study showing that participants with an inappropriate workstation in terms of its dimensions (seat width and lighting intensity) had a 5.72 times greater chance of developing WMSDs [24].

According to our findings, respondents were more likely to develop WMSDs if exposed to mild, moderate, severe, or extremely severe stress levels. This positive correlation between psychological stress and increased muscular tension has been postulated as a risk factor for MSDs by other researchers [37].

The final theme of interest was assessing the risk factors for WMSDs, and six significant predictors of WMSDs among Thai nursing students were identified. Since humans are multidimensional, focusing on a single cause of MSDs limits the ability to develop a precise and accurate MSD prevention model. Evidence from electronic-device users also highlights the link between work-related (physical, environmental, and psychological) and unrelated (individual) risk factors. Owing to the combination of personal, psychosocial, and physical risk factors, an increase in muscle load or activity could be an early sign of musculoskeletal disorders in working people [6,9,41]. Therefore, assessment of multidimensional risk factors [7], such as those in the workplace (e.g., stress, bent posture, and prolonged sitting), should be a priority for first-year students with online classes and those engaging with electronic devices (e.g., iPad/tablet, mobile phone, and notebook) for online learning.
Strengths and limitations
This study had several strengths, including a large, randomly selected sample, which enhances the generalizability of the findings. The use of reliable and valid questionnaires enables comparisons with the general population and with students from other disciplines. This study examined the prevalence of WMSDs among Thai university students and identified associated risk factors in the context of online learning, providing insights that may extend to the broader undergraduate population, which may share these WMSD risk factors.
However, this study had certain limitations. It focused solely on undergraduate nursing students, which limits its generalizability to other populations. The cross-sectional design prevented the establishment of causal relationships between the risk factors and musculoskeletal discomfort owing to the absence of follow-up data. Self-reporting through questionnaires introduced a potential for recall bias. Additionally, response bias may have been present among students who did not participate in the study or were hesitant to do so during the recruitment process conducted through platforms such as Facebook, Line, or Zoom. However, it is challenging to conclusively determine the extent and direction of this bias.
Conclusions
This study highlights that the extensive use of electronic learning devices is associated with a higher risk of WMSDs. The multivariate analysis identified six significant factors influencing the occurrence of WMSDs, particularly among first-year students who predominantly used mobile devices for learning. Preventive measures should be implemented to reduce the negative consequences of these risk factors and to prevent chronic pain and disability; these include addressing stress, promoting postural awareness, improving postural skills, emphasizing positioning strategies, and reducing prolonged sitting time. Clinical trials incorporating ergonomic and physical therapy interventions are recommended to alleviate WMSD pain. Further research is also required to understand the causes of and remedies for musculoskeletal discomfort among undergraduate students. Communication of posture issues to medical professionals and families is essential, and students should adopt the suggested postures when using mobile devices to mitigate the effects of poor posture. Corrective exercise is also important for improving postural habits. Providing ergonomic interventions, assessments, knowledge, and support can protect students from WMSDs, maintain physical fitness, and prevent chronic pain and disabilities.
"year": 2023,
"sha1": "df92852267186814b58e2c59aab94dfe585cf39b",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "df92852267186814b58e2c59aab94dfe585cf39b",
"s2fieldsofstudy": [
"Medicine",
"Education",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Facial pain due to elongated styloid process
Pain is the most frequent cause of suffering and disability. The etiology of orofacial pain is still elusive; however, it has to be ascertained for definitive treatment, and only after a systematic and careful evaluation can a treating surgeon be aware of the underlying cause. Though dental causes predominate in the diagnosis of orofacial pain, rare causes of facial pain have to be excluded, which prevents unnecessary and fruitless dental treatment. The present case is an example of a rare condition that may be overlooked during examination. This paper describes a case of vague unilateral orofacial pain, the diagnosis of which was narrowed down to an elongated styloid process.
Introduction
Pain is the most frequent cause of suffering and disability. Misdiagnosis and multiple failed treatments are common in some patient populations. Patients with orofacial pain frequently undergo numerous dental procedures that fail to eliminate symptoms and are often referred to the oral and maxillofacial surgeon for evaluation and treatment. Facial pain can be the presenting, and sometimes the only, complaint of many disorders that originate from cranial structures. In the clinical setting, identifying the underlying cause, and therefore deciding which investigations are needed, occasionally represents a challenge even for experienced surgeons.
Pharyngodynia, neck pain, and facial pain can lead to an extensive differential diagnosis, and an elongated styloid process should be taken into account. In 1937, the American otorhinolaryngologist Watt Weems Eagle defined "stylalgia" as an autonomous entity related to an abnormal length of the styloid process or to mineralization of the stylohyoid ligament complex. [1][2][3] The stylohyoid complex derives from Reichert's cartilage of the second branchial arch. The styloid process is an elongated conical projection of the temporal bone that lies anteriorly to the mastoid process. Its elongation may be associated with pharyngodynia localized in the tonsillar fossa, sometimes accompanied by dysphagia, odynophagia, foreign-body sensation, and temporary voice changes. In some cases, the stylohyoid apparatus compresses the internal and/or external carotid arteries and their perivascular sympathetic fibers, resulting in persistent pain irradiating in the carotid territory. [4][5][6] An elongated styloid process occurs in about 4% of the general population, while only a small percentage (between 4% and 10.3%) of these individuals are symptomatic. Hence, the true incidence is about 0.16%, with a female-to-male predominance of 3:1.
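The quoted true incidence of about 0.16% follows from multiplying the two prevalence figures; using the upper symptomatic bound gives roughly 0.41%. A quick arithmetic check (figures taken from the text):

```python
# Estimated incidence of symptomatic elongated styloid process,
# combining the two prevalence figures quoted above.
elongated = 0.04                      # ~4% of the general population
symptomatic_low, symptomatic_high = 0.04, 0.103  # 4%-10.3% of those

incidence_low = elongated * symptomatic_low      # lower bound
incidence_high = elongated * symptomatic_high    # upper bound
print(f"{incidence_low:.2%} to {incidence_high:.2%}")
# → 0.16% to 0.41%
```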
The etiology of the elongation is debatable.
In 1937, Eagle described two possible clinical expressions attributable to an elongated styloid process [7]:

• The "classical Eagle syndrome" is typically seen in patients after pharyngeal trauma or tonsillectomy. It is characterized by ipsilateral dull and persistent pharyngeal pain, centered in the ipsilateral tonsillar fossa, that can be referred to the ear and exacerbated by rotation of the head. A mass or bulge may be palpated in the ipsilateral tonsillar fossa, exacerbating the patient's symptoms. Other symptoms include dysphagia, sensation of a foreign body in the throat, tinnitus, or cervicofacial pain.

• The "second form" of the syndrome (the "stylocarotid syndrome") is characterized by compression of the internal or external carotid artery (with their perivascular sympathetic fibers) by a laterally or medially deviated styloid process. It presents as pain along the distribution of the artery, provoked and exacerbated by rotation and compression of the neck, and is not correlated with tonsillectomy. In case of impingement of the internal carotid artery, patients often complain of supraorbital pain and parietal headache; when the external carotid artery is irritated, the pain radiates to the infraorbital region.
An elongated styloid process can be treated surgically or non-surgically. A pharmacological approach by transpharyngeal infiltration of steroids or anesthetics into the tonsillar fossa has been used, but styloidectomy is the treatment of choice. Styloidectomy can be performed by an intraoral or an extraoral approach. The intraoral approach may result in a restricted operative field, incomplete control over many important vascular and nervous structures, and a risk of deep cervical infections. On the other hand, the external surgical approach results in cutaneous scars, longer hospitalization, and a risk of facial nerve injury. The choice of treatment usually depends on the experience of the surgeon.
Case Report
A 45-year-old female patient reported to our maxillofacial unit with the chief complaints of pain on swallowing, a swelling in the throat, and vague facial pain over the right face and temple region. The pain never crossed the midline. She also complained of pain on turning the head toward the left side and some unusual sensation in the tongue. These complaints, according to the patient, had been present for 2 years. She had been administered nonsteroidal anti-inflammatory drugs and carbamazepine by other practitioners without any response.
After a series of questions and examination, a mass was palpable in the right tonsillar region. She did not have any extraoral swelling or asymmetry. Her cervical lymph nodes were not palpable. All teeth were present and in good condition, making it easy to rule out odontogenic pain. She had neither a history of tonsillar surgery nor any recollection of trauma. Her computed tomography (CT) scan revealed an elongated styloid process (33 mm) on the right side [Figure 1]. The temporomandibular joint appeared normal. Resection of the styloid process was planned [Figure 2a-c] and carried out through an extraoral approach. She did not sustain any injury to the facial nerve or any vessels. The patient has been under regular follow-up and is free of the preoperative symptoms.
Discussion
An elongated styloid process must always be considered in the differential diagnosis of orofacial and neck pains.
Eagle defined the length of a normal styloid process as 2.5-3.0 cm. The reported normal length varies greatly:
1. From 1.52 cm to 4.77 cm, according to Moffat et al. (1977) [8]
2. Less than 3 cm, according to Kaufman et al. (1970) [9]
3. From 2 cm to 3 cm, according to Lindeman (1985) [10]

The present case has mixed characteristics: dysphagia/odynophagia and a foreign-body sensation fit the classical variant, while pain on rotation of the head and peritonsillar pain unrelated to tonsillectomy fit the stylocarotid variant.
The diagnosis has to rely on thorough physical examination and radiography; a plain computed tomography scan and an orthopantomogram served our purpose. Palpation of the styloid process in the tonsillar fossa is indicative of an elongated styloid, since processes of normal length are not normally palpable, and palpation of the tip of the styloid should exacerbate existing symptoms. Surgical treatment is the first choice in the literature.
Conclusion
While establishing the differential diagnosis for orofacial pain, the history, clinical examination, and relevant investigations have to be given due importance. Though common causes should be considered first, rare non-dental causes of orofacial pain, such as the one described in this case, must be distinguished to avoid unnecessary dental treatment and to enable appropriate referral. Though uncommon, an elongated styloid process should be considered in the differential diagnosis of orofacial pain, and an alert and responsible maxillofacial surgeon should always bear this possibility in mind when dealing with such cases.
"year": 2013,
"sha1": "e0d21f6d4e6720c02fca2f77863195c5e1b5baf0",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/0976-237x.114879",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "27e560998e0a71d377c74dd6372be1955832e0c7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Association between Anthropometric Parameters (WC, BMI, WHR) and Type 2 Diabetes in the Adult Yazd Population, Iran
Diabetes Mellitus (DM) is one of the most common chronic diseases in the world and one of the most challenging health problems of the twenty-first century [1]. It is estimated that by 2030 the number of people with diabetes will increase to more than 366 million, more than twice the number in 2000 [2,3]. Most of these new cases are from developing countries, and the Middle East is among the regions expected to have the largest increase in the prevalence of diabetes by 2030 [3].
Introduction
In 2011, it was estimated that 366 million people worldwide had diabetes [4], and its prevalence is increasing rapidly because of the aging of the population and the surge of obesity in many countries, including Iran. In Iran, about 10% of the general population had diabetes mellitus or impaired fasting glucose in 2008 [5], and in a recent study in Yazd, the prevalence of known diabetes and impaired fasting glucose was 16.3% and 11.9%, respectively [6].
Type 2 diabetes is a chronic disease characterized by hyperglycemia and dyslipidemia due to underlying insulin resistance. The condition commonly progresses to include microvascular and macrovascular complications [7,8]. Obesity, and particularly abdominal obesity, is strongly associated with insulin resistance [9,10]. Diabetes results from the combination of genetic and environmental factors [11], and there is strong evidence to suggest that modifiable risk factors such as obesity and physical inactivity are the non-genetic determinants of the disease [12,13].
The occurrence of rapid and major lifestyle changes in many countries has increased the prevalence of obesity and of other non-communicable disease risk factors such as hypertension and dyslipidemia, which have been reported to be the major etiologic factors in the rising incidence of type 2 diabetes around the globe [14].
The Body Mass Index (BMI), defined as weight in kilograms divided by height in meters squared, the Waist-to-Hip Ratio (WHR), and the Waist Circumference (WC) are the three main anthropometric parameters used to evaluate body fat and fat repartition in adults, and these parameters show ethnic susceptibility [15,16]. Some authors have shown that BMI and WHR were predictors of type 2 diabetes [17], whereas in other studies WC was a better predictor of type 2 diabetes mellitus and was more strongly correlated with intra-abdominal fat than WHR [18,19]. The aim of this study was to quantify the association between three anthropometric measurements (body mass index, waist-to-hip ratio, and waist circumference) and type 2 diabetes mellitus in the adult Yazd population, Iran.

Materials and Methods

This case-control study of the association between body mass index, waist-to-hip ratio, waist circumference, and type 2 diabetes mellitus in the adult Yazd population, Iran, was conducted from December 2012 to May 2013. Yazd city, the reference study population, is located in central Iran and comprises 980,000 inhabitants forming a unique and homogeneous ethnic group. Using the appropriate formula and considering a W/H-ratio proportion of 20% in the general population, a confidence interval of 95%, a study power of 80%, and a minimum odds ratio of 1.65, a total of 400 subjects (200 cases and 200 controls) were selected by random sampling. Inclusion criteria for the case group were male and female subjects aged >30 years, residing in Yazd city, with a history of known DM diagnosed in the last 3 years (new cases). Patients were excluded if they had a history of type 1 diabetes mellitus or did not live in Yazd city. Controls were recruited from subjects referred to the Yazd Central Laboratory for any reason other than diabetes and chronic diseases. One control was selected per case and matched on sex and age (±2 years) using frequency matching.
The criteria for controls were no history of DM or use of any antidiabetic medication, and no impaired fasting glucose or type 2 diabetes mellitus on a fasting blood glucose test. The study was approved by the Medical Ethics Committee of Shahid Sadoughi University of Medical Sciences and Health Services of Yazd. Informed consent was obtained from all participants, and the study was carried out in accordance with the Declaration of Helsinki. Subjects were interviewed face-to-face by trained interviewers using pretested questionnaires; information concerning age, gender, family history of diabetes, history of hypertension and dyslipidemia, and other characteristics was collected. Anthropometric measures, including height, weight, and waist and hip circumference, were taken according to standard protocols and recorded. Height was measured in a standing position, without shoes, using a tape stadiometer with a minimum measurement of 1 cm. Weight was measured in kilograms, with each subject wearing light clothing, using digital scales (0.5 kg accuracy). Body Mass Index (BMI) was calculated as weight in kilograms divided by height in meters squared; BMI was categorized according to the WHO recommendation, and obesity was defined as BMI ≥ 30 kg/m2 [20]. Waist Circumference (WC) was recorded to the nearest 0.1 cm at the umbilical level, and hip circumference at the maximal level, over light clothing, using an unstretched tape measure without pressure on the body surface. The waist-to-hip ratio (WHR) was calculated as WC divided by hip circumference. We used the criteria of the National Heart, Lung, and Blood Institute (NHLBI) to define the cut-off points for central (abdominal) obesity: a WC over 88 cm in women and over 102 cm in men was considered at risk. The WHR cut-off points used were ≥ 0.95 for men and ≥ 0.85 for women [21].
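The anthropometric definitions and cut-offs above can be collected into a small helper. This is a minimal sketch (the function name and example values are hypothetical; the thresholds are the WHO and NHLBI cut-offs as stated in the text):

```python
def anthropometry(weight_kg, height_m, waist_cm, hip_cm, sex):
    """Compute BMI and WHR and flag general and central obesity
    using the WHO (BMI) and NHLBI (WC, WHR) cut-offs in the text."""
    bmi = weight_kg / height_m ** 2
    whr = waist_cm / hip_cm
    obese = bmi >= 30                                   # WHO general obesity
    wc_at_risk = waist_cm > (102 if sex == "M" else 88)  # NHLBI central obesity
    whr_at_risk = whr >= (0.95 if sex == "M" else 0.85)
    return {"bmi": bmi, "whr": whr, "obese": obese,
            "wc_at_risk": wc_at_risk, "whr_at_risk": whr_at_risk}

# Hypothetical subject: a woman, 85 kg, 1.70 m, WC 100 cm, hip 98 cm
print(anthropometry(85, 1.70, 100, 98, "F"))
```

Note the deliberate asymmetry in the criteria: a subject may be centrally obese by WC or WHR without meeting the general BMI threshold, which mirrors the distinction between abdominal and overall obesity drawn in this study.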
Occupational, commuting, and leisure-time physical activity was assessed using a questionnaire and categorized as occupational activity or regular/moderate-intensity activity.
Systolic and diastolic blood pressures were measured twice in a seated position on the left arm with a digital pressure gauge, and the mean value was taken as the subject's blood pressure. Hypertension was defined, according to the JNC 7 report (the Seventh Report of the Joint National Committee), as a systolic blood pressure ≥ 140 mm Hg and/or a diastolic blood pressure ≥ 90 mm Hg, or current use of an antihypertensive medication [22].
Dyslipidemia was defined when any of the following was present: a triglyceride (TG) concentration above 150 mg/dl, a cholesterol concentration above 200 mg/dl, HDL cholesterol below 50 mg/dl in females or below 40 mg/dl in males, or LDL above 100 mg/dl. This classification conformed to the ATP III (Adult Treatment Panel III) guidelines [23].
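The hypertension and dyslipidemia definitions above amount to simple threshold rules. A minimal sketch, with hypothetical function names and example values, using only the criteria stated in the text (JNC 7 for blood pressure; ATP III as applied here for lipids):

```python
def hypertensive(sbp, dbp, on_medication=False):
    # JNC 7: SBP >= 140 mm Hg and/or DBP >= 90 mm Hg, or current treatment
    return sbp >= 140 or dbp >= 90 or on_medication

def dyslipidemic(tg, chol, hdl, ldl, sex):
    # ATP III as applied in the text: any single criterion suffices
    return (tg > 150 or chol > 200
            or hdl < (50 if sex == "F" else 40)
            or ldl > 100)

print(hypertensive(138, 92))                 # True (DBP criterion met)
print(dyslipidemic(120, 180, 55, 95, "F"))   # False (no criterion met)
```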
Statistical Analysis
Data analysis was done using the Statistical Package for the Social Sciences (SPSS) for Windows, version 16. The Student t-test was used to assess differences between the means of continuous variables, and chi-square analysis was performed to test differences in the proportions of categorical variables between the two groups. Unadjusted and adjusted logistic regression analyses were performed to quantify the association between type 2 diabetes and the explanatory categorical variables (BMI, WC, WHR). Adjustment was done for all significant covariables in the univariate analysis (among age, family history of diabetes, hypertension, dyslipidemia, physical activity, and history of gestational diabetes in women). The analyses were performed for each sex, and the odds ratios (ORs) of type 2 diabetes and their 95% confidence intervals (CI95%) were estimated. A P value less than 0.05 was considered significant for all tests.
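The odds ratios and 95% confidence intervals produced by such a logistic regression are obtained by exponentiating the model coefficients. A minimal sketch (the coefficient and standard error below are illustrative values, not the study's estimates):

```python
import math

def or_with_ci(beta, se, z=1.96):
    """Convert a logistic-regression coefficient and its standard
    error into an odds ratio with a Wald 95% confidence interval."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Illustrative example: beta = 1.31, SE = 0.53 for a binary WC category
or_, lo, hi = or_with_ci(1.31, 0.53)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
# → OR 3.71 (95% CI 1.31-10.47)
```

Because the interval is symmetric on the log scale, it is skewed on the OR scale, which is why intervals like those in Table 2 extend much further above the point estimate than below it.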
Result
In this study, 200 patients with type 2 diabetes mellitus (age >30 years) and 200 healthy controls (age >30 years) were studied. Table 1 shows the characteristics of diabetic and non-diabetic subjects by sex. The mean age of the subjects in the study group (cases) was 53.18 years and that of the subjects in the control group (controls) was 52.60 years. It was observed that among diabetics the means of BMI (29.47), WC (99.59), and WHR (0.95) were significantly higher than among non-diabetics (p<0.05), whereas no significant difference was noted in mean age (p=0.54) between diabetics and non-diabetics.

In the group of women, those with diabetes had a significantly more frequent family history of diabetes (75% vs. 25%, P=10-4), hypertension (60% vs. 31%, P=10-4), and dyslipidemia (96% vs. 82%, P=0.003) than women in the other group. Family history of diabetes and hypertension were also significantly more frequent in men with diabetes than in men in the other group (p<0.05). No significant difference was found between cases and controls in either sex for physical activity. Table 2 shows the results of the logistic regression analysis. In the univariate regression analysis (Table 2A), there were statistically significant relations between type 2 DM and WC, BMI, and WHR in both sexes. The other variables associated with type 2 DM were family history of diabetes and hypertension in men, and family history of diabetes, hypertension, and dyslipidemia in women. However, after adjustment for the other significant factors of the univariate analysis, the associations between type 2 DM and BMI and WHR were no longer significant in either sex (Table 2B). The odds ratio for WC was 3.71 (95% CI=1.32-10.43, P<0.013) in men and 4.86 (95% CI=1.14-20.65, P<0.03) in women.

Discussion

In this case-control study of 400 subjects aged >30 years (200 cases and 200 controls), we documented significant associations between WC and type 2 diabetes, whereas BMI and WHR were not significantly associated with diabetes in either sex. Moreover, our results showed that waist circumference was more strongly related to type 2 diabetes in women than in men (OR=4.86 vs. 3.71). Insulin resistance is a major feature of type 2 diabetes, and waist circumference is associated with insulin resistance and type 2 diabetes [24]. Our results clearly demonstrate that WC is the anthropometric index most strongly associated with type 2 diabetes. Consistent with our findings, previous studies also showed that waist circumference is the best predictor of type 2 diabetes mellitus compared with body mass index, waist/hip ratio, and other anthropometric measurements [18,25]. There is conflicting evidence on the index of obesity that best reflects diabetic risk: in some studies, waist circumference [19,26,27] and waist-to-hip ratio [28] are better than BMI; in others, BMI is better [29,30]; and in others, neither is significantly better [31]. Moreover, both types of obesity (central and overall) may be independent predictors of diabetic risk [32,33]. Differences in data collection methods and in the statistical methods used may account for these discrepancies.

In our study group, no relationship was found between diabetes and physical activity, which was defined as occupational or moderate activity, whereas an inverse relationship between physical activity and type 2 diabetes has been seen in cross-sectional studies [34][35][36]. In this context, prospective studies have shown that physical activity can prevent type 2 diabetes [37,38]. Overall, the evidence suggests an important role of physical activity in the prevention of type 2 diabetes. The results from this study showed that waist circumference was strongly associated with type 2 diabetes in both sexes, and this parameter, which is a good measure of abdominal fat, should be used in routine practice for the follow-up of patients with type 2 diabetes.
"year": 2014,
"sha1": "55de1ddf02d74c42678328221ca89626ec658412",
"oa_license": "CCBY",
"oa_url": "https://www.omicsonline.org/open-access/association-between-anthropometric-parameters-wc-bmi-whr-and-type-diabetes-in-the-adult-yazd-population-iran-2155-6156.1000444.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "7b2f6b8eee1c7f6826455a6d0808548c55e8d853",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The psychological impact of the coronavirus disease 2019 pandemic on women who become pregnant after receiving treatment for infertility: a longitudinal study
Objective: To compare the impact of the coronavirus disease 2019 (COVID-19) pandemic on the psychological health of patients with infertility who have become pregnant with that of women who have not.
Design: Prospective cohort study conducted from April 2020 to June 2020. The participants completed three questionnaires over this period.
Setting: A single large, university-affiliated infertility practice.
Patients: A total of 443 pregnant women and 1,476 women still experiencing infertility who completed all three questionnaires.
Interventions: None.
Main Outcome Measures: Patient-reported primary stressor over three months of the first major COVID-19 surge; further data on self-reported sadness, anxiety, loneliness, and the use of personal coping strategies.
Results: Pregnant participants were significantly less likely to report taking an antidepressant or anxiolytic medication, were less likely to have a prior diagnosis of depression, were more likely to cite COVID-19 as a top stressor, and overall were less likely to practice stress-relieving activities during the first surge.
Conclusions: Women who became pregnant after receiving treatment for infertility cited the pandemic as their top stressor and were more distressed about the pandemic than their nonpregnant counterparts but were less likely to be engaging in stress-relieving activities. Given the ongoing impact of the pandemic, patients with infertility who become pregnant after receiving treatment should be counseled and encouraged to practice specific stress-reduction strategies.
The inability to achieve and sustain a clinical pregnancy is concomitant with substantial psychological distress and mental health challenges in both women and men. Infertility-related distress can be attributed to a wide range of factors, including the diagnosis itself, familial and societal pressures, physical burdens of treatment interventions, financial strains due to the cost of fertility treatment, and the uncertainty of treatment outcomes (1)(2)(3)(4). Individuals and couples experiencing infertility report relationship strain, heightened levels of anxiety and depression, and decreased self-esteem (5)(6)(7). In addition, 13% of individuals report taking antidepressant medication (8), and unsuccessful assisted reproductive technology is associated with negative impacts on mental health and self-esteem (9).
Patients with infertility frequently characterize infertility as their most stressful life experience, with psychological distress being one of the primary reasons for discontinuing treatment (10)(11)(12). In an evaluation of the psychological well-being of women with infertility, chronic pain, heart disease, cancer, hypertension, or human immunodeficiency virus infection, the researchers found that the overall scores of the women with infertility were comparable to those of the patients with cancer, the patients in cardiac rehabilitation, and the patients with hypertension (13). Additionally, the anxiety and depression scores of the women with infertility were comparable to those of all other groups except the patients with chronic pain (13). These results emphasize that infertility is as distressing as other serious medical conditions, including cancer.
In response to the global coronavirus disease 2019 (COVID-19) pandemic declared on March 11, 2020, by the World Health Organization, professional organizations governing reproductive medicine in the United States (American Society for Reproductive Medicine) and Europe (European Society of Human Reproduction and Embryology) advocated halting infertility treatments so that resources may be directed to patients with COVID-19 (14,15). Our center terminated treatment for nearly nine weeks during the peak of the pandemic in New England from April 9, 2020, to June 15, 2020. Previously, our group indicated that infertility remained the most frequently reported top stressor among >2,200 patients, even amid a devastating global pandemic (16).
The previous longitudinal study (16) was extended and identified how the top stressors of the respondents changed over the first several months of the pandemic (17). By analyzing the responses from three distributed questionnaires, we found that COVID-19 was the number one stressor at the initial peak of the pandemic but was replaced by infertility just three weeks later. Furthermore, 29% of respondents believed that infertility treatments should be offered early in the pandemic; however, this sentiment drastically changed by June 2020, with 77% of individuals reporting that treatments should be provided. This longitudinal study demonstrates that despite the immense and ubiquitous impact of COVID-19, women with infertility still ranked infertility as their greatest stressor, underscoring the significant psychological impact that infertility has on our patient population and the need for the provision of mental health resources.
When each of the questionnaires was distributed, there were minimal data on the effects of COVID-19 on fetal and perinatal outcomes, with no proven cases of vertical transmission from the mother to the fetus (18,19). Nevertheless, pregnancy is a high-risk state because of the associated physiologic and immunologic changes. Recent infectious illnesses, including the Zika virus and the 2009 H1N1 influenza virus pandemic, revealed the susceptibility that pregnancy presents and the potentially devastating impacts that viral diseases can have on pregnancy outcomes (20)(21)(22). As our previous study was being conducted, multiple case series of COVID-19 in pregnancy were published (23,24). Still, little was known about the effects of COVID-19 on pregnancy outcomes, creating uncertainty and fear for this patient population, although, at the time, there were no data on the psychological impact of the pandemic on pregnant women.
There are new studies being published on the mental health of pregnant women during the pandemic; however, women who become pregnant after receiving treatments for infertility during COVID-19 are an understudied population. Our first analysis assessing reported stressors during the COVID-19 pandemic focused on patients with infertility who did not achieve pregnancy after treatment. The objective of this follow-up study was to assess the reported stressors for women who became pregnant during the pandemic after receiving treatments for infertility. Specifically, we wanted to identify differences in the reported stressors between infertile nonpregnant women and pregnant patients. We hypothesized that pregnant women would be more concerned about the potential adverse impact of COVID-19 on pregnancy outcomes and that they would be more likely to practice stress-reducing activities in an attempt to decrease their anxiety levels than the nonpregnant patients with infertility.
MATERIALS AND METHODS
In a previous study (16), a 45-item questionnaire with questions on demographics and mental health history was developed, including the history of anxiety and depression, and the use of anxiolytic or antidepressant medications. Respondents' anxiety and sadness levels at the time of the questionnaire were assessed using a 7-point Likert scale (in which 1 indicated not at all sad/anxious and 7 indicated extremely sad/anxious). Additionally, participants were asked to list their current top three stressors from a provided list. Further, participants were asked to note whether they believed that infertility treatment should be offered during the pandemic and whether their work hours or compensation had been reduced because of the pandemic. The first questionnaire was disseminated to eligible patients from April 9 to 16, 2020.
Subsequently, we modified the questionnaire and distributed the second and third iterations from April 30, 2020, to May 7, 2020, and June 11 to 17, 2020, respectively (17). Questionnaires two and three included 19 and 29 items, respectively, with questions similar to the initial questionnaire; demographic questions were not asked again; however, the second and third questionnaires included additional questions regarding coping strategies employed by patients to relieve stress. A 7-point Likert scale was also added to questionnaire three to evaluate respondents' loneliness (1 indicated not at all lonely and 7 indicated extremely lonely) (Supplemental Data, available online).
Participants
The surveys used in this study were disseminated using Research Electronic Data Capture (a secure data storage platform compliant with the Health Insurance Portability and Accountability Act) to women who had been seen for a consultation at a single large, university-affiliated infertility practice in New England, the United States, from January 1, 2019, to April 1, 2020 (25). Women who completed the first questionnaire were sent two further questionnaires. This included both nonpregnant and pregnant participants. We were able to link questionnaires from the same respondent; however, responses remained anonymous.
After the first questionnaires were distributed, all nonresponders were emailed an invitation to complete the questionnaire and enter into a raffle for a $50 gift card. This incentive was implemented again during the third time point to encourage patients to complete all three questionnaires fully. There were three raffle winners at each of the two time points.
Statistical Analysis
In the previous studies, participants who had become pregnant or were otherwise no longer pursuing treatment for infertility during the distribution of the survey were excluded from the study's final analysis (16,17). For this follow-up study, data from pregnant respondents were analyzed and compared with those of nonpregnant participants still pursuing treatments for infertility. Descriptive statistics are reported as mean (standard deviation) or frequency (percent). To compare pregnant and nonpregnant respondents, we used the χ² or Fisher's exact test for categorical variables and a two-sample Student's t test for continuous variables. A P value of <.05 was considered statistically significant.
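The comparison statistics named above can be sketched in stdlib Python. This is an illustrative sketch only: the function names and toy inputs are ours, the study's actual data are not reproduced, and the t statistic shown is the Welch (unequal-variance) variant of the two-sample test rather than necessarily the exact variant the authors used.

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances)."""
    va, vb = variance(a), variance(b)
    return (mean(a) - mean(b)) / math.sqrt(va / len(a) + vb / len(b))

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 contingency table
    [[a, b], [c, d]], without continuity correction."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
```

In practice, the P values would then be obtained from the corresponding χ² and t reference distributions (e.g., via a statistics package).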
This protocol was determined to be exempt from review by the institutional review board of the Beth Israel Deaconess Medical Center (protocol number: 2020P000322).
RESULTS
The first survey in April 2020 was sent to 10,481 patients with infertility at our institution. The response rate on the first survey was 34%, with 3,604 patients fully completing the survey. The second survey was sent in May 2020 to 3,617 patients (including patients who conceived between surveys one and two), with a completion rate of 73% (2,644 total respondents). The third survey was sent in June 2020 to the same 3,617 patients (although two patients were removed upon request, resulting in 3,615 recipients, including patients who conceived between surveys two and three) with a completion rate of 54% (1,943 patients). The patients who completed all three surveys were included in this analysis, resulting in a study sample of 1,919 respondents (Table 1, patient characteristics at survey 1).
Respondent Characteristics
Thirty-one percent of pregnant participants with infertility and 34% of nonpregnant patients with infertility reported a prior diagnosis of anxiety, although this difference was not significant (P=.18). However, there was a significant difference between the number of participants who reported currently taking anxiolytics, with approximately 5% of pregnant participants vs. 12% of nonpregnant participants (P<.001) (Table 2). Twenty-two percent of the pregnant patients and 28% of nonpregnant patients reported a prior diagnosis of depression (P=.01). There was also a significant difference between the percentage of patients who reported the current use of antidepressant medication: 8% of pregnant respondents compared with 12% of nonpregnant respondents (P=.01).
Sadness and Stress among Respondents
In survey one, pregnant patients with infertility reported significantly less sadness (P<.001) than nonpregnant patients with infertility, with mean sadness scores of 2.6 (±1.5) vs. 3.0 (±1.7) (Table 2). There was no significant difference between the mean anxiety levels of pregnant patients with infertility (4.0 ± 1.5) and their nonpregnant counterparts (3.8 ± 1.5) (P=.12). In survey two, the sadness levels of pregnant and nonpregnant patients remained constant and significantly different (P<.001) at 2.6 (±1.5) and 3.0 (±1.6), respectively (Table 3). There was a slight decrease in the anxiety scores of pregnant and nonpregnant patients with infertility in survey two, who both had scores of 3.7 (±1.4) (P=.38). In the final survey, pregnant patients continued to have significantly lower sadness scores relative to nonpregnant patients, with respective means of 2.6 (±1.5) and 3.0 (±1.6) (P<.001). Additionally, the mean anxiety scores of pregnant patients were lower than those of nonpregnant patients, with means of 3.6 (±1.4) and 3.8 (±1.4), respectively (P=.02).
There was no significant difference in patient-reported loneliness between pregnant and nonpregnant respondents at the time of questionnaire three (P=.48), with both groups reporting a mean loneliness score of 2.4 (±1.6) (Table 4). Furthermore, in survey three, approximately 40% of pregnant patients and 44% of nonpregnant patients reported that their sleep quality had changed since the start of the pandemic (P=.09), with 90% of pregnant patients vs. 86.8% of nonpregnant patients reporting that the change was for the worse (P=.66). Finally, for surveys two and three, there were consistent and significant differences between the two groups on most stress-reducing activities, with the pregnant patients employing fewer of these at both time points (Tables 3 and 4).
Stressors among Respondents
The top three stressors for the two groups on survey one are listed in Table 2. These stressors stayed largely consistent for pregnant women in survey two, in which pregnant patients' top stressors were COVID-19 (40%), their job (15%), and their health (14%) (Table 3). However, the top stressors of nonpregnant patients changed in survey two to infertility (29%), then COVID-19 (25%), and finally their job (20%) (Table 3). In the third survey, pregnant patients reported their top stressors to be their job (23%), COVID-19 (20%), and their health (16%), whereas nonpregnant patients still reported infertility (35%) to be their top stressor, followed by their job (23%) and, lastly, their family (12%) (Table 4).
Finally, survey three asked participants how concerned they were or would be about being pregnant during the COVID-19 pandemic. The results revealed that pregnant patients were overall more concerned about becoming infected with severe acute respiratory syndrome coronavirus 2 than nonpregnant patients, as well as being more concerned about COVID-19 causing a poor pregnancy outcome (P<.001) (Table 4).
DISCUSSION
The data presented in this article bring to light the psychological impact that the COVID-19 pandemic has had on pregnant patients with infertility relative to their nonpregnant counterparts. Infertility remained a top stressor for nonpregnant patients despite the hardships of the pandemic, whereas COVID-19 was ranked as the top stressor by pregnant patients. This is not surprising, as other research has documented the extreme adverse impact that the pandemic has had on pregnant women, leading to large global increases in depressive symptoms (26). Women who were pregnant during the COVID-19 pandemic reported being significantly more depressed and anxious than women who were pregnant before the pandemic (27). In another study of 100 pregnant women assessed during the first surge in 2020, the majority reported that the pandemic had a severe impact on their psychological health, and half of them were highly anxious about the risk of vertical transmission of disease; these symptoms were highest in women during their first trimester (28). However, given the severe impact that the pandemic has had on patients with infertility, especially those whose cycles were canceled or postponed in the spring of 2020, it is somewhat surprising that they did not rank the pandemic higher (29). Mental health during the COVID-19 pandemic was closely monitored by researchers across the world. Documentation of the impact of past pandemics noted a trend of increased prevalence of clinically significant levels of psychological distress (especially posttraumatic stress disorder [PTSD]) and depressive symptoms (30). Nochaiwong et al. (30) predicted that at least one of every five people (regardless of culture or duration of isolation) would experience clinically significant psychological distress due to the COVID-19 pandemic.
An international review reported that the pooled global rates of depression, anxiety, and overall stress during the pandemic increased significantly compared with global rates before the pandemic (31).
The COVID-19 pandemic was extremely distressing because of its impact on almost all aspects of one's life; the isolation required for disease containment, constant media reports of bad news, economic shutdowns and unemployment, and the fear of contamination all caused extreme emotional, social, economic, and mental strain (32). Several risk factors emerged that were found to increase the likelihood and severity of negative mental health impacts due to COVID-19, including age (≤40 years), sex (female), socioeconomic status (lower status being the most vulnerable), and medical condition (having a mental, physical, or chronic illness) (19,(33)(34)(35). It is important to recognize that the female infertility cohort is thus considered at high risk because its members are women, mostly aged ≤40 years, with a chronic disease (34,35).
Infertility is a stressful and sometimes traumatic condition that causes social, emotional, and economic strain (36). In fact, women with infertility have depression levels comparable to patients with cancer (13). Thus, patients with infertility had levels of anxiety and depression higher than those of the general public before the onset of the COVID-19 pandemic (16). Of the cohort of pregnant patients with infertility in this study, 31% reported a prior diagnosis of anxiety, and 22% reported a prior diagnosis of depression. These numbers and those collected from the nonpregnant cohort with infertility were lower than those found by Pasch et al. (6) in 2016 among patients in an infertility clinic. Pregnant women were less likely to be taking antidepressant medication than the nonpregnant patients with infertility. It is not known whether they discontinued taking medication upon learning of their pregnancy or whether there may be a correlation between medication and treatment failure. The literature is conflicting on whether antidepressant medication has any impact on fertility; however, there are some data on adverse impacts during pregnancy, especially during the first trimester (37). The COVID-19 pandemic has had a documented negative effect on mental health worldwide (30). Stress and anxiety caused by the pandemic were reflected in changes in sleep quality, as 20% of people categorized as "good sleepers" before the pandemic experienced a decrease in sleep quality during COVID-19 lockdown measures (38). In the present study, 40% of pregnant patients and 44% of nonpregnant patients reported a change in sleep quality, with >85% of those patients (in both the pregnant and nonpregnant groups) describing the change as a decrease in sleep quality, nearly double what was reported by Kocevska et al. (38). Although Kocevska et al. (38) emphasize the variability and individuality of the way the COVID-19 pandemic affected sleep quality, it is important to note the increased vulnerability of the patient cohort with infertility.
Recent research has documented the emotional vulnerability of pregnant women during this pandemic. In a study of 63 pregnant women who were assessed both before and during the pandemic, anxiety and depression scores increased significantly (39). The investigators recommended that healthcare teams develop strategies to prevent "mental trauma" to lessen the risk of adverse birth outcomes. In another study of 283 pregnant women during the first surge of the pandemic, pregnancy complications, which are common in pregnancies after assisted reproductive technology, were significantly associated with anxiety, and the presence of COVID-19 symptoms was predictive of PTSD symptoms (40). Patients at high risk during pregnancy are especially vulnerable; in a study of 446 pregnant women, those identified as being at high risk were significantly more anxious, leading the investigators to recommend routine psychological screening and increased emotional support (41). Lastly, the pandemic has led to increased anxiety among pregnant patients regarding hospital presentation and admission, with fears about access to care and further risks of viral transmission (42).
Although both pregnant and nonpregnant patients continued to practice stress-reducing activities or developed new ones to help cope during the pandemic, pregnant patients used significantly fewer of these at both surveyed time points. This needs to be addressed, as many of the suggested activities are well known to decrease anxiety and depression. Given the most recent research on the 22-fold increased risk of death and 2.2-fold increased risk of perinatal mortality in pregnant women who contract COVID-19 (43), it is clear why all our patients should be encouraged to address their distress in as many ways as possible. Because patients with infertility are at increased risk of pregnancy complications, which in turn increases their risk of negative psychological symptoms, including PTSD, there is even more urgency to the need to increase the support offered.
Strengths and Limitations
One main strength of this study is the large sample size of pregnant patients with infertility and the inclusion of a ''control'' sample of nonpregnant patients with infertility with similar characteristics. Furthermore, this study investigated the novel subject of the psychological responses of patients who became pregnant after receiving treatment for infertility vs. nonpregnant patients with infertility during the first surge of the COVID-19 pandemic. The longitudinal nature of the study of surveying patients at three time points also allows for a perspective on how the psychological state of the cohort changed relative to the surge of the pandemic in New England.
A limitation of this study, however, is the lack of generalizability of the sample. The sample was homogeneous in characteristics such as socioeconomic status, race, and education level and was only distributed to patients in one infertility practice in one geographic region.
CONCLUSION
Despite the COVID-19 pandemic, infertility remains a top stressor for nonpregnant patients with infertility. This may be related to the distress caused when all treatments for infertility were stopped under pandemic guidelines from the American Society for Reproductive Medicine and the European Society of Human Reproduction and Embryology (13)(14)(15). On the other hand, patients who became pregnant after receiving treatment for infertility reported COVID-19 as their top stressor, perhaps relating to the stress involved in achieving that pregnancy and the then-unknown safety of pregnancy outcomes during COVID-19 (13). Despite the heightened anxiety levels expressed by pregnant patients during the first surge, they did not employ nearly as many stress-reducing activities as the nonpregnant cohort with infertility, any of which could theoretically have led to lower distress levels. Because of the innate stress of conceiving and sustaining a pregnancy as a patient with infertility, it is important that support systems focused on reducing stress be implemented for the current and upcoming global challenges. Patients who conceive following infertility treatment should be provided with multiple written and online resources designed to support them in reducing their levels of distress and encouraged by the staff to practice and incorporate these coping skills on a day-to-day basis, especially considering the alarming spread of the COVID-19 variants and the resultant anxiety-inducing media reports. This is especially crucial for the period after the patient is discharged from the reproductive endocrinology clinic following a scan confirming a normal intrauterine pregnancy, before they are able to be seen by and connect with an obstetrician or midwife, a period of up to five weeks.
"year": 2022,
"sha1": "ee64ebfcb68f1349c4f4057db48a628a09e757e0",
"oa_license": "CCBYNCND",
"oa_url": "http://www.fertstertreports.org/article/S2666334122000058/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "16cb6ae72cf36a594249f05481779d414d83cb0c",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
RUbIn: A framework for reliable and ubiquitous inference in WSNs
Development of Internet of Things (IoT) applications brings a new movement to the functionality of Wireless Sensor Networks (WSNs), from only environment sensing and data gathering to collaborative inferring and ubiquitous intelligence. In intelligent WSNs, nodes collaborate to exchange the information needed to achieve the required inference or smartness. Efficiency or correctness of many smart applications relies on the efficient, timely, reliable, and ubiquitous inference of information. In this paper, we introduce the RUbIn framework, which provides a generic solution to such ubiquitous inferences. It brings reliability and ubiquity to inferences using the redundancy characteristic of gossiping protocols. With RUbIn, the implementation of such inferences and the control of their speed and cost are abstracted by providing developers with a proposed middleware and some dissemination control services. We developed an implementation prototype of the RUbIn framework and a few inference examples on TinyOS. For evaluation, we utilized both the TOSSIM simulator and a testbed of MicaZ motes in various densities and different numbers of nodes. Results of the evaluations demonstrated that in all nodes, the inferring time after a change was about a few seconds, and the cost of maintenance in the stability state was about a few messages sent per hour.
Introduction
Sensor motes are small smart devices that integrate the advantages of computing, communication, and sensing systems into a compact element. These advantages provide WSNs with the ability of intelligence, which ensures their deployment as networked embedded systems in smart applications [1]. Development of IoT applications brings a new movement to the functionality of WSNs from only environment sensing and data gathering to collaborative inferring and ubiquitous intelligence [2][3][4][5]. The difficulty of perceiving the constraints on the resources of nodes and the complexities brought by these constraints to application development should not be a barrier to the development of WSN/IoT applications. Simplifying application development through the contribution of software and programming language experts can increase the speed of WSN development. In order to reduce these complexities, it is necessary to create new programming paradigms. Hence, the number of research studies and projects focusing on effective frameworks or middleware is increasing. These frameworks or middleware enfold the constraints and complexities of WSNs and provide a convenient abstraction for programmers [6][7][8].
In intelligent WSNs, nodes collaborate to exchange the information needed to achieve the required inference or smartness [9][10][11][12][13]. The efficiency, correctness, or smartness of many protocols or applications of WSNs relies on efficient, timely, reliable, and ubiquitous inference of information. Some necessary inferences in WSNs are ubiquitous, as they are required at all the nodes. They are often active, as all the nodes continually trace changes to keep their inferred information up-to-date. They should also be reliable because, in a connected network, the inferred information at all the nodes should be updated within a short time after a change at any node. In this paper, our focus is on such inference problems. Thus, hereafter, the term inference refers to a ubiquitous, active, and reliable inference of information. Additionally, efficiency in energy consumption, the speed of inference after a change, and effectiveness across different densities and numbers of nodes are other characteristics found in most of these inferences, in response to the constraints on nodes and the requirements of applications. We refer to these characteristics as low maintenance cost, fast inference, and scalability, respectively.
Research studies focused on a generic solution to inference problems have been neglected in WSNs. The similar characteristics of inference algorithms and the resource constraints of WSNs motivated us to propound a framework as a generic approach to the development of inference algorithms. It provides functionalities common to the whole class of inference algorithms and a set of left-blank modules to be filled in by the programmers. An inference algorithm is implemented only by instantiating the left-blank modules and filling them in with the inference-specific logic. The framework abstracts the inference algorithms from the propagation protocol (gossiping) by providing some standard services, which the programmer can exploit to moderate the cost, speed, and scalability of an inference algorithm. It brings separation of concerns to a complex protocol or application when an inference is needed.
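The "left-blank module" idea above can be sketched as a small framework skeleton. This is an illustrative Python sketch under stated assumptions: the class and method names are ours, not RUbIn's actual nesC interfaces, and the framework's gossiping machinery is omitted.

```python
from abc import ABC, abstractmethod

class InferenceModule(ABC):
    """A 'left-blank' module: the framework drives gossiping and
    calls these hooks; the developer supplies only the logic below."""

    @abstractmethod
    def initial_value(self):
        """Initial value v0 held before any message has been received."""

    @abstractmethod
    def merge(self, local, received):
        """Combine a neighbour's reported value with the local one."""

class MaxVersion(InferenceModule):
    """Example instantiation: every node infers the newest code
    version number it has heard of (as in code-dissemination protocols)."""

    def initial_value(self):
        return 0

    def merge(self, local, received):
        return max(local, received)
```

The framework would own timers, message passing, and link-quality bookkeeping; a developer writes only `initial_value` and `merge` for each new inference.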
The paper continues as follows. The next section studies related work. Section 3 describes the problem and Section 4 analyzes the RUbIn requirements. Section 5 presents the RUbIn framework and Section 6 evaluates it. Finally, Section 7 concludes the paper and describes possible future work.
Related work
Some types of ubiquitous, active, and reliable inferences can be found within different software layers of many applications in WSNs. A framework like RUbIn brings efficiency and robustness to these inferences, which are essential prerequisites for the efficiency of the main applications relying on them. To the best of our knowledge, there is no similar framework to facilitate the development of such inferences. Only a few inference algorithms based on periodic message passing are found in some applications or middleware.
In WSNs, key-distribution algorithms are categorized into two types, random and regular [14]. In both types of these algorithms, traces of inference can be found in identifying the overlay neighboring nodes that are also physically neighboring nodes through a shared key, finding the overlay path, and finally, formulating the overlay network. With RUbIn, this inference can be simply and efficiently implemented such that not only existing nodes but also future joined nodes will participate in the algorithms.
In Mate [15] middleware, nodes actively infer the latest version of a code such that if a node obtains a newer version of a code, after a while, all nodes will obtain it. There are other protocols for the dissemination of codes in WSNs [16][17][18]. These protocols use a gossiping protocol to reliably disseminate the metadata of a new code to all the nodes and make them aware of the new code. In these protocols, if no change occurs for a while, then the period of gossiping will be increased to reduce maintenance cost; otherwise, the period is reset to its lowest value to increase the dissemination speed and hence, the inferring speed.
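The adaptive-period behavior described above (increase the gossip period while nothing changes, reset it on any change, as in Trickle-style dissemination protocols) can be sketched as follows; this is a simplified model, and the class name and default bounds are our illustrative assumptions:

```python
class AdaptivePeriod:
    """Trickle-style gossip timer sketch: the period doubles after each
    stable round (up to t_max) and resets to t_min on any change."""

    def __init__(self, t_min=1.0, t_max=3600.0):
        self.t_min, self.t_max = t_min, t_max
        self.period = t_min

    def on_stable_round(self):
        """No change observed this round: back off to save energy."""
        self.period = min(self.period * 2, self.t_max)
        return self.period

    def on_change(self):
        """A change was observed: gossip fast again to spread it."""
        self.period = self.t_min
        return self.period
```

Exponential backoff keeps the steady-state maintenance cost to a few messages per hour, while the reset keeps dissemination (and hence inference) fast after a change.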
The RUbIn framework is also based on gossiping protocols, with some programming interfaces to increase or decrease the gossiping period. Although RUbIn employs the idea of these two protocols, it is more than a dissemination protocol. For example, in many inference problems, such as inferring the average surrounding temperature, making an approximation of local density, or identifying the shortest path to a sink, each node may infer a different value. Consequently, many inference algorithms that can be developed in RUbIn are more complicated than merely inferring a shared datum (here, metadata). Furthermore, in many cases, inference algorithms require a measure to score the surrounding nodes or the information received from them. To this aim, RUbIn provides one of the most common measures, namely link quality, as an existing default service. In many inference problems, the quality of links to surrounding nodes can be exploited to develop more efficient or precise algorithms.
In the collection routing protocol in [19], a tree is established to collect information from nodes. Through a gossiping protocol and a link quality estimator, an efficient, robust, and reliable routing protocol in WSNs is achieved, even if the number of topology changes is high. The design of the RUbIn architecture is inspired by this protocol to take advantage of its characteristics.
In [20], middleware for simplifying application development in WSNs using the publish/subscribe model is proposed. Behind this middleware is a routing protocol based on a tree construction, which should be updated with any change in publishers, subscribers, or the network topology. In this middleware, periodic beacons are used to establish a routing tree, while in RUbIn, a more stable routing tree can be efficiently inferred.
Problem statement
There is a multi-hop WSN consisting of n nodes. Every node m executes an application a_m, which can be different from or identical to the other applications. The link between nodes m and k, denoted by l_{m,k}, can vary in quality for several reasons, such as noise, congestion, battery energy reduction, periodic sleep, etc. All the nodes, regardless of their running applications, have an active inference on a deterministic set of information C = {I_1, I_2, ..., I_|C|} in their interaction with each other. The value of any information I_j at node m at time t, denoted by I_j(m, t), where 1 <= j <= |C|, may vary in all nodes over time. Every information item I_j in every node m is initialized with the value v0_j and then updated for various reasons, such as changes in the number of active nodes, variation in the quality of links, updates of the information of neighboring nodes, changes applied by the application or a user, or changes in sensed values. Even though these changes may be mild and localized, they may still affect the accuracy of information at other nodes. Therefore, all nodes should trace these changes and consider them in their inference algorithms. Also, they should inform the other nodes of any changes in their own information to ensure that, after a short time, the information at all nodes is accurate and up-to-date. In contrast, sometimes there is a long interval between changes and, meanwhile, the information is stable. In this situation, the message passing for keeping the information up-to-date is extra overhead. Thus, a mechanism is needed to moderate this overhead. In general, the inference framework should consider the following challenges:

- Reliability: Topology changes should not prevent a node from inferring accurate and up-to-date information. Thus, all nodes connected to the network should ultimately obtain any information needed to update their information;
- Inference speed: Latency in an inference after a change taking place anywhere in a network may have side effects on the efficiency or behavior of an application. Thus, updating information should start and propagate immediately from the origin of the change to wherever it is needed;
- Scalability: The efficiency of an inference framework should be independent of the size or density of the network, and it should preserve its characteristics in large or small and in dense or sparse networks;
- Maintenance cost: Resource constraints in WSNs, especially the energy constraint, should be considered in all mechanisms within the inference framework. Thus, all the above characteristics, namely reliability, inference speed, and scalability, should be achieved considering these constraints.
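The reliability requirement, that every connected node eventually obtains the information it needs, can be illustrated with a toy synchronous-gossip convergence check. This is a deliberately simplified model under our own assumptions (synchronous rounds, lossless links), not the RUbIn protocol itself:

```python
def converge(values, neighbours, merge):
    """Repeatedly merge each node's value with its neighbours' values
    until nothing changes; in a connected graph, every node then holds
    the globally merged value (e.g., the network-wide maximum)."""
    values = list(values)
    changed = True
    while changed:
        changed = False
        for m, nbrs in neighbours.items():
            for k in nbrs:
                merged = merge(values[m], values[k])
                if merged != values[m]:
                    values[m] = merged
                    changed = True
    return values
```

For an idempotent, commutative merge function (like max), the loop terminates with all connected nodes agreeing, which is the eventual-consistency property the challenges above demand of a real, lossy, asynchronous network.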
Furthermore, in many inference algorithms, link quality is a common measure for scoring the surrounding nodes or the information received from them. Using link quality in inference algorithms may result in more stable information and lower cost. In other words, inference algorithms that rely on information received over more stable links avoid temporarily inferred values and their side effects (successive inferences). Therefore, not only is the accuracy of the information increased, but its maintenance cost is also decreased.
Requirements analysis
In this section, we analyze the requirements of the inference framework mentioned in Section 3 and discuss important points that should be considered in its architecture.
Monitoring all changes that have an effect on information is one of the requirements of a ubiquitous and reliable inference algorithm. We studied these changes and divided them into three categories. In other words, three main factors were identified:
1. Application: Running applications may have their own values to contribute to an inference. Sometimes, they reset the information to a given value. The given value can be the result of a new sensor value, a new command from a user, the logic of the application, etc. Thus, the inference framework should provide an interface for the applications to contribute their values to the inference;
2. Time: Occasionally, the elapsed time from a given point in the past, like the latest update time or the latest confirmation time, may be considered in inference algorithms. Thus, time is another factor affecting the information. Generally, in inference algorithms, the time factor appears as a periodic check of the validity of information. In inference problems in which elapsed time is not important, this factor is not considered;
3. Message: The most common factor in WSNs, which participates in all inference algorithms, is the messages received from neighboring nodes. Nodes should inform each other about their information and merge the received information with their own. Changes in the values received from neighboring nodes are the results of changes in the topology or the nodes, caused by the three main factors acting in the neighboring nodes.
The information is initialized at the beginning of the inference algorithm and then updated by these three factors. To better understand these three factors, three examples of inference problems with different levels of complexity are explained.
1. Providing a shared memory: To realize a shared memory in WSNs, all nodes should have their own allocated memory, which is always updated with the latest modification at any node. In this inference problem, all nodes infer the values of the node at which the latest modification occurred. Here, the two factors of application and message are effective. The application factor initiates a change in the local memory of a node, while the message factor transmits an update to the memories of all the other nodes. Here, the time factor has no role;
2. Consensus on a quantity: One of the research topics in WSNs is the consensus problem. For example, when all nodes have their own values of a quantity and they all want to infer the maximum value, a consensus to find the maximum quantity is required. This is an inference problem in which all three factors are present. The application factor contributes a value of the quantity to the inference. The time factor checks whether the last consensus result is still valid by keeping the elapsed time from the latest update. The message factor informs all the nodes of any change in the consensus result at any node;
3. Finding a robust route to a sink: In a WSN, one or more nodes play the role of sink to collect information. Nodes, in interaction with each other, find a route (usually the shortest robust path) to a sink to send their information. In this inference problem, all three factors are present. The application factor allows only a node to introduce itself as a new or removed sink. The time factor checks the route validity so that if the current route is not confirmed for a given time period, it expires. The message factor informs other nodes if a new route is found at any node. The other nodes update their routes if a better one (a shorter robust path) is found.
Almost all inference algorithms have a data structure for the inferred information and the required metadata. The data structure consists of multiple data fields, which can be divided into private and public parts. Both of these parts may change over time, but only the public part is sent to the neighboring nodes.
Because of the high dynamics of the network, ensuring the reliability and robustness of WSN applications is possible only through repetition. In other words, a message from a node will be reliably received by its neighboring nodes if it is periodically disseminated a finite or infinite number of times. Thus, to reliably achieve a precise inference in all nodes after a change in any node, periodic information dissemination, like the gossiping protocols in wireless networks, is needed.
Wireless communication is the main energy consumer in a sensor node. Thus, hereafter, cost refers to the number of sent messages. The speed and cost of gossiping protocols are inversely related to the gossiping period; i.e., the shorter the period, the higher the speed and the cost and vice versa. Thus, a dynamic gossiping period is recommended.
Due to repetition in gossiping protocols, increasing the network size will not decrease efficiency, unless this increase brings an excessive increment in density. A gossiping protocol in a dense network results in congestion and collision of messages and, consequently, reduced efficiency. In most inference algorithms, the number of messages needed for a reliable inference in a proximity is independent of the number of nodes located in that proximity. This fact is not considered in gossiping protocols. A solution to this problem is to provide a mechanism by which the nodes that eavesdrop on the messages in their proximity can eliminate sending when it is wasteful. Consequently, whenever the density of a proximity increases, the probability of such eliminations also increases. Therefore, this mechanism can restrict sending to a small number of messages in each proximity.
Link quality estimation in WSNs is a kind of ubiquitous inference algorithm that is frequently needed in many other inference algorithms. The 4-bit link estimation algorithm [21] is adapted in our framework. To reduce the overhead of this algorithm, its messages (beacons) can piggyback on other messages of the framework.
Framework design
The RUbIn framework is a general solution to the inference problems mentioned in Section 3. This framework facilitates the development of inference algorithms by providing all the functionalities common to this class of inferences. We designed the RUbIn framework with regard to the analysis of its requirements in Section 4.
RUbIn framework stack
As depicted in Figure 1, the stack of the RUbIn framework consists of three layers and each layer has a data unit.
In the information inference layer, information is included in a data structure consisting of public and private parts. Both of these parts participate in an inference algorithm and are accessible by the applications, but only the public part is available to the lower layers and, consequently, to other nodes. Thus, the length of the public part is restricted to a few bytes less than the maximum packet size so that it can be sent in one packet. Unlike the public part, the private part consists of information only beneficial to the current node, with an arbitrary length.
The gossiping control layer controls the dissemination of information. As depicted in Figure 1, the data unit of this layer adds an 8-bit unique identifier of the information as a header to the public part of the upper-layer information.
The network access control provides services for the gossiping control layer to interact with the network. The data unit of this layer (the RUbIn data unit) consists of a header and a footer in addition to a gossiping message of the upper layer. Both the header and the footer are used for link quality estimation according to our modified version of the 4-bit link quality estimation algorithm. The header contains two fields: an 8-bit field as the sequence number of the sent messages and another 8-bit field consisting of 4 bits as the number of entries in the footer and 4 bits as the flags used in the link quality estimation algorithm. The footer consists of some pairs, each including a 16-bit node address and an 8-bit estimation of the input link from this node. The number of pairs, N, depends on the extra available space of each packet, so a round-robin manner is used to send all such pairs. If we consider L_pckt as the maximum length of a packet and L_pub as the length of the public part of information, then we have L_pub ≤ L_pckt − 3. Therefore, N = ⌊(L_pckt − L_pub − 3) / 3⌋.
When N > 0, the information of the estimation algorithm can piggyback on gossiping messages. Thus, at least in one case, we should have L_pub ≤ L_pckt − 6 to attain a precise estimation of the links.
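The packet-layout arithmetic above can be sketched as follows. This is an illustrative Python sketch, not the framework's NesC code; it assumes 3 bytes of RUbIn overhead (1-byte information identifier plus the two 8-bit header fields) and 3 bytes per footer pair (16-bit address plus 8-bit estimate), as described in the text.

```python
def footer_pairs(l_pckt: int, l_pub: int) -> int:
    """Number of 3-byte <node address, link estimate> pairs that fit
    in the footer after the public part and the 3 bytes of headers:
    N = floor((L_pckt - L_pub - 3) / 3)."""
    if l_pub > l_pckt - 3:
        raise ValueError("public part must satisfy L_pub <= L_pckt - 3")
    return (l_pckt - l_pub - 3) // 3

# For example, with an (assumed) 28-byte payload and a 10-byte public
# part, five estimation pairs can piggyback on each gossiping message:
print(footer_pairs(28, 10))  # -> 5
# Piggybacking requires N >= 1, i.e. L_pub <= L_pckt - 6:
print(footer_pairs(28, 22))  # -> 1
```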
RUbIn framework architecture
In Figure 2, the architecture of the RUbIn framework and its layers is depicted by a component diagram. In this diagram, components are divided into skeleton and extended components. The skeleton components are components that have already been implemented in the RUbIn framework, while the extended ones denote the components that users develop and add to the framework. Therefore, the network access layer and the gossiping control layer belong to the skeleton part, while the information inference layer has components in both parts. In the following, we describe each of the RUbIn components in more detail.
The network access control component manages the transmission of RUbIn packets between network access layers of neighboring nodes. This component uses the network interface provided by the operating system to distinguish, send, and receive RUbIn packets from the network.
The estimation information can piggyback on messages of the dissemination engine through the link quality estimator component. Therefore, link quality estimation is performed during the gossiping of other information; when there is no information to infer or no space for piggybacking, quality estimation of the links would be infeasible. To solve this problem, when the sending rate of information on the links is lower than a threshold, we send a few distinct beacons to achieve a precise estimation of the links. The following services to access the quality of input, output, and bidirectional links are provided by the link quality interface for use in inference modules:
getBackwardLinkQuality(neighborId: Addr): int;
getForwardLinkQuality(neighborId: Addr): int;
getLinkQuality(neighborId: Addr): int.
The exponential timer component provides an array of exponential timers for the dissemination engine, one for each piece of inferred information. An exponential timer is a virtual timer whose period T increases exponentially: it is initialized at a minimal value t_ℓ (about a few seconds) and automatically becomes c times longer after each period. The period increases at most k times and finally reaches a maximum value t_h, where t_h = c^k · t_ℓ (about a few hours). Afterwards, the period remains constant, so the timer fires only a few times a day. When an exponential timer fires, it requests the dissemination engine to send the corresponding information. To decrease the probability of congestion, collision, and energy waste, we follow the idea of the trickle timer [22] so that in a period T, the timer fires at a random time t_tick, where t_tick ∈ [T/2, T], instead of at the end of the period. At time t_tick in each period, if a timer has not received a cancellation request from the dissemination engine, it fires. Each timer can be reset by the dissemination engine to t_ℓ at any moment, even before T = t_h.
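The behavior of one such timer can be sketched in a few lines. This is a minimal simulation of the schedule described above (exponential growth from t_ℓ to t_h = c^k · t_ℓ, firing at a random point in the second half of each period), not the framework's implementation; the class and parameter names are assumptions.

```python
import random

class ExponentialTimer:
    """Trickle-style exponential timer sketch: period starts at t_l,
    is multiplied by c after each round, and is capped at t_h = c**k * t_l."""
    def __init__(self, t_l=2.0, c=2.0, k=9):
        self.t_l, self.c, self.t_h = t_l, c, t_l * c**k
        self.reset()

    def reset(self):
        """Reset to the minimal period, e.g. on a SendFast request."""
        self.period = self.t_l

    def next_fire_offset(self):
        """Pick a fire time t_tick in [T/2, T] of the current period,
        then grow the period for the next round (capped at t_h)."""
        t_tick = random.uniform(self.period / 2, self.period)
        self.period = min(self.period * self.c, self.t_h)
        return t_tick

timer = ExponentialTimer()        # with t_l = 2 s, c = 2, k = 9
print(round(timer.t_h))           # -> 1024 (the stability-state period)
```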
The dissemination engine is the core of the RUbIn framework and is responsible for information gossiping. This component interacts with a number of information control units equal to the number of inferred information units, and with an exponential timer component providing one virtual timer per information unit. When this engine is initiated, it initiates all the information control units and then requests the exponential timer component to launch an exponential timer per control unit. Then, when a timer fires, the dissemination engine sends a gossiping message consisting of the public part of the corresponding information to the link quality estimator component. Also, if a gossiping message is received from the link quality estimator component, this engine delivers it to the corresponding information control unit. Furthermore, the dissemination engine provides the following services for each of the information control units with the aim of managing the dissemination period. In other words, we summarize all modifications to the default gossiping trend of each information unit by the following four services.
1. SendFast(): This service increases the dissemination speed of the information. To this aim, it resets the exponential timer to t_ℓ, which increases the dissemination speed and, consequently, the inference speed of the corresponding information for a while. In contrast, this service also increases the inference cost;
2. SendImmediately(): This service immediately sends the information. The engine normally disseminates the information once per period, but immediate sending of a message before or after t_tick may occasionally be needed. This service does not influence the dissemination period, but can be used instead of SendFast in inference algorithms to immediately disseminate information and increase inference speed with no significant change in cost;
3. BeQuiet(): This service eliminates the dissemination of information in the current period. In fact, if this service is invoked before t_tick, the corresponding exponential timer will not fire in the current period. This service does not influence the dissemination period and is used to decrease the inference cost, especially when the density of nodes is very high;
4. SendImmediatelyAndFast(): This service combines the first two services such that, at first, immediate sending is performed and then the dissemination period is reset to t_ℓ, with the aim of increasing the inference speed. With this service, propagation over one path is at least t_ℓ/2 per hop less than with the SendFast service, and can in total be about a fraction of one second.
Some information control units are in interaction with the dissemination engine. A programmer instantiates as many of these units as the number of distinct inferences required in an application, so that each one knows its information structure and the relevant inference module defined by the programmer. An information control unit is a gateway for all three main factors to participate in an inference. In other words, an information control unit can receive messages (message factor) from the dissemination engine, commands from the application (application factor) using the application interface, and check requests from a dedicated periodic timer (time factor).
The inference module of information is the only place in which the information can be modified. The information is initialized in this module and then modified in response to the requests of the three main factors over time. The requests of these factors are sent to this module by the corresponding information control unit using the inference interface. Therefore, for each inference module, the following list of services (defined in the inference interface) should be implemented. The information control unit initializes the information by calling the init service. Then, it handles requests of the application, time, and message factors by calling the set, check, and aggregate services, respectively. The return value of these three services is the only means a programmer has to manage the dissemination trend and, consequently, the inference speed and cost. The set service sets or merges the information value with an application value. The check service performs periodic validation or modification of information, if needed. Finally, the aggregate service aggregates a newly received message with the local information. Since the sender identifier or the quality of its input or output links is needed in some inferences, the sender identifier is also available in a request for the aggregate service. The type of the return values (DissCmd) in these services is one of the following five values: enum DissCmd {GoOn, SendFast, SendImmediately, BeQuiet, SendImmediatelyAndFast}. If the return value is GoOn, the information control unit will not do anything. Otherwise, it calls the equivalent service of the dissemination engine for the information.
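The shape of such an inference module can be sketched as follows. This is a Python sketch of the interface described above, not the framework's NesC code; the toy MaxConsensus module (the consensus example from Section 4) and its details are assumptions used for illustration.

```python
from enum import Enum

class DissCmd(Enum):
    GoOn = 0
    SendFast = 1
    SendImmediately = 2
    BeQuiet = 3
    SendImmediatelyAndFast = 4

class MaxConsensus:
    """Toy inference module: consensus on the maximum of a quantity."""
    def init(self):
        self.value = 0                        # inferred information

    def set(self, value) -> DissCmd:          # application factor
        if value > self.value:
            self.value = value
            return DissCmd.SendImmediatelyAndFast
        return DissCmd.GoOn

    def check(self) -> DissCmd:               # time factor (periodic)
        return DissCmd.GoOn                   # the maximum never expires here

    def aggregate(self, msg, sender) -> DissCmd:  # message factor
        if msg > self.value:
            self.value = msg
            return DissCmd.SendFast           # new maximum inferred
        if msg == self.value:
            return DissCmd.BeQuiet            # redundant: suppress sending
        return DissCmd.SendImmediately        # neighbor is behind: tell it

m = MaxConsensus()
m.init()
print(m.set(7))                  # application contributes 7
print(m.aggregate(9, sender=4))  # a neighbor reports 9 -> adopt it
print(m.value)                   # -> 9
```

The information control unit would translate any non-GoOn return value into the matching dissemination-engine service.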
An application can access the following services provided by the information control unit using the application interface. The first two services request the information control unit to set and get the information, while the third one asks it to register an event handler for the case that a modification occurs. The last service is used to activate the time factor by setting the period duration to a positive value. A zero value means that no time factor is needed.
In summary, to add a new inference algorithm, a programmer should define the data structure of the information and implement the init, set, check, and aggregate services.
Evaluation
We evaluate the RUbIn framework from the two aspects of effectiveness and efficiency. To this aim, with the two examples in Section 6.1, we demonstrate how an inference algorithm can be implemented. Then, in Section 6.2, we evaluate the effectiveness of RUbIn and the efficiency of inferences developed with RUbIn.
Developing inferences in RUbIn
We develop the pseudo-codes of the two inference problems discussed in Section 4 using our framework. The required data structures and the services of the relevant inference modules are defined. The development of these two examples demonstrates how the RUbIn framework effectively helps the programmer to focus on the inference algorithm and simply manage the dissemination trend.
6.1.1. Awareness of the latest version of an application (shared memory)
In every in-situ reprogramming protocol in WSNs, all nodes should be aware of the latest version of an application introduced by any node and make an effort to receive it. Furthermore, when a node has recently resumed from sleep mode or has joined the network, it should also infer the information and then proceed to receiving any new applications, if required. Thus, a ubiquitous and reliable inference about the application version is appealing, and it can be simply and efficiently implemented using RUbIn. The required data structure and the services of the relevant inference module are depicted in Algorithm 1.
As can be seen in this algorithm, the information does not have a private part. The application version is an ordered pair of a version number and a node address, <VerNo, Addr>. The element Addr is the address of the node introducing a new version (VerNo) of an application to the network. When more than one new application is introduced simultaneously at different nodes, all will be labeled with the same version, which is greater than the latest known version. To break the tie, when the VerNos are identical, the information with the largest Addr will be inferred in all nodes. The set service is in charge of introducing a new version of the application to a node. This service returns a SendImmediatelyAndFast request to the inference module to immediately and more frequently disseminate the new version information. Here, the version information never expires and thus, the time factor (the check service) does not influence the inference.
The aggregate service processes all received messages with the aim of inferring the latest version considering speed, cost, and scalability. In other words, when a node hears the same information as it already has, it returns a BeQuiet request in line 24 of Algorithm 1 to eliminate the dissemination in the current period. This brings scalability to the inference, independent of the network density. When a new version is inferred, the SendFast request (lines 27 and 34) is returned to request more frequent information dissemination. Also, when a lower version is heard from one of the neighboring nodes, the SendImmediately request (lines 29 and 36) is returned to immediately inform the neighboring node of the newer version without any change in the dissemination period. Incorrect use of these return values can significantly decrease the inference speed or increase its cost.
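The core of this aggregate logic can be sketched compactly. This is an illustrative Python sketch of the behavior just described, not Algorithm 1 itself; Python's tuple comparison gives exactly the described ordering on <VerNo, Addr> (version first, node address as the tie-breaker).

```python
def aggregate_version(local, received):
    """Merge a received <VerNo, Addr> pair into the local one and
    return (new_local, dissemination_command)."""
    if received == local:
        return local, "BeQuiet"          # same info: suppress this period
    if received > local:
        return received, "SendFast"      # newer version inferred
    return local, "SendImmediately"      # neighbor is behind: correct it

local = (3, 17)                          # VerNo 3, introduced by node 17
print(aggregate_version(local, (4, 5)))  # -> ((4, 5), 'SendFast')
print(aggregate_version(local, (3, 20))) # tie on VerNo, larger Addr wins
print(aggregate_version(local, (2, 99))) # -> ((3, 17), 'SendImmediately')
```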
6.1.2. Finding the shortest path to a sink (routing)
In a WSN, some nodes play the role of a sink to collect information from the other nodes. Finding an optimum path to one of these sinks is the problem of routing algorithms. This problem is an inference needed at all nodes and should be quickly updated when a change in the network topology occurs. An optimum path has different definitions; here, we consider two of them. The first is to find the shortest path to a sink, and the second is to find the shortest robust path (the path through which the intermediate links have an appropriate quality to relay messages) to a sink. To this aim, the getLinkQuality service of the link quality estimator component (named LE here) is utilized. Each path is specified by its length, next hop, and update time. The public part consists of the length, while the private part encapsulates the other two. The required data structure and the services of the relevant inference module are depicted in Algorithm 2.
The sink nodes (HopCount = 0) are added and removed using the set service. The check service investigates the validity of a path in non-sink nodes. A path should be confirmed or updated once every MAXVALIDTIME seconds; otherwise, it expires. In both the set and check services, if a change in the information occurs, SendImmediatelyAndFast is returned. Therefore, all the paths will be quickly updated according to the new change.
In the aggregate service, only messages of senders whose bidirectional link quality is equal to or greater than LQTHRESHOLD are processed. This condition means that each node selects a path in which the link to the first node of the path is qualified. Compliance with this condition implicitly results in reliable and qualified paths at all of the nodes. This condition is checked in line 29 of Algorithm 2. Removing this condition results in the shortest path to a sink, while keeping it results in the shortest robust path to a sink at each node. In this service, when a node hears information the same as what it has, it sends the BeQuiet request to eliminate redundant sending in its proximity. Furthermore, when a better path is inferred, by a SendFast or SendImmediatelyAndFast request, the node will inform the other nodes to update accordingly. Also, if a node infers that its path is a better one for one of its neighbors, it informs the neighbor with a SendImmediately request. In other situations, the GoOn request is returned.
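The decision structure of this aggregate service can be sketched as follows. This is a Python sketch of the described behavior, not Algorithm 2 itself; the threshold value, the 0-100 quality scale, and the function signature are assumptions for illustration.

```python
LQ_THRESHOLD = 70        # assumed link-quality threshold (0-100 scale)

def aggregate_path(local_hops, sender, sender_hops, link_quality):
    """Merge a neighbor's advertised hop count into the local path.
    Returns (hop_count, next_hop_or_None, dissemination_command)."""
    if link_quality < LQ_THRESHOLD:
        return local_hops, None, "GoOn"            # link not qualified: skip
    if sender_hops + 1 < local_hops:
        return sender_hops + 1, sender, "SendFast" # better robust path found
    if sender_hops == local_hops:
        return local_hops, None, "BeQuiet"         # same info heard nearby
    if sender_hops > local_hops + 1:
        return local_hops, None, "SendImmediately" # our path helps the sender
    return local_hops, None, "GoOn"

print(aggregate_path(5, "B", 2, 90))  # -> (3, 'B', 'SendFast')
print(aggregate_path(3, "B", 1, 40))  # unqualified link -> (3, None, 'GoOn')
print(aggregate_path(3, "B", 9, 90))  # -> (3, None, 'SendImmediately')
```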
Studying the effectiveness of RUbIn and the efficiency of inferences
We implemented RUbIn and the two inference samples given in Section 6.1 with NesC on TinyOS. Then, we examined both of these samples with a TinyOS simulator, namely TOSSIM. Also, we examined these samples in a real testbed with MicaZ nodes. We establish a multi-hop WSN with the prerequisite conditions for each experiment. Some initial changes needed to make the network ready for the experiment are made. We wait for the whole network to become stable after the initial changes. Then, according to the experiment, a new change in information is made somewhere in the network, and all the subsequent changes in the network for a few hours are studied. Hereafter, we refer to a change occurring when the whole network is in the stability state as a wake-up change. Also, the inference time of a node refers to the duration between the occurrence time of a wake-up change in the network and the consequent inference of the information in that node. Inference speed is the inverse of inference time. The maximum inference time over all nodes is the inference time of the network. Also, the instability time of the network refers to the duration between the occurrence time of a wake-up change in the network (T = t_ℓ in the changing node) and the time the whole network becomes stable (T = t_h in all nodes) again. During the instability time, at least one node with a period T such that t_ℓ ≤ T < t_h exists.
6.2.1. Evaluation by the TOSSIM simulator
TOSSIM enables us to run a real application on a virtual network with custom settings. Accordingly, we tested RUbIn on large-scale networks with TOSSIM. Figure 3 demonstrates the reliability of inferences developed in RUbIn. This figure shows the inference times of nodes for the inference of the latest version of an application and the shortest robust path to a sink in four different topologies of a 20 × 20 grid network. These four networks are distinguished by the distances between physically neighboring nodes, which are 15, 20, 25, and 30 meters.
In the inference of the latest version of an application (Figure 3(a)), at time zero, the top-right node introduces a new version to the network. After a short time, between 8 and 20 seconds, all 400 nodes infer the latest version of the application. As depicted in this figure, the new information is disseminated through the network from the source node to all other nodes. Nevertheless, there are nodes that infer the new information later than their neighbors due to collisions and topology changes. The repetition characteristic of RUbIn reliably leads to updates in these nodes as well.
In the inference of the shortest robust path to a sink (Figure 3(b)), the node in the bottom-left is a sink. At time zero, the top-right node introduces itself as a new sink. The nodes that are closer to the new sink will update their paths. The change in path information is disseminated from the top-right node to the middle of the network. Path information of the other nodes will not change, as they are still located closer to the old sink. Like the prior sample, there are nodes that infer the new information later than their neighbors. Nevertheless, the repetition causes all nodes to reliably infer the correct information after a short period.
Although the network is a large multi-hop one in these inferences, the speed of inference is on the order of seconds (at most one minute). Figure 4 demonstrates the scalability of inferences developed in RUbIn. We show the inference time for the inference of the latest version of an application (Figure 4(a)) and the shortest robust path to a sink (Figure 4(b)) in 10n × n (n ∈ {1, 2, 3, 4, 6, 8, 12, 16, 20, 28}) grid networks. In each of these figures, we show the effect of increasing the number of nodes in two different scenarios. In the first scenario (fixed distance), the distance between neighboring nodes is fixed at 25 meters, while in the second scenario (fixed area), all nodes are placed in an environment with a fixed area (50 × 50 meters).
In the fixed distance scenario, nodes are added with the aim of increasing the covered area and the diameter of the network. An increase in the diameter of the network results in an increase in its inference time. Figure 4 demonstrates that the ratios of the inference times of two networks, for both inference examples, are almost equal to the ratios of their network diameters, while in some rare cases, they are at most equal to the ratios of their covered areas.
In the fixed area scenario, an increase in the number of nodes leads to an increase in density with a mild change in the diameter of the network. Increasing the density leads to an increase in collisions in the network, which leads to a longer inference time. Nevertheless, the BeQuiet mechanism prevents redundant messages and their collisions in dense networks. Accordingly, we can observe in the figure that the increase in the number of nodes only slightly affects the inference times in both inference examples.
In Figures 5 and 6, the maintenance costs of both inference examples in the stability state are illustrated. We examined such inferences in a grid network of 400 nodes (20 × 20) with a distance of 20 meters between physically neighboring nodes for 10 hours after the stabilization of inferences. The cost was measured in terms of the number of sent messages. Figures 5 and 6 compare the maintenance costs of the two inference examples with a case in which the dissemination period is the constant t_ℓ (2 seconds in our implementation) in order to maximize dissemination speed (max-speed) and a case in which the dissemination period is the constant t_h (1024 seconds in our implementation) in order to minimize maintenance cost (min-cost). Figure 5(a) shows the maintenance cost for the inference of the latest version of an application (API) using RUbIn in comparison with the max-speed and min-cost scenarios during 10 hours. These costs are drawn as log₂ of the number of sent packets. As a result, the precise use of the BeQuiet mechanism reduces the maintenance cost of this inference to even less than the cost in the min-cost scenario. Figure 5(b) shows the probability density function of the number of sent packets per node and its average during 10 hours. The average number of sent packets per node during 10 hours is about 21, which is 15 packets fewer than in the min-cost scenario. Figure 6(a) shows the maintenance cost for the inference of the Shortest Robust Path (SRP) to a sink using RUbIn in comparison with the costs of the max-speed and min-cost scenarios during 10 hours. The maintenance cost of this inference is again less than the cost of min-cost thanks to the use of the BeQuiet mechanism. Figure 6(b) shows the probability density function of the number of sent packets per node and its average over 10 hours. The average number of sent packets during 10 hours is about 30, which is 6 packets fewer than in the min-cost scenario.
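A quick back-of-the-envelope check relates these numbers: with one periodic send per t_h = 1024 s period, the min-cost baseline over 10 hours is roughly 10 · 3600 / 1024 ≈ 35 sends per node (the exact count depends on the random fire point within each period), so the reported averages of about 21 (API) and 30 (SRP) do lie below the min-cost baseline, as claimed.

```python
# Baseline: one periodic send per 1024-second period in min-cost mode.
hours = 10
min_cost_sends = hours * 3600 / 1024
print(round(min_cost_sends))   # -> 35 periodic sends per node over 10 h
# The reported RUbIn averages, 21 (API) and 30 (SRP), are both below it.
print(21 < min_cost_sends and 30 < min_cost_sends)  # -> True
```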
These figures demonstrate the efficiency of inferences based on RUbIn: using RUbIn and its mechanisms leads to an inference speed equivalent to that of the max-speed scenario and a maintenance cost less than or comparable to that of the min-cost scenario.
Evaluation in a real testbed
We also evaluated and examined both developed inferences in a real testbed with MicaZ nodes. To this aim, we constructed a multi-hop network (Figure 7) in the laboratory by decreasing the RF power of the nodes. We assumed c = 2, t_h = 1024 seconds, and t_ℓ = 2 seconds. Inference about the new path begins from the areas around the top-right node and spreads to the middle of the network. The inference time in all of these scenarios is between 3 and 4 seconds. In both of these examples, the repetition causes all nodes to reliably infer the correct information after a short period of time. Figures 9 and 10 illustrate the scalability of these inferences in the two scenarios of fixed distance and fixed area.
Figures 9(a) and 10(a) show that when the number of nodes increases, the inference time in the fixed distance scenario increases such that the ratio of the inference times of two networks is approximately on the order of the ratio of their covered areas, while in the fixed area scenario, the inference time is approximately constant.
Also, in Figures 9(b) and 10(b), for both the fixed distance and fixed area scenarios, it is evident that the instability time of the network is between 800 and 900 seconds. After a change, the dissemination period is reset to 2 seconds and then doubled each time. Finally, after 9 periods, it becomes 1024 seconds. Hence, it takes 510 (2 + 4 + 8 + 16 + 32 + 64 + 128 + 256) seconds to reach the beginning of the period with a duration of 512 seconds. Between the middle and the end of this period, nodes can send information, and then the next period is set. Therefore, when the period duration is 512 seconds, a message may be sent between seconds 256 and 512 of this period, and then, for the next period, the duration of 1024 seconds (the stability-state period) is used. In other words, the instability state lasts about 766 to 1024 seconds after the last change, and we observe this in the empirical experiments depicted in Figures 9(b) and 10(b). These figures demonstrate that the duration of the instability state is about 14 minutes after a significant change.
To ensure fast and reliable inference, even in a highly dynamic topology, this duration should be respected.
Figures 9(c) and 10(c) depict the total number of periodic and sporadic messages sent during the instability time. Depending on the type of inference, the number of messages sent in this interval varies. For the inference of the latest version of an application, during the instability time, the BeQuiet service decreases the number of sent messages to fewer than 9 (the number of periods it takes T to grow from 2 to 1024 seconds) messages per node. This effect is more pronounced in the scenario of increasing the number of nodes in a fixed area. Nevertheless, for the inference of the shortest robust path to a sink, because of numerous path changes and gossiping-period resets, the number of messages each node sends to infer a correct path increases slightly, to more than 9 messages per node. Thus, the instability-state cost is a few messages per node in most inference algorithms.
We measured the maintenance costs of the two inference examples in the stability state. To this aim, we traced 2 hours of sent messages after a change at time zero and considered the second hour as the stability state. The number of sent packets, the probability density function of the number of sent packets per node, and its average in the stability state are depicted in Figures 11 and 12. In these figures, the maintenance costs of the two inference examples are compared with those of the min-cost and max-speed scenarios.
In Figure 11(a), for the inference of the latest version of an application, it is evident that the cost in the stability state is always less than that in the min-cost scenario. In other words, after one hour in the stability state, based on Figure 11(b), each node sends on average 2 fewer messages than in the min-cost scenario. Therefore, in long-running executions of this inference, although its inference speed equals that of the max-speed scenario, its cost is less than that of the min-cost scenario. Indeed, the BeQuiet service brings this efficiency to the inference algorithm.
In Figure 12(a), for the inference of the shortest robust path to a sink, it is evident that the cost in the stability state is approximately equal to that of the min-cost scenario, i.e., on average 2.5 more messages are sent (based on Figure 12(b)). Therefore, in long-running executions of this inference, although the inference speed equals that of the max-speed scenario, the cost is slightly higher than that of the min-cost scenario.
Both inference examples, evaluated with the TOSSIM simulator and the testbed of MicaZ nodes, reveal the efficiency of our framework in developing inferences in terms of speed and cost while preserving scalability.
Conclusion and future work
In this paper, we proposed the RUbIn framework as an extendable middleware for the development of reliable and ubiquitous inferences in WSNs. We described the design of this framework and demonstrated that the RUbIn approach and its supporting mechanisms brought effectiveness to this framework. In other words, we showed that by using this framework, reliable inferences could be developed simply, independent of node density and coverage area. After a significant change anywhere in the network, information at all nodes could be quickly updated. Furthermore, we demonstrated that the mechanisms of RUbIn made inferences efficient, so that despite the high inference speed, the cost for each node was only a few messages sent per hour.
The RUbIn framework provided a completely distributed approach to solving inference problems. As a result, our framework facilitated the development of networked smart systems by reducing their design and implementation costs when a ubiquitous inference needed to be intelligent. Therefore, as future work, we will use RUbIn to develop IoT applications where local smartness is appealing, in line with the new computing paradigm named "Fog computing" [23].
Jafar Habibi is an Associate Professor in the Computer Engineering Department at Sharif University of Technology. His research interests include software engineering, software architecture and evolution, simulation and performance evaluation, and embedded and distributed systems. He received his PhD in Computer Science from the University of Manchester in 1998.
Erfan Abdi received his MSc in Computer Science from ETH Zurich in 2017. His research interests include distributed computing, design and analysis of algorithms and protocols for wireless sensor networks, and middleware for distributed systems.
Fatemeh Ghassemi received her PhD in Software Engineering from Sharif University of Technology in 2011, and in Computer Science from Vrije Universiteit Amsterdam in 2018. She has been an Assistant Professor at the University of Tehran since 2012, supervising the Formal Methods Laboratory. Her research interests include formal methods in software engineering, protocol verification, model checking, process algebra, software testing, and wireless systems.
"year": 2019,
"sha1": "209fcc4fffd4ba6b050fe7c3d10838b3edc9f19f",
"oa_license": null,
"oa_url": "http://scientiairanica.sharif.edu/article_21373_d0a7718e948f8a5143c696b436cae834.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "023d2e52d053d538a6938556c0d6ae8e4f88cdcb",
"s2fieldsofstudy": [
"Computer Science",
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Considerations on the effectiveness of educational strategies in outcomes related to workplace violence
Dear Editor: Workplace violence is a problem that affects health professionals both in Primary Health Care and in hospitals in various parts of the world 1,2 and is prevalent among nursing professionals. 3 This violence generates high human and socioeconomic costs, 4 creating the need for health institutions to develop plans to prevent violent incidents at the workplace. 5 Recommendations on the use of training programs on workplace violence are recurrent in the literature, and such programs are becoming a standard practice to help health care professionals in the care of aggressive patients and thus avoid injuries. 6 For this reason, we considered it important to conduct a review in order to obtain up-to-date information on the effectiveness of strategies with an educational focus aimed at nursing professionals for improving outcomes related to workplace violence.
To this end, the databases CINAHL (Cumulative Index to Nursing and Allied Health Literature), MEDLINE (via PubMed) and Web of Science were searched, and recent research articles reporting the application of educational strategies to improve aspects related to workplace violence, and mentioning at least the post-intervention effects, were gathered using three search strategies that integrated Medical Subject Headings (MeSH) terms, keywords and Boolean operators. This review demonstrated that certain strategies focusing on professional education have generated positive results regarding the identification, handling, and reduction of the frequency and risk of incidents. However, the reduction in frequency after the strategy had been applied was not determined in all the studies, and the results presented by the research sometimes diverged from one another, with significant pre- and post-intervention differences in some studies and none identified in others. This context indicates that it is necessary to search for studies on the theme of interest in the specialized literature, to assess the quality of the evidence and to identify aspects of the most effective educational and management strategies 6 for developing new educational programs in accordance with the existing purpose and resources.
We also recognize the importance of considering other aspects (e.g., structural aspects) that favor the occurrence of violent events at the workplace, since training professionals alone may not be enough to solve the problem. Furthermore, additional studies that evaluate the effects of new educational strategies, or of other strategies, on outcomes related to workplace violence are urgently needed in hospitals and at other levels of health care, such as Primary Health Care, and in some countries, for instance Brazil.
Countries and their healthcare institutions should be aware that the issue is not whether to fight workplace violence, but which plan of action should be used to prevent it or reduce the risk of its occurrence.
Funding
This study is related to the doctoral research of the first author, supported by grant #2016/06128-7, São Paulo Research Foundation (FAPESP), the National Council for Scientific and Technological Development (CNPq) and the Coordination for the Improvement of Higher Education Personnel (CAPES).
"year": 2018,
"sha1": "de649f07b30436508a03ab272e88df5c6fccd1fb",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.aprim.2017.09.013",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "983aff107dafb53dc35621c072d53aaeb042ae68",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
Journal of Intercultural Ethnopharmacology
Investigation on Hypoglycemic Effects of Ethanol Extract of Alpinia nigra (Gaertn.) in Animal Model
ABSTRACT
Background: Our study aims at exploring the hypoglycemic effect, efficacy, and possible mode of action of the ethanol extract of Alpinia nigra (EEAN) as an antidiabetic agent in an animal model.
Methods: An oral glucose tolerance test (OGTT) was used to identify the primary hypoglycemic effect in mice. Three tests (glucose absorption, sucrose absorption, and disaccharidase activity) were carried out by gut perfusion and six-segment studies to assess carbohydrate absorption and glucose utilization.
Results: In the OGTT, the 400 mg/kg and 800 mg/kg doses of EEAN significantly improved oral glucose tolerance in normal mice at 60 min and 90 min compared to the control. Both doses significantly (P < 0.01) reduced the blood glucose level and showed a hypoglycemic effect by retarding 11.43% and 20.82% of blood …
INTRODUCTION
Diabetes, a chronic metabolic disorder, is a major threat to global public health that is rapidly getting worse and has its biggest impact on adults of working age in developing countries. An estimated 246 million people worldwide have diabetes, of whom about 80% reside in developing countries [1]. Of the two types of diabetes, Type 1 involves immunological destruction of pancreatic β cells, resulting in insulin deficiency [2]. Type 2 diabetes mellitus (DM), the more prevalent form of the disease, is associated with both impaired insulin secretion and insulin resistance, and is often associated with obesity and hereditary disposition [3]. Multiple lines of therapeutic options have so far been designed and applied to treat diabetic ailments. However, synthetic antidiabetic agents themselves cause a number of problems due to their side effects, along with their higher costs [4]. As a result, alternative therapies are still being sought to avoid the adverse effects caused by synthetic antidiabetic agents.
Traditional preparations from plant sources have recently been used widely in almost every corner of the world as an alternative medication for diabetes due to their less harmful effects and lower prices. The World Health Organization Study Group on DM has also acknowledged the therapeutic advantages of plant medicines in diabetic management, as plants were the first option for antidiabetic therapy before the advent of insulin and oral hypoglycemic drugs [5]. Over the last two decades, plant materials have progressively been formulated and marketed as herbal drugs [6]. It has been estimated that in the U.S. 25% of all prescriptions dispensed from community pharmacies contain plant extracts [7].
Although the advent of synthetic drugs has, to a certain extent, improved people's health care, the use and importance of phytomedicines has never been neglected, and a large number of plants have been screened for their efficacy against diabetic and hyperglycemic diseases [8,9]. Alpinia nigra (Gaertn.) B.L. Burtt, which belongs to the Zingiberaceae family, is known as Jongly Ada or Tara in Bengali. This aromatic and rhizomatous herb is also referred to as Galangal, False galangal, Greater galangal, Black-fruited galangal, or Kala. A. nigra is used as a traditional medicine for DM; diabetic patients use it in various forms, e.g., the juice of A. nigra as a home remedy against DM. A. nigra, which is widely cultivated in Asia, Africa, and South America, is a versatile medicinal plant that has also been used therapeutically in the treatment of various diseases. Reported therapeutic activities of this plant include anti-inflammatory [10], analgesic, antibacterial, cytotoxic [11], anthelmintic [12], and anxiolytic-sedative [13] effects. Research has shown that compounds isolated from A. nigra exhibit good inhibition of α-glucosidase activity [14]. However, the hypoglycemic effect of A. nigra has not been evaluated by established methods.
In the present study, we first tried to determine the hypoglycemic effect of A. nigra by OGTT. We also tried to establish an indigenous system of medicine (herbal therapy) as an antidiabetic treatment instead of chemical drugs. The mode of action of A. nigra leaf extract in the treatment of diabetes was also investigated.
Chemicals and Reagents
Reagents of analytical grade and deionized water (Purite, Oxon, UK) were used for the study. Sodium pentobarbital was purchased from Sigma-Aldrich (St Louis, MO, USA). Sodium chloride, D-glucose, sucrose, ethanol, calcium chloride, potassium chloride, and sodium hydrogen carbonate were obtained from BDH Chemical Ltd (Poole, Dorset, UK). All kits were purchased from Boehringer Mannheim GmbH, Germany. A Wallac 1409 scintillation counter was supplied by Wallac, Turku, Finland, while the microwell plate ELISA reader was obtained from Bio-Tek, USA. A Rapid View™ blood glucose monitoring system (Model: BIO-M1, BIOUSA Inc, California, USA) with strips was purchased from Anderkilla, Chittagong. Glucose was purchased from the local scientific market, Chowkbazar, Chittagong. Glibenclamide was obtained from Square Pharmaceutical Ltd., Bangladesh.
Collection and Identification
Leaves of A. nigra were collected from the Bangladesh Centre for Scientific and Industrial Research (BCSIR), Chittagong, Bangladesh, in the month of April 2014. It was identified and authenticated by the standard taxonomical method at BCSIR.
Preparation of Plant Extract
The collected leaves (5 kg) were washed with fresh water and dried in the shade at room temperature (25°C). The dried leaves were ground into a fine powder with an electrical grinder (Wiley mill), and a mesh (mesh number 50) was used to sieve the sample. The powder of A. nigra leaves was then made into a paste by homogenizing with a mortar and suspended in preparation for the ethanol extraction. About 900 g of the leaves were soaked in absolute ethanol (99%) for 7 days and then filtered. The collected supernatant was dried using a rotary vacuum evaporator (BUCHI Rotavapor R-114). The semisolid crude extract was further dried in a water bath at 80°C. The dried extract (yield, 12%) was kept in a freezer (4°C) and used for biological screening.
Experimental Animals
Male Long-Evans rats aged 6-7 weeks (weighing approximately 110 ± 15 g) and Swiss albino mice were chosen for the study. The animals were bred at BCSIR (Chittagong, Bangladesh). They were acclimatized under standard conditions (temperature 23 ± 2°C, relative humidity 55%) and maintained on a 12 h light-dark cycle. A standard pellet diet and water ad libitum were supplied freely unless otherwise indicated. The overall nutrient composition of the diet was 36.2% carbohydrate, 20.9% protein, 4.4% fat, and 38.5% fiber, with a metabolisable energy content of 1.18 MJ/100 g (282 kcal/100 g). The animals were maintained in the laboratory and treated according to the schedule. Animals described as fasted were deprived of food for at least 12 h but allowed free access to drinking water.
Hypoglycemic Effect in Glucose-Induced Hyperglycemic Mice
An oral glucose tolerance test (OGTT) was performed according to the standard method [15] with minor modifications. Group I was treated as the normal control group, Group II was treated with glibenclamide (5 mg/kg body weight), and Groups III and IV were treated with the ethanol extract of A. nigra leaves at 400 mg/kg and 800 mg/kg body weight, respectively. A glucose solution (1 g/kg body weight) was administered first; the drug and extract solutions were then administered to the glucose-fed mice. The serum glucose level of blood samples from the tail vein was estimated using a glucometer at 0, 30, 60, 90, and 120 min. The percent decrease of the blood glucose level after 120 min was measured by the following equation,
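The equation referred to above did not survive extraction. A plausible reconstruction, assuming the standard formula comparing the 120 min reading with the 0 min baseline (the function name and example readings below are illustrative, not from the study), is:

```python
def percent_decrease(glucose_0min: float, glucose_120min: float) -> float:
    """Percent fall in blood glucose after 120 min relative to the 0 min baseline."""
    return (glucose_0min - glucose_120min) / glucose_0min * 100

# Hypothetical readings (mmol/L), not data from the study:
print(percent_decrease(10.0, 7.5))  # 25.0
```

A reading that falls from 10.0 to 7.5 mmol/L thus corresponds to a 25% decrease; the study reports such percentages per treatment group in Table 1.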
Sucrose Absorption from Gastrointestinal (GI) Tract
Rats were fasted for 12 h before receiving a 50% sucrose solution by gavage (2.5 g/kg body weight) with (experimental cases) or without (control cases) the ethanol extract of A. nigra (500 mg/kg body weight). Some of the rats were killed at each time point. The GI tract was excised and divided into six segments: the stomach; the upper, middle, and lower 20 cm of the small intestine; the cecum; and the large intestine. Each segment was washed out with acidified ice-cold saline and centrifuged at 3000 rpm (1000 g) for 10 min. The resulting supernatant was boiled for 2 h to hydrolyze the sucrose, followed by neutralization with NaOH. Blood glucose and the amount of glucose liberated from residual sucrose in the GI tract were measured. The GI sucrose content was calculated from the amount of liberated glucose [16].
Intestinal Glucose Absorption
An intestinal perfusion technique [17] was used to study the effect of A. nigra on intestinal absorption of glucose in 36 h fasted non-diabetic rats anesthetized with sodium pentobarbital (50 mg/kg). The ethanol extract of A. nigra (10 mg/ml, equivalent to 500 mg/kg), suspended in Krebs-Ringer buffer supplemented with glucose (54 g/L), was introduced through the pylorus, and the perfusate was collected from a catheter inserted at the end of the ileum. The control group was perfused with Krebs-Ringer buffer supplemented with glucose only. Perfusion was carried out at a rate of 0.5 ml/min for 30 min at 37°C, with the perfusate collected every 5 min. The results were expressed as the percentage of absorbed glucose, calculated from the amount of glucose in solution before and after the perfusion.
Intestinal Disaccharidase Activity
Rats fasted for 20 h were killed, and the small intestines were isolated, cut longitudinally, rinsed with ice-cold saline, homogenized with 10 ml saline (0.9% NaCl), and centrifuged at 3000 rpm (1000 g) for 3 min. Aliquots (20 µl) of the supernatant from the mucosal homogenate were mixed with 1 ml sucrose solution (40 mmol/L) in Eppendorf tubes. For the control group, aliquots (20 µl) of distilled water were added to the tubes; for the treatment groups, aliquots (20 µl) of A. nigra extract at 0.5 mg/ml, 1.0 mg/ml, 2.0 mg/ml, and 5.0 mg/ml were added, respectively. The tubes were then incubated at 37°C for 1 h. Disaccharidase activity was calculated from the glucose concentration liberated from sucrose and expressed as µmol glucose per mg protein per h [18].
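The activity calculation described above (glucose liberated from sucrose, normalized per mg protein per hour) can be sketched as follows; the function name and example values are illustrative assumptions, not data or code from the paper:

```python
def disaccharidase_activity(glucose_umol: float, protein_mg: float,
                            hours: float = 1.0) -> float:
    """Disaccharidase activity in µmol glucose per mg protein per hour."""
    return glucose_umol / (protein_mg * hours)

# Hypothetical measurement: 12 µmol glucose liberated by a homogenate
# containing 4 mg protein over a 1 h incubation.
print(disaccharidase_activity(12.0, 4.0))  # 3.0
```

Under this normalization, inhibition by the extract appears as a lower activity value for treated homogenates than for the water control at the same protein content.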
Statistical Analysis
Data were expressed as mean ± standard deviation (SD), n = 6 for all experiments. Analyses were performed by one-way analysis of variance (ANOVA) using statistical software (Statistical Package for the Social Sciences, version 19.0, IBM Corporation, NY), followed by Dunnett's t-test for comparisons. P ≤ 0.05 was considered significant.
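The first stage of this analysis pipeline, the one-way ANOVA F statistic, can be sketched from first principles. The data below are synthetic (not the study's measurements), and Dunnett's post-hoc test is omitted for brevity:

```python
def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA over lists of observations."""
    all_obs = [x for g in groups for x in g]
    grand_mean = sum(all_obs) / len(all_obs)
    # Between-group sum of squares: spread of group means around the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: spread of observations around their group mean.
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(all_obs) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Synthetic readings for a control group and two treatment groups:
print(one_way_anova_F([[1, 2, 3], [2, 3, 4], [3, 4, 5]]))  # 3.0
```

The resulting F value is compared against the F distribution with (k-1, N-k) degrees of freedom; in practice a package (here SPSS) reports the corresponding P value directly.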
Hypoglycemic Effect in Glucose-Induced Hyperglycemic Mice
Experimental induction of hyperglycemia resulted in increased blood glucose levels in mice, as shown in Table 1. Neither dose of the leaf extract produced a significant reduction at 30 min after administration. The most significant reduction (P < 0.05) was observed for the 800 mg/kg dose of the ethanol extract of A. nigra at 120 min; at this time point, this dose also showed a significant reduction (20.82%) of the blood glucose level. Standard glibenclamide (5 mg/kg) showed a significant reduction at 30, 60, 90, and 120 min, decreasing the blood glucose level by 40.82% from its initial (0 min) value. The time interaction at each specific time point in this experiment was also found to be significant (P < 0.05). The percentage decrease of blood glucose levels in glucose-induced mice after 2 h under the different treatments is also shown in Table 1.
Effects on Sucrose Absorption from GI Tract
Results are expressed as mean ± SD in mg. Administration of the extract of A. nigra (500 mg/kg) with the sucrose load in rats increased the residual intestinal sucrose content (mg) significantly (P < 0.05) at 30 min in the stomach (Figure 1).
Effects on Intestinal Glucose Absorption
As shown in Figure 2, intestinal glucose absorption (%) in non-diabetic rats was almost constant during 30 min of perfusion. [Table 1 footnote: values are presented as mean ± SD (n = 6); EEAN = ethanol extract of A. nigra leaves; values with different superscripts in the same column are significantly different from the control at each specific time point after administration of the standard and the different doses of the extract (*P < 0.05); one-way ANOVA followed by Dunnett's multiple comparison test; A. nigra: Alpinia nigra; SD: standard deviation.] The addition of A. nigra to the glucose perfusate resulted in a substantial decrease in intestinal glucose absorption during the whole experimental period (P < 0.05).
DISCUSSION
There are several tests available for screening the hypoglycemic effect of a sample or drug. However, the OGTT is usually considered more suitable for screening impaired glycemia, because it detects the changes in postprandial glycemia that tend to precede changes in fasting glucose. All currently established diagnostic procedures for diabetes rely on a threshold value imposed on a continuous distribution of blood glucose levels. Yet, the exact glycemic threshold that discriminates "normal" from diabetic is not obvious. That is why screening for undiagnosed type 2 diabetes remains a controversial issue, although there is clear evidence that once the disease is identified, complications can often be prevented in many diabetic patients [19,20]. The OGTT measures the body's ability to use glucose, the body's main source of energy, and can be used to diagnose prediabetes and diabetes. The ethanol extract of leaves of A. nigra (EEAN) showed a significant ability to reduce the elevated glucose level in normal mice compared to the standard drug glibenclamide. At a dose of 800 mg/kg, EEAN showed the highest hypoglycemic effect, decreasing the blood glucose level by 20.82% 2 h after administration in glucose-induced mice, whereas glibenclamide (5 mg/kg) decreased it by 40.82%. At a dose of 400 mg/kg, EEAN decreased the blood glucose level by 11.43% 2 h after administration in glucose-induced mice.
The activity of A. nigra extract as an antidiabetic agent and its possible mechanism were investigated in non-diabetic rats. Postprandial hyperglycemia is undesirable because it increases glycation products, such as methylglyoxal, which play a role in the development of diabetic vascular disease [21]. Acute elevation of glucose also increases coagulation [22] and leads to multiple disturbances in endothelial cell function [23]. It is known that high-fiber diets improve glucose tolerance in diabetes [24]. This effect may be attributable to delayed gastric emptying, increased intestinal transit, or modification of the secretion and action of digestive enzymes [25]. The hypoglycemic activity found when the extract is given with a concurrent glucose load in diabetic rats indicates that the extracts may interfere with intestinal glucose absorption in the gut by various mechanisms [26]. In the present study, the effect of A. nigra extract on carbohydrate digestion and absorption in the gut was assessed. This was investigated by a gut perfusion experiment in which the ethanol extract produced a gradual decrease in glucose absorption.
Since the glucose-lowering effect of A. nigra was clearly evident from previous study reports, inhibition of glucose absorption may be a possible mechanism responsible for the hypoglycemic effect [27]. Our study supports this, because when A. nigra ethanol extract was given together with a sucrose solution, it significantly increased sucrose retention in the gut compared with the group of rats fed the sucrose solution alone. Similar in vitro studies carried out with high concentrations of metformin also showed such inhibition of glucose absorption [28]. Flavonoids and tannins have been reported to produce antidiabetic activity [29]. This antidiabetic property has been linked with the ability of polyphenolic tannins and flavonoids to inhibit the α-glucosidase enzyme [30]. Our study confirmed this claim, since the disaccharidase enzymes of rats treated with A. nigra ethanol extract showed significant dose-dependent inhibition of activity compared with the controls.
CONCLUSION
The present study demonstrates that the ethanol extract of A. nigra produced a marked decrease in blood glucose level 2 h after administration in glucose-induced mice and significant inhibition of carbohydrate digestion and absorption, which accounts for the observed hypoglycemic effects of A. nigra. Thus, A. nigra may be a useful dietary adjunct for the treatment of diabetes. Further study is necessary to investigate its pancreatic action.
"year": 2016,
"sha1": "98d3a8a947e75809a39fc6af36052d98495f5709",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.5455/jice.20160307112256",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "98d3a8a947e75809a39fc6af36052d98495f5709",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
Labor Recruitment and Coloniality in the Agricultural Sector: On Plantation Archives, Underclassing, and Postcolonial Masculinities in Switzerland
This study provides insights into mechanisms of underclassing in modern society based on interviews with recruiters of agricultural workers in Switzerland. I show that narratives that racialize and ethnicize workers are nurtured by colonial legacies. This reveals that plantation practices and discourses have shaped Switzerland and remain as powerful means of enforcing agricultural racial capitalism. Furthermore, I argue that postcolonial masculinities drive these intersubjective relations. Tracing and situating these postcolonial subject formations on farms allows one to see how caring narratives entangle with a dehumanizing grammar and how this colonial logic is incorporated into social consensus on extractive labor practices. Finally, this reveals how coloniality operates in a postcolonial country that claims political neutrality.
Introduction
-Can you tell me what your job is about and what exactly you do at the Swiss Farmers' Union?
-People have been doing this job for a long time. (...) Earlier, you just needed to bring workers here. For example, the man before my predecessor-he's still alive, he's nearly 100-he took the train to Portugal and came back with a carriage full of people. (...) People came out of the train and they-like they used to do with cows-they inspected their teeth (laughs). No, not like this, more like that-I'll take him! Sometimes someone was left over, he wasn't needed, maybe for five or six days, but then he was also placed/taken care of [versorgt]. That's how it worked. And today we take care of everything. We don't just connect with workers who come here by themselves-we help with all the formalities. (...) Mainly we want to ensure that the working relationship functions properly. We do not want to leave ourselves open to criticism that we don't keep people in good conditions (...). (Interview conducted in June 2017)

This paper discusses interviews with people who mediate the arrangement of agricultural workers in Switzerland. The reference systems that these recruiters activate when talking about agricultural workers from abroad show glimpses of how power relations manifest themselves on Swiss farms. While scholars have emphasized the colonial roots of living and working conditions in the globalized capitalist food supply chain (Cohen, 2019), reflections on the continuing nexus between racialization and agricultural (re)production and the mobility of 'agricultural racial capitalism' (Manjapara, 2018) are absent in relation to Swiss agriculture. Moreover, studies on the recruiters of agricultural workers are rare. Therefore, the research question addressed in this study is whether and how the recruitment and treatment of agricultural workers are enmeshed with the larger question of subjugation under modern power structures.
This study aimed to examine practices and cultural repertoires that are used in the Swiss context to establish consent for agricultural racial capitalism. My investigations reveal how Swiss agriculture is tied to colonial legacies that are visible in recruiters' 'cultural archives' (Said, 1993). Using the decolonial framework, I will show how recruiters' thinking-like-the-market reinforces 'coloniality' (Boatcă, 2013; Grosfoguel, 2011; Lugones, 2007; Quijano, 2001) through a White, colonial culture and, more specifically, through what I call the plantation archive. Based on interviews and on ethnography, the findings indicate that plantation practices and discourses also shaped Switzerland, materializing in working and living conditions and becoming incorporated into the self-representation and subject formation of recruiters and others.
To situate my analysis, I will elaborate on the agricultural sector in Switzerland to give some brief insights into key developments. Neoliberal policies have intensified competition among farmers. The 2015 report from the Swiss Federal Office for Agriculture stated that one-third of Swiss farmers live in an economically precarious situation. Furthermore, working conditions can be very demanding, with working weeks of 48 to 66 hours (Federal Statistical Office, n.d.). When farms are shut down (around 1000 per year), the land is mostly passed on to neighboring farms, leading to structural changes from small farms to medium- and large-scale companies that increasingly employ workers from abroad on a short-term basis (Chau et al., 2015). Around 35,000 people from abroad are estimated to be employed annually in Switzerland around the harvest season (Bopp and Affolter, 2013). Recently, most work permits in the agricultural sector have been issued to workers from Poland, Portugal, and Romania. Bigger farms tend to have a workforce with a more globalized composition, which is based on the workers' different legal statuses. Most people are employed in short-term work. Contracts generally last for 3-9 months, but labor arrangements for as little as a few weeks at a time also exist. The precariousness of these workers is caused by their 'hypermobility' (Bolokan, 2023a, 2023b), and they have little chance of settling in Switzerland (Bolokan, 2020: 59). These workers face labor regimes of rotation, as they are forced to work at different farms in various European countries under multiple contracts or sets of arrangements. They are excluded from employment protections, social security, and rights to which local workers have access (Bolokan, 2020: 58). As Swiss farmers are challenged to produce food amid a capitalist logic of competing national economies and liberalized markets, their demanding conditions are passed on to
temporarily employed workers. This entails outsourcing the costs of the workers' (re)production to the workers themselves and to their communities (Bolokan, 2021). The agricultural sector is, therefore, not just subsidized by direct payments, but also relies heavily on the unpaid (re)productive labor both of workers' communities in global peripheries and of nuclear family members, especially farmers' wives. This unwaged labor on farms and the international division of (re)productive labor build the ground for the underclassing of Swiss society, allowing the Swiss lower and rural classes living and working conditions that are less precarious, or even upward socioeconomic mobility.
The rest of this article is divided into three parts. The first part presents key historical entanglements, which help trace recruiters' repertoires for sense-making and situate the evoked imaginaries in global, transregional, and local histories. In this part, the empirical background and key theoretical and methodological concepts are presented, and Switzerland is briefly introduced as a postcolonial space. The second section analyzes interviews with recruiters of agricultural workers. Based on interview transcripts, I elaborate on the patterns that recruiters activate when they talk about their work, agricultural workers from abroad, and their communities. Here, I elaborate on the politics of naming and the patterns of Othering. The third part expands on the concept of 'cultural archives' (Said, 1993), which I reframe as plantation archives, thereby situating recruiters' reference systems in the entangled histories presented in the first part. Furthermore, I reflect on recruiters' self-representation and intersubjectivity and describe the way agricultural workers' recruitment is inherently marked by 'coloniality' (Boatcă, 2013; Grosfoguel, 2011; Gutiérrez Rodríguez, 2018; Lugones, 2007; Quijano, 2001) and the Swiss particularities of postcolonial masculinities.
Talking with Recruiters
While I was working on Swiss farms between 2011 and 2019, parallel to this ethnographic research, I talked to agricultural workers, farmers, and recruiters to understand, from different angles, the challenges faced by agriculture and the political economy of labor migration in the sector. In this article, I analyze interviews with 10 recruiters who mediate the arrangement of workers to farms. The criteria for choosing recruiters were that they either worked for the farmers' union or facilitated the employment of asylum seekers in agriculture within state-financed programs.
Focusing on recruiters employed in state-financed institutions resulted in a homogeneous group in terms of gender (male), 1 citizenship (Swiss), and background (rural). Those to whom I talked worked in different German-speaking cantons, were between 47 and 62 years old, and held views from across the political spectrum, from left to right. Most mediated the arrangement of workers as part of their gainful employment; one of them, who had been unemployed for a long time, arranged work for asylum seekers at orchards as part of his volunteering. They mainly recruited workers for small- and medium-scale farms (the most widespread), as big farms employed staff specifically for this task.
When I was reaching out to the recruiters for interviews, I expressed my interest in their work and the overall challenges they identified in the sector.The interviews were conducted in German and Swiss German; they were transcribed and partially translated into English.When coding the interview transcripts, I was guided primarily by the question: What characterizes the recruiters' patterns and politics of referring to agricultural workers?
Global Entanglements and Coloniality
Coloniality of Labor. Cedric Robinson (1983) coined the term 'racial capitalism' to point out that capital-driven exploitation has evolved from racial slavery and settler farming (Vekemans and Segers, 2020) and is therefore based on racialized violence and the West's coercive extraction of raw materials and labor from the colonized parts of the world (Frank, 1967; Wallerstein, 1979). Elaborating this point further, Kris Manjapra (2018) shows that the formal abolition of slavery in the British Empire led to a 'new system of slavery' (p. 375), thereby spreading the plantation complex on a worldwide scale. This means that, through the institutions of capital and the control over land and labor that were first initiated on Caribbean plantations and were spread to other parts of the world through racialized categorizations of workers, capitalist modes of production were stabilized and 'agricultural racial capitalism' was established worldwide (Manjapra, 2018: 375).
The decolonial school of thought allows us to theorize the continuities of these global power relations using the concept of 'coloniality of power' (Quijano, 2001), which grasps the past and current power matrices that constitute modern societies. Building on Anibal Quijano's 'coloniality of power', Manuela Boatcă (2013) deepens the concept of 'coloniality of labor' to encompass coexisting modes of labor control over time and space. Boatcă (2013) also shows that the extraction of raw material from 'Eastern' Europe by 'Western' Europe, which had already existed in the mid-15th century, was accompanied by rural coercive labor in 'Eastern Europe'. Boatcă (2013) draws parallels between different systems of coercive labor relations over centuries in Abya Yala and Europe (first/second slavery and first/second serfdom) (p. 321). This allows us to situate the 'quasi-colonial relationships' between 'Eastern' and 'Western' Europe in the global, colonial context (p. 305) while deepening a local perspective on 'racial capitalism' within Europe.
The perpetuation of this colonial logic and the 'East-West' relationship within Europe beyond the 15th century was also evident in the history of 'internal colonization' (Ha, 2008) in Prussia, for example, at the beginning of the 20th century. Discourses and practices during the Wilhelmine Period demonstrate the adaptation of a 'plantation logic' (McKittrick, 2013) to these specific political conditions. The time was characterized by high industrialization, which made Imperial Germany a major capitalist power through both external and internal colonization, and by the rise of the workers' movement. In the period leading up to World War I, the German Reich became the second-largest labor-importing country in the world, just after the United States. In 1910, 1.26 million workers came from abroad, with two-thirds of them from the Polish regions of Austria-Hungary and Russia (Ha, 2008). The historian Klaus J. Bade has argued that these regions became the 'free hunting grounds' of Prussia. While the colonial nation was establishing rule in its colonies, Prussia employed workers from 'Eastern Europe' under conditions that amounted to the 'existence of lawless wage slaves' (Ha, 2016; Herbert and Hunn, 2007).
Within this context, workers were racially marked as 'low-ranking Slavs', humiliated as 'submissive', 'stupid Polacks', and classified as 'born earth workers' (geborene Erdarbeiter) and as 'Wulacker' (from the German word wühlen, to grub). Thus, the position that these workers were expected to occupy within society was naturalized by ascribing a 'race'/'ethnicity' to them and arguing that these 'Slavs' were particularly well suited to heavy work in the fields. Such racist narratives served to legitimize the exploitation of the recruited workers and their exclusion from rights to which local workers had access. Later, the National Socialists adopted this plantation logic and further developed these narratives. Those degraded as 'Slavic subhumans', 2 now called Fremdarbeiter, were seen as those who must work for the Aryan 'master race'.
Agricultural labor relations in Europe since the mid-15th century reveal similarities to and differences from overseas plantation regimes and give insights into how coloniality and the inherent racialized/ethnicized international division of (re)productive labor evolved within Europe, parallel to 'the Maafa' 3 (Ani, 1994: 583). Thus, the notion of 'coloniality of labor' includes global colonial power relations beyond the 'world plantation belt' 4 (Courtenay, 1980). Furthermore, it emphasizes that plantation regimes and colonial logics continue to be an integral part of the contemporary global division of labor (Grosfoguel, 2002).
Coloniality of Migration. Coloniality is also inscribed into migration, as migration established colonial empires and settler societies. Through the colonial migrations after World War II, the coloniality of power structured metropolitan areas. As Ramón Grosfoguel (1999) points out: 'No colonial Caribbean migration passed unnoticed in the European imaginary' (p. 414). According to him, people migrating from former colonies are colonial subjects not only due to their long colonial relationship with the metropole, but also due to their current stereotypical representation in the European imagination, which is reflected in their subordinated location in the metropolitan labor market (Grosfoguel, 1999: 414).
Grosfoguel thereby emphasizes that the most marginalized and exploited classes in colonial settings have always included a subproletariat from the colonized, global peripheries and that this division continues to structure societies through labor migration management.
To grasp these dimensions and the inherent colonial logic within asylum policies and migration regimes in modern societies, Encarnación Gutiérrez Rodríguez (2018) has introduced the concept of 'coloniality of migration': Migration regulation ensures that the Other of the nation/Europe/the Occident is reconfigured in racial terms. The logic generated in this context constructs and produces objects to be governed through restrictions, management devices, and administrative categories such as 'refugee', 'asylum seeker', or a variety of migrant statuses. The coloniality of migration operates within this matrix of social classification on the basis of colonial racial hierarchies (p. 24).
Amy Niang (2020) even argues that enslaved people of the past and migrant workers from former colonies today are connected through their status as stranded bodies, which share the ontological condition of atomized and 'erased subjectivities' (pp. 5, 7). She elaborates: The distinction between 'Africans' and 'migrants' is immaterial, for 'slaves', 'Africans' and 'migrants' are vague categories that are caught in the same ontological, temporal lapse. Africans and other black and brown migrants carry their colonial condition, thus their former subject position, as a liability that stands in the way of the recognition of a legal subjecthood. They are permanent outsiders, if not reliquaries of the human category. (Niang, 2020: 339f) Such 'erased subjectivities' are institutionalized in Switzerland in a legal regime called Nothilfe (emergency aid) for persons with a legally binding negative asylum and deportation decision. Whoever comes under this regime of aid must live under devastating conditions in emergency camps and is not allowed to engage in gainful employment. People are thereby legally prevented from realizing their dreams and living in dignity and community. These places have been described as spaces of internal border demarcation, as they partially include rejected refugees in the welfare system, while their status excludes them from society (Marti, 2023). While these processes of confinement come along with processes of Othering and 'demonization' (Marti, 2023: 186), they are discursively accompanied by the inability of public debates in Switzerland to name the racism and disenfranchisement of asylum seekers (Wilopo and Häberlein, 2023: 92). As the regime of Nothilfe represents a regime of postcolonial aid in which welfare and confinement intersect, it is one mode in which coloniality operates in a postcolonial country that claims political neutrality.
The asylum-migration nexus allows us to see how asylum and migration policies produce hierarchical legal categories by marking and managing some people on the move as migrants and others as refugees. The coloniality of migration framework emphasizes the continuities of managing and controlling people within orientalist/racialized/ethnicized practices.
Coloniality of Gender and of Being. Not only did 'a European/ capitalist/ military/ Christian/ patriarchal/ white/ heterosexual/ male' arrive in the Americas (Grosfoguel, 2011: 8), but this very subject formation also impacted the colonizing societies. From a decolonial feminist perspective, the analysis of modern subject formations must be intersectional. Gendered oppression, in this understanding, is neither separable from nor secondary to racialized oppression; both constitute each other (Lugones, 2007; Mendoza, 2015). In a dialog with Oyèrónkẹ́ Oyěwùmí (...) and under the term 'coloniality of gender', María Lugones (2007) argues that gender as such is a colonial imposition on the colonized; colonialism not only erased manifold conceptualizations of sexualities and gender in the colonies, but it also imposed new ones (p. 186).
While these gendered and racialized interlocking systems of oppression produce various colonial subjectivities, some people are deprived of their subject status within this process through enslavement and owing to the colonial matrix of power: In using the term coloniality I mean to name not just a classification of people in terms of the coloniality of power and gender, but also the process of active reduction of people, the dehumanization that fits them for the classification, the process of subjectification, the attempt to turn the colonized into less than human beings. (Lugones, 2010: 745) Accordingly, the evolution and global enforcement of entangled global hierarchies to justify European colonialism was a dehumanizing force. This will to colonize, according to Enrique Dussel, is a highly gendered will. He argues: The European subject who begins in the mode of 'I conquer' and reaches its climax in the 'will to power' is a masculine subject. The ego cogito is the ego of a male. (Dussel, 1977, in Maldonado-Torres, 2007: 264) As the condition of modern subject formation is inherently colonial, it can be seen as having an ontological prescription, which led Maldonado-Torres (2007) to utilize the 'coloniality of being' as a framework to grasp the way coloniality has entered all spheres of human being, sustainably shaping modern subjectivity: 'In a way, as modern subjects we breath coloniality all the time and everyday' (p. 243). Moreover, he elaborates: The role of skepticism is central to European modernity. And just like the ego conquiro predates and precedes the ego cogito, a certain skepticism regarding the humanity of the enslaved and colonized sub-others stands at the background of the Cartesian certainties and his methodic doubt (p. 245).
This 'imperial attitude' of the ego conquiro, which claims ownership over people, has defined the 'modern Imperial Man' and constitutes current modern being (Maldonado-Torres, 2007: 245).
Colonization forms diverse modern subjectivities. Gendered hierarchies have been imposed on colonized societies in a way that had not existed before colonization, and a modern European identity evolved around the 'ego conquiro'-'a phallic ego'-that has been formed and fabricated around the 'certainty of the self as a conqueror' (Maldonado-Torres, 2007: 245). This modern European identity is characterized by a permanent suspicion of the humanity of those who are pushed toward the bottom of entangled global hierarchies.
Local Configurations of Coloniality: Postcolonial Switzerland
As early as 90 years ago, it was argued that, owing to its politics of neutrality, Switzerland profited more from imperialism than the great European colonizing nations (Behrendt, 1932). Almost a century later, many studies have elaborated on the involvement of Swiss actors in the transatlantic trade of enslaved people and goods, their participation in plantation economies, and their role in fighting uprisings of enslaved people, such as in Haiti (Brengard et al., 2020; Cooperaxion, n.d.; David et al., 2005; Fässler, 2005). Furthermore, scholars have investigated the history of Swiss settlers and their civilizing missions overseas and the cultural and nonmaterial aspects of Swiss 'colonialism without colonies' (Purtschert et al., 2016). In addition, researchers have increasingly analyzed colonial legacies and the manifestation of uneven racialized power relations in current society. They offer insight into structural and everyday racism in Switzerland (Wa Baile et al., 2019) and the daily reproduction of Whiteness and White 5 supremacy (Iso, 2008). However, no studies on colonial legacies in agriculture exist.
Modern, Liberal Subject Formation, and Postcolonial Masculinities. Patricia Purtschert has shown that Swiss history cannot be grasped without colonial and gender history. Purtschert (2019) demonstrated that the discursive production and hegemonic enforcement of new gender norms in 20th-century Switzerland were based on different 'colonial fantasies' and specific Swiss ways of Othering (p. 304). The development of a common Swiss national identity based on heroic images of White colonial masculinities is of particular interest to this study.
In the 1950s, Swiss mountaineers entered the international fray by being among the first to ascend the highest mountains in the world. Competing in the male-only sphere of the so-called 'death zone' in the Himalayas enhanced traditional colonial images of White masculinity associated with adventure, courage, and claims to leadership and ownership. Popular reporting on Himalayan expeditions postulated similarities between Sherpas and the Swiss. The neocolonial division of labor between the Swiss and the Sherpas was overwritten with a new variant of a colonial imaginary. According to Purtschert, relations between European and non-European men appear in that context as a particular type of neocolonial register marked by partnership and friendship, while the colonial asymmetry between these men was maintained and remained unquestioned. Performing partnership and friendship made it possible for the Swiss not to surrender their White supremacy. This mirrors the position that Swiss actors sought to occupy after decolonization ended formal colonialism: the position of being neutral mediators free from colonial entanglements, of engaging in a globalized world while claiming their White supremacy in the most natural way (Purtschert, 2019: 70).
On Servitude and 'Internal Others' in Switzerland. Purtschert (2019) also proposes a postcolonial reading of the violent Swiss history of administrative detention (Administrative Versorgung) and the existence of Verdingkinder (indentured child laborers) (p. 320). Schär (2007) similarly argues for the need to understand the history of the Yenish people in Switzerland as entangled with colonial racism, and thus the need to reflect on policies and practices that have created internal Others in Switzerland (p. 14).
Since 1926, Yenish children were taken away from their parents to be 'educated' into becoming settled and 'hardworking' citizens (Galle, 2016: 15). Following the strategy of re-education as a civilizing mission, they were placed (plaziert) in institutions or with foster parents (Fremdplatzierung). Though this happened to many impoverished children in Switzerland, it occurred in Yenish families systematically: children were taken away and brought to farming families, where they had to work and were indeed enserfed as Verdingkinder (Galle, 2016: 487). Josef Jörger, a racial hygienist, published the journal Archiv für Rassen- und Gesellschaftsbiologie (archive for racial and social biology), in which Yenish communities are characterized as a 'Vagantenstamm' (vagrant tribe). He said that 'those who came into a healthy environment at an early age, or whose mothers came from well-behaved families, have for the most part found their way back to the human community'. Jörger thus explicitly excluded the Yenish from the category of human (Leimgruber et al., 1998: 35, 60). While these children became farmhands (Knechte), their parents were sent to prisons, so-called labor and correctional institutions (Arbeitserziehungs- und Korrektionsanstalten), or even psychiatric clinics, and were forced to undergo eugenic sterilization. The aim of erasing the 'inferior' categories of the population was 'a strongly gendered practice' (Mottier, 2006: 258); so were the dehumanizing discourses. Yenish mothers' love for their children was described as 'very primitive, not to say animalistic' (Leimgruber et al., 1998: 35) or as 'monkey love' (p. 36).
As Swiss history is marked by practices of serfdom that carry the inherent colonial logic of dehumanizing Otherness, I argue that this specific culture of sense-making, which places certain groups in society as 'not-yet-humans' and enforces their assimilation through so-called welfare policies, shows how practices of dehumanization and care can intersect. This form of Othering-through-dehumanizing-care, or care-Othering (Ver-Andern durch Ver-Sorgen), represents a specific Swiss way of Othering, in which welfare policies are intertwined with eugenic practices and servitude. Thus, where welfare intersects with racism, care is not innocent. The term placing people, the care practices, and the accompanying cultural repertoires in Swiss history are deeply entangled with dehumanizing worldviews related to the civilizing mission and to enforcing hegemony and a Protestant work ethic onto Othered people.
These insights into Swiss history show the unique experiences of assimilation and annihilation that have only been talked about in public for some years. They also reveal similarities in motives and motivations globally. Further instances of child removal, such as those in Australia, where indigenous children were kidnapped, assimilated, and exploited for their labor (see 'Stolen Generation'), support the need to read caring narratives against the grain and to situate such practices in global colonial legacies. This proves that cultural repertoires in Switzerland are local instances of coloniality and colonial care.
The Plantation Archives
I conceptualize the plantation archives to grasp the daily presence of colonial aftermaths on farms. I rely on Grada Kilomba's (2010) concept of 'plantation memories' and on Edward Said's and Gloria Wekker's notion of the 'cultural archive' (Said, 1993).
Based on Essed's notion of everyday racism and on Sigmund Freud's theory of memory, Kilomba (2010) developed the notion of 'plantation memories'. She thereby connects what she calls 'episodes of everyday racism' in Germany to the country's colonial history (p. 132). On this basis, I theorize the plantation logic in the Swiss context. According to Kilomba (2010), plantation memories do travel. In these moments, . . . the colonial past is memorized in the sense that it was 'not forgotten'. Sometimes one would prefer not to remember, but one is actually not able to forget. Freud's theory of memory is in reality a theory of forgetting. It assumes that all experiences, or at least all significant experiences, are recorded, but that some cease to be available to the consciousness as a result of repression and to diminish anxiety; others, however, as a result of trauma, remain overwhelmingly present. One cannot simply forget and one cannot avoid remembering (p. 132).
Thus, in moments when people experience racism, they are thrown back into a colonial setting with its total asymmetry of power. The dichotomy of being the master and being enslaved is symbolically restored. In these moments, colonialism is experienced as real. The suddenness and unpredictability that characterize the experience of everyday racism are a central characteristic of trauma. The past becomes the immediate present (Kilomba, 2010: 95).
In White Innocence, a work on the dominant White Dutch self-representation, Gloria Wekker uses Said's concept of the cultural archive to analyze colonial legacies in everyday life: . . . the cultural archive is located in many things, in the way we think, do things, and look at the world, in what we find (sexually) attractive, in how our affective and rational economies are organized and intertwined. Most important, it is between our ears and in our hearts and souls. (Wekker, 2016: 19) All spheres of present society are therefore to be examined for their colonial content and 'their racialized common sense' (p. 19). The cultural archive is 'silently cemented in policies, in organizational rules, in popular and sexual cultures, and in common sense everyday knowledge, and all of this is based on four hundred years of imperial rule' (p. 19).
Therefore, the plantation archive is understood as a specific cultural archive derived from plantation regimes that is deeply incorporated into society by various means, such as modern food production regimes and agricultural practices under capitalism. Nurtured by (de)coloniality, it impacts meaning-making repertoires and the everyday on farms. Plantation archives carry memories that reveal not only forms of exploitation but also practices of resistance against subjugation. Since plantation archives impose hegemony and traumatize or empower people, they shape thinking and organizing principles.
Politics of Naming
Agricultural workers are referred to by their nationality (e.g. 'the Swiss', 'people from Afghanistan'), their continental origin (e.g. 'those from Africa'), their legal status (e.g. 'refugees'), their anticipated racialized/ethnicized belonging (e.g. 'those Slavic guys', 'ethnic Germans' from Romania) and/or explicit stereotypes (e.g. 'southern type of guy'). They are seldom referred to as individuals but rather as differently Othered and homogenized groups.
Reference-making was also based on outdated terms for workers' positions, such as 'Knechte' [farmhands]. Here, I asked what the recruiter would look for when employing people from abroad, and he said: You must have different skills as a farmhand [Knecht] (...) and are not merely useful as a harvester. (Interview conducted in March 2018) In this example, uneven power relations manifest themselves. Arguably, workers cannot be 'just' harvesters but must aim to become 'real' employees by adopting 'different skills'. The logic of this narrative is that even if workers acquire these skills, they remain 'farmhands'. Though this statement is constructed without the explicit intention of devaluation, it shows a patronizing attitude toward 'non-Swiss' workers: being a 'farmhand' is the position agricultural workers from abroad ought to occupy.
Workers from abroad were also referred to as Fremdarbeiter [foreign workers/alien workers].
We depend on foreign workers [Fremdarbeiter]. Especially in the vegetable and fruit sector. No Swiss wants to work for this wage and under these working conditions. (Interview conducted in December 2017) While the recruiter describes how Swiss citizens do not work for 'this wage', he reduces farm workers not holding Swiss citizenship to their 'foreignness', using World War II terminology. Furthermore, the recruiters referred to workers' families from abroad and their communities as 'Sippe' [tribes]. Such references did not appear when recruiters talked about Swiss citizens; nor did they refer to them as 'Knechte'. Fellow citizens have another status: that of employees or members of a nuclear family. The terminology for workers from abroad implies a relation marked by uneven dependency, which derives from past serfdom-like rural power relations that no longer exist legally but are unconsciously invoked.
Patterns of Othering
Three essential ways of Othering emerged in the interviews: the logics of reification, dehumanization, and racialization/ethnicization. Although Othering narratives varied, they were tied to generalizing statements. The common logic of reification appeared in an objectifying use of language, a technical kind of thinking, and references to individuals, groups of people, or entire regions as if they were commodities. One person said: Bulgarians and Romanians-those are the cheapest countries to get the people. But they will also be exhausted soon. (Interview conducted in May 2018) Moreover, the objectification comes with dehumanization, using a language that originated from animal husbandry: Mainly, we want to ensure that the working relationship functions properly. We do not want to leave ourselves open to criticism that we don't keep people in good conditions (...). (Interview conducted in June 2017) This logic of dehumanization was framed in diverse ways but did not differ in its conclusion. Here, again, dehumanizing narratives did not appear in explicitly devaluating statements. They were couched in a caring habitus, meaning that the Swiss farmers, the Farmers' Union, or the Swiss system took good care of the workers. In some cases, the line between implicitness and explicitness in the narratives blurs, and they are interrelated, as follows: My brother also gave them lunch and in the end a tenner. (...) They are human beings after all (...). (Interview conducted in March 2016) Here, the logic of dehumanization within the narrative appears implicitly. Though the human status of those workers is explicitly recognized, the fundamental fragility of their humanity is implied.
Workers are Othered through a racializing/ethnicizing logic and through a perceived cultural and biological regime of truth making. The justifying narrative was mainly constructed around perceived work ethic. Additional narrative constructions addressed legal status, presumed corporality, and abilities such as 'mindset', language skills, or nationality. For example: In the East-in Eastern countries-you can feel quite a difference. People from former Yugoslavia-they can really work. Those Slavic guys, right?! The Portuguese are much more like the southern types of guys. (Interview conducted in November 2017) Othering narratives did not always follow strict patterns, nor were the constructed storylines consistent or coherent. They were, however, guided by certain rules and shared several dominant discourses. While individual statements appeared arbitrary, the 'racial grammar' (Wekker, 2016: 105) that established hierarchies among workers was consistent throughout the narratives. The operating logic of this 'grammatical rule' is that the narratives put workers into hierarchical relations, with the speakers and their fellow citizens at the top of the constructed hierarchy. One such instance is the following: We have a different work ethic. We work harder. It's just like that. (Interview conducted in March 2017) The dominant, but not persistent, line of division was drawn between 'those from the east' (Recruiter)-often referred to as 'the foreign workers' [Fremdarbeiter]-and 'those from Africa' (Recruiter)-mostly addressed as 'the refugees' (Recruiter). Both groups were constructed according to their presumed values, work ethic and abilities. A good work ethic was often emphasized when talking about workers from 'the east': What is now coming from the east is not political, it is the wage gap. (. . .) I think those further up are more hardworking. (Interview conducted in November 2017) One recruiter emphasized that the best workers come from specific 'tribes'. When recruiters talked about workers from 'Africa', their weak position as 'refugees' within the agricultural labor market was brought forward, but the underlying power relations were not reflected upon: I think that migration from Africa will increase for a long time, but today we cannot really imagine how, (. . .) But I have not yet found any jobs for Black people on a farm. But it is easier to motivate people from Afghanistan and Pakistan. Now we cannot employ these people [legally], except as refugees. But I am convinced there is potential, though we do not know how to profit from this situation. (Interview conducted in April 2016) I also know farmers who say 'I certainly wouldn't employ a refugee, even if I could have him for free'. (Interview conducted in March 2017) Though the reason for this refusal is racism, it was never called racism; instead, the difficulty in mediating the arrangement of '[Black] refugees' was attributed to their supposed lack of motivation: I just don't believe they are so motivated to work in agriculture. (Interview conducted in December 2017) In many narratives, good workers could be anyone but '[Black] refugees'. They could enter the sphere of labor relations only if they explicitly 'obeyed' (Recruiter): One farmer heard that we have willing asylum seekers that obey [parieren]. He employed one. But then, our association promised [the asylum seekers] that they would also talk together and eat together, which was a travesty [Hohn]. Here, the recruiter's narrative points to the manifestation of racism when he says that the farmers were even prevented from eating and talking together; it reveals how structural and institutional racism is maintained by individual and interpersonal racism. What is also reflected here is the recruiter's self-representation. He can identify racism, or at least read the contemptible behavior.
However, he reproduces it by constructing asylum seekers as submissive people and by marking a kind of ownership over them ('we have').
Establishing hierarchies between workers is a crucial rule within racial grammar. Hierarchies are also established between national educational systems and the quality of work in general. Thus, not only is the Swiss agricultural educational system winning the race, but within this grammar, recruiters construct the 'Swiss' themselves as the winning, unmarked 'race'.
In summary, the main rule of this racial grammar was to create hierarchies between workers. This was facilitated by assigning different abilities and work ethics to homogenized, racialized/ethnicized, differently Othered, and even dehumanized groups. This order assigned fixed places to workers, thereby enabling the non-racialized 'Swiss' to remain at the top and creating the self-image of Swiss people as those with the best work ethic and the most knowledge about farming. Two forms of racism were dominant: anti-Black racism and anti-Slavic racism.
On the Entangled Histories That Nurture the Plantation Archive
In the quote with which this manuscript began, the first image that came to the recruiter's mind when I asked him to reflect on his work was that of his predecessor and his experiences. He says: 'People came out of the train and they examined their teeth as if they were cows (laughs)'. This is the traditional line of recruitment practices in which he chose to inscribe himself. This supposed joke recalls selection practices in times of enserfment and enslavement. Moreover, it reminds us of the settlers' attitudes and the dehumanizing ways of referring to those whom they subjected. Thus, the narrative becomes a practice of taking reference, which Fanon (1963) describes as follows: 'In fact, the terms the settler uses when he mentions the native are zoological terms' (p. 42). Here, it is not a zoological language but the language of cattle breeding ('keep people in good condition', see quote).
While this reveals the colonial legacy of recruitment practices and a colonial-racist mindset, making a joke in this context functions as a 'master suppression technique' (Ås, 2004). According to Wekker (2016), 'one of the characteristic ways to bring racist content across is by using humor and irony' (p. 26). Wekker's description of the Dutch context is also relevant to Switzerland, where the production of racism through humor has a long tradition (Jain, 2014), one that is deeply inscribed in Swiss culture and passed on from childhood (e.g. Globi and the 'white n.' in children's books, Purtschert, 2012). This joke can also be interpreted as an initiation ritual, as it occurred at the beginning of our conversation, when affinity and sympathy could be created. As a joke between White people-the recruiter perhaps assumed our common Whiteness-it represents a moment of possible consent-making, of checking out a shared but unmarked ideological disposition. However, the so-called guest workers experienced the mentioned practice as humiliating at the time (Aeschlimann, 2007).
Experiences of dehumanization were not limited to such moments. People worked in poor conditions, thereby strengthening the Swiss national economy and filling social insurance funds, and, at the same time, were labeled 'uncivilized and wild foreigners' (see Jain, 2020). The logic of the plantation archive enabled colonial-racist insults, such as 'Spaghetti-Indianer' (Spaghetti Indians) and 'braune Söhne des Südens' (brown sons of the South) (Jain, 2020). Signboards stating 'Italians and dogs forbidden' (Jain, 2020) were put up. These iterations show how people adapted the plantation archive to different times and places. This way of thinking about external Others, crucial to the plantation archive, returns in different shades today. As for Switzerland, Christina Späti (2022) has argued that anti-Slavic racism overlaps with antisemitism.
The recruiters subsumed workers as 'the Slavs' and argued that those from 'the east' were naturally laboring peoples, especially those from specific 'tribes' (recruiter's terms). Thus, all these transregional histories of internal colonization that nurture the plantation archive are powerfully mobilized.
Here, discourses on 'the laboring Slavs', which put racialized attention on specific workers, get mixed with those that justify external colonization. Recruiters' mention of workers' communities as tribes stems from the colonizers' allocation during external colonization, when all those communities were subjugated and marked as 'uncivilized tribes'. But the mention of tribes also reminds us of the history of internal Others in Switzerland, as Yenish communities were marked as 'tribes' and researched within the context of eugenic sciences (see Sippenforschung (tribe research)). More local histories about the construction of internal Others, which impact the specific shaping of the plantation archive on which recruiters rely, appear when they talk about 'platzieren' (placing) and 'versorgen' (supplying/caring) (recruiters' terms). These words carry worldviews that cannot be disconnected from the Swiss history of serfdom and forced (dis)placement (Fremdplatzierung and Administrative Versorgung) (see the 'Local Configurations of Coloniality: Postcolonial Switzerland' section).
As we have seen, the unfolding of the plantation archive in concrete terms is always subject to change, experiencing ruptures and (dis)continuities. I argue that its stability results from a dehumanizing grammar which is key both to coloniality, in which the logic of racialization/ethnicization and reification is immanent, and to the notion of the 'coloniality of labor' (Boatcă, 2013: 312), which connects different coercive labor relations outside and inside of Imperial Europe. Plantation archives are not bound to one specific plantation regime or to one form of enslavement or serfdom. In Switzerland, these archives have been nurtured by local histories of dehumanizing internal Others.
On Affects and Structures of Feeling in Plantation Archives
In the following example, I told the recruiter a story that a trainee had told me. A friend of his from Moldova returned home after only 2 weeks because, when it was raining, he had to have lunch in the pigpen instead of eating with the farmer's family in the house-an experience that was obviously humiliating and that made him want to leave. I asked the recruiter what would happen to interns from abroad if they complained about similar experiences. The recruiter told me: In that case we place [umplatzieren] them somewhere else. However, if someone working on a farm with black cows says they want to go to a farm with brown cows, it will not be a reason for us to place them somewhere else. (Interview conducted in November 2017) Despite the brutality in this example, the recruiter's lack of outrage correlates with the solution of simply placing the worker somewhere else, while downplaying the situation with reference to a preferred color of cows. It is not a new farm with respectful people that he aims to find within his narrative. Instead, the narrative constructs the trainee as the problem. While the word placing mobilized here recalls the above-mentioned serfdom-like power relations in Switzerland, the 'structures of feeling' (Wekker, 2016: 2), or rather of not-feeling, are even more telling. This is what Wekker (2016) means when she says that White culture has given itself 'a racial grammar, a deep structure of inequality in thought and affect based on race' (p. 2). This structure leads to the affect of withholding emotions in response to the humiliation described. The affect then turns toward a technical solution.
This affect of emotional withdrawal or 'suspended empathy' (Purtschert, 2019: 130) represents a postcolonial gaze. It allows replacing people-like replacing an object-and intersects with the wider logic of reification and racialization. These structures of feeling, which avoid empathy, are crucial to the plantation archive and linked to what Aimé Césaire has called 'thingification'. According to Césaire (2000), 'no human contact, but relations of domination and submission turn the colonizing man into (. . .) a slave driver, and the indigenous man into an instrument of production' (p. 42). He, therefore, equates colonization with 'thingification' (p. 42). It is this kind of spirit that has been deeply inscribed into the modern subjects who utilize the plantation archives that structure and constitute the recruiter's mindset. As neocolonialism remains tied to thingification, workers and their home regions remain instruments of production. Within these intersubjective relations, both the colonizer and the colonized become inhumane (Césaire, 2000: 42). This modern subject formation-that is, the 'ego conquiro' (Maldonado-Torres, 2007: 245)-thus enters the sphere of inhumanity because he dehumanizes others by making them merely part of the wider infrastructure of the farms. I argue that the example of 'suspended empathy' (Purtschert, 2019: 130) presented here mirrors a specific emotional regime deeply inscribed in Swiss culture. It is a dehumanizing gaze that comes along with a caring patron habitus and represents a matrix reminiscent of the Swiss history of Othering-through-dehumanizing-care (see the 'Local Configurations of Coloniality: Postcolonial Switzerland' section). I propose to situate such moments of 'suspended empathy' as everyday instances of coloniality that, at the micro level, mirror the wider 'superstructure' (Marx and Engels, 1846: 36; Fanon, 1963: 40). This recruiter repeated throughout the interview, 'I am not a racist' or 'nationalist'. Similar formulations can be found in most interviews (e.g. 'I have no prejudices, but . . .', 'I am open. I come from a different canton, but . . .'). According to Bonilla-Silva (2006), such narratives 'act as discursive buffers' (p. 57), deployed when someone's statement involves, is interpreted as, or could be interpreted as resting on racist assumptions. With these buffers-crucial to today's White culture-speakers fail to address racism, since '[c]olor-blind racism's race talk avoids racist terminology and preserves its mythological nonracialism through semantic moves'.
Expressions such as 'I am not racist, but . . .' are everyday occurrences on farms and beyond and serve to initiate racist statements in daily interactions. While they are part of what Essed (1991) calls 'everyday racism', they also mirror people's limited understanding of racism. In the case above, the recruiter says that he is an open person because he has worked abroad. In his understanding, going abroad and having contact with different nationalities prevents him from being a racist or nationalist. This limited understanding obscures structural racism and its colonial legacies, represented on farms in the division of labor and in differentiated access to work permits, rights, and life perspectives.
While such moments mirror the limited will to face one's own racism, one chooses to remain within the culture of 'White innocence' (Wekker, 2016). Furthermore, they represent instances that are caught in a 'regime of raceless racism' (Michel, 2015: 411), widespread in Switzerland.
The Subject Making Use of Plantation Archives: On Postcolonial Masculinities and on Performing Management of Civilization
Plantation archives transcend recruiters' reference systems and manifest themselves on Swiss farms, seen both in the ways Swiss farmers mention workers and in the ways farms are organized. In Meret Oehen's (2020) interviews with farm managers in Switzerland, the same patterns of Othering are present (p. 48f). The interviewees talk about 'placing people' (p. 51) and refer to workers and their communities as 'hoards' (p. 61) and 'tribes' (p. 47). In one case, a farmer says: We have several origins. Not just one tribe [sic]. Because, otherwise: if one of them is dominant and does not do well and you must send him away, others will leave too. (. . .) We do not just take (. . .) from the same tribe or from the same region. (Interview with farmer, see Oehen, 2020: 47) Here, the farmer argues that mixing 'tribes' (meaning people from different origins) helps him maintain sustainable control over workers, because without such mixing there is too much solidarity. Another interviewee reveals a further dimension: The men want to play the boss (. . .). Colleagues have also confirmed this to me. (. . .) I have had very good experiences with women, because women-they toil (laughs)-this is probably also the same in Polish culture. (see Oehen, 2020: 76).
Here, women from Poland are portrayed as submissive and naturally hardworking-a gendered and racializing discourse stemming from the above-mentioned internal colonization (see the 'Global Entanglements and Coloniality' section) and from coloniality. The farmer sees women as easier to control, as they do not aim to be the boss, and therefore as representing the best kind of workers.
Salome Günther (2008), who worked at one of the biggest organic farms in Switzerland and conducted an ethnography there, explains how groups organized by the farmer work together according to nationality. The Swiss farmer, who chooses the main responsible person in each of these groups, calls him 'Häuptling' (tribal chief) (Günther, 2008: 13). The workers on that farm have adopted this terminology, as mirrored in the interviews (p. 31). The division of labor on this farm follows gendered, colonial fantasies. While the manager of the farm remains at the top of the hierarchy in structural terms, he additionally chooses to backdate his farm to the colonial era and to have a tribal chief he can control. Consequently, he takes the position of the White, male settler. This adaptation of plantation regimes, embedded in a globalized agricultural labor market, serves to reconstruct the colonial situation on farms in Switzerland. The White, male subject who projects his self into the sphere of plantation economies to maintain order and control shows impressively how 'settler-colonial masculinities' (Connell, 2016: 307) that were adapted to subdue colonized people have been transferred to present times. According to Raewyn Connell (2016), such transferred masculinity is today represented by a new 'transnational corporate masculinity' (p. 312). To trace the genealogy of these modern subject formations and capital-driven management masculinities, I argue that, following Maldonado-Torres, the 'coloniality of being' is key to understanding these White postcolonial masculinities (Connell, 2016).
The reconstruction and unfolding of the colonial era as presented in the two studies resembles the colonial fantasies of the recruiters in my interviews. While the farmers reenact the roles of White settlers, the recruiters represent themselves as conquerors. In their narratives, they search for new sources (countries, continents); they know which regions have the potential to get new workers, and where the best 'tribes' (in the recruiters' term) can be found to work in agriculture.
Since the neoliberal agricultural sector has turned farms into globalized enterprises and imposed international competition, the colonial projections of farm managers and recruiters serve to maintain their hegemony and follow colonial narratives and practices of managing globalized, racialized/ethnicized, and gendered underclasses. The transference of the plantation logic has turned the dehumanizing trade in enslaved people into a neocolonial register: the management of 'non-human' resources.
The recruiters' 'certainty of the self as a conqueror' (Maldonado-Torres, 2007: 245) remains crucial for White male self-perception and impacts intersubjectivity beyond recruitment; it transcends into the everyday on farms. This certainty, which is inscribed within society and is key to White, postcolonial masculinity, empowers recruiters to recruit people into coloniality through jokes while they claim they are not racists.
Finally, I argue that recruiters' statements such as 'I am not a racist' represent a deep desire to perform the management of civilization and to justify capitalist exploitation. With this statement, the recruiters remain within the logic of coloniality at the same time that they aim to rise above the moral ambiguities of the plantation archives and of White, male, thinking-like-a-slaveholder (Truth, 1994 [1867]: 131) legacies.
Conclusion
As I have shown through looking into the recruiters' interviews, thinking-like-the-market is thinking-like-the-settler/conqueror. As there is no public discourse in Switzerland on racism in agriculture, nor academic reflection on the specific local manifestations of racial agrarian capitalism, this study contributes to the exchange of knowledge on present racism in the cultural imaginary and on neocolonial labor recruitment and living conditions in agriculture.
I have argued that forcing coloniality on the most exploited workers utilizes plantation archives that adapt to times and places. In the Swiss context, these plantation archives are based on the histories of external and internal European colonization and are shaped by local entangled histories of dehumanizing so-called internal Others. Therefore, the analysis allows me to take a geographical, discursive, and epistemic shift, since the enforcement of the plantation logic has so far been located overseas. Looking into local configurations of plantation archives furthermore demonstrates that consent-making to underclassing and to coloniality relies on colonial caring narratives and practices. They are incorporated in postcolonial masculinities which manage and care while still dehumanizing and exploiting.
Those are far from the east. There are such tribes, two or three, close to China. They also look Chinese. (. . .) I met someone from Georgia. (. . .) with caliber. (Interview conducted in May 2017)
'I Am Not a Racist, But . . .': On Self-Representation and Intersubjectivity
What characterizes recruiters' self-representation and intersubjectivity is what Eduardo Bonilla-Silva (2006) calls 'racism without racists'. To illustrate its occurrence, I recall a moment from an interview where the recruiter says: People from Africa very often [hesitates to complete his words]-I have to be careful here-they don't have [again hesitates]. I am absolutely not a nationalist; I'm very open. I have worked abroad for a long time. They are not so motivated to work in agriculture. Many of them feel like, I can travel here and go like this, [snaps fingers] and I will make a lot of money. (Interview conducted in June 2017)
Statistical properties of probabilistic context-sensitive grammars
Probabilistic context-free grammars, which are commonly used to generate trees randomly, have been well analyzed theoretically, leading to applications in various domains. Despite their utility, the distributions which such a grammar can express are limited to those in which the distribution of a subtree depends only on its root and not on its context. This limitation presents a challenge for modeling various real-world phenomena, such as natural languages. To overcome this limitation, a probabilistic context-sensitive grammar is introduced, where a subtree's distribution depends on its context. Its statistical properties are explored. Numerical analysis reveals that the distribution of a symbol does not constitute a qualitative difference from that in the context-free case, but mutual information does. Furthermore, a metric is introduced to quantify the breaking of this limitation directly. This metric, which is zero in the context-free case, is applicable to an arbitrary distribution of a tree. Measuring this metric enables the detection of a distinct difference between probabilistic context-free and context-sensitive grammars.
I. INTRODUCTION
Hierarchical structures underlie many real-world phenomena, including natural languages. A context-free grammar (CFG), a fundamental concept in formal language theory, was originally introduced to analyze hierarchical syntactic structures in natural languages [1]. Furthermore, it provides a basis for describing more general hierarchical structures that are not limited to natural languages. A CFG, defined by a set of production rules, generates strings with trees in a formal way. The strings correspond to sentences, whereas the trees describe the hierarchical syntactic structures behind the sentences. A probabilistic extension of a CFG, known as a probabilistic context-free grammar (PCFG) or stochastic context-free grammar [2], introduces probabilities into the production rules. According to the rules, this model generates trees in a probabilistic manner. This probabilistic grammar has been used to model syntactic structures of a natural language [3] or a programming language [4], and to study many other phenomena with tree or hierarchical structures in fields such as music [5,6], human cognition [7], long short-term memory networks [8], RNA [9], cosmic inflation [10], or more abstract models [11][12][13]. Additionally, other frameworks are closely related to a PCFG, including branching processes and Lindenmayer systems (or L-systems) [14][15][16].
An essential property of a PCFG is that the distribution of a subtree depends only on its root, not on the context, which we will designate as context-free independence. This property allows an exact mathematical analysis of the statistical properties of PCFGs. Indeed, earlier studies have analyzed and resolved various aspects of PCFGs, including the probability of symbol occurrence [17], the correlation function [11,12], mutual information between nodes [8], the mean sentence length [18], entropy [19,20], branching rates [19,20], tree size [20], and the conditions for sentence generation to terminate with probability 1 [18,21]. At the same time, this property is too strict to impose on real-world phenomena. In particular, it is well known in linguistics that some languages in the real world cannot be described using a CFG because of its inability to represent cross-serial dependencies [22,23]. In natural language processing, empirical evidence suggests that a naive parser relying on a PCFG is insufficient for inferring syntactic structures [3]. Moreover, certain parsers that relax context-free independence in technical manners can express more complex distributions and can achieve higher accuracy [24,25]. Outside of language-related domains, the possibility that introducing context sensitivity is useful for describing music has also been discussed [5]. Therefore, the distributions that a PCFG can express are regarded as severely limited.
To understand more realistic phenomena with hierarchical structures, it is necessary to introduce and analyze a model that captures the distribution of a tree beyond context-free independence. For this purpose, one can naturally consider context-sensitive grammars (CSGs) [26], which form the class one level higher than CFGs in the hierarchy of expressive power: the so-called Chomsky hierarchy. Similarly to a PCFG, a probabilistic context-sensitive grammar (PCSG) can be formulated as a probabilistic extension of a CSG. A PCSG explicitly relaxes context-free independence. Consequently, the theoretical analyses developed for a PCFG are not applicable to a PCSG. The statistical properties of a PCSG have only rarely been analyzed, either theoretically or numerically.
To address this point, in this work we defined a simple PCSG and investigated its statistical properties systematically by numerical simulations, mainly examining whether a qualitative difference from a PCFG exists or not. To be more precise, we implemented a PCSG to measure the distribution of a symbol, mutual information between two nodes, and mutual information between two pairs of children of nodes on which the symbols are fixed. Here, we present a comparison of the observed similarities and differences between a PCFG and a PCSG: No qualitative difference was found in the distribution of a symbol between a PCSG and a PCFG. This result suggests that the properties observed in PCFGs are likely to be preserved in PCSGs. Given that the absence of a singularity in the distribution in an ensemble of PCFGs has been proven [17], it is reasonable to infer that PCSGs would not exhibit the singularity, similarly to PCFGs. This singularity is relevant for the discussion on a phase transition in the random language model (RLM) [27,28], which might be analogous to discontinuity in human language acquisition according to earlier research.
However, the behaviors of mutual information between two nodes differ between a PCFG and a PCSG. The mutual information in a PCFG decays exponentially with the distance between the two nodes, i.e., the path length in the tree graph. In contrast, in a PCSG, the mutual information decays exponentially with the effective distance, which is defined by taking the effect of context sensitivity into account.
Additionally, a more pronounced difference concerns the mutual information between pairs of children of symbol-fixed nodes. This novel metric, proposed in this research, quantifies context-free independence breaking. From a theoretical physics perspective, this metric represents the degree to which the network of interactions deviates from a tree structure. Linguistically, it represents the strength of mutual dependence between the structures of two constituents or phrases of given types. This metric not only detects whether context-free independence is broken; it also quantifies where and how strongly the breaking occurs. In a PCFG, the context-free independence breaking is always zero. By contrast, in a PCSG, it is positive and decays similarly to the mutual information between nodes. As a result, the most striking difference between a PCFG and a PCSG lies in this metric. This quantification is intuitive and is definable for any distribution of a tree. Measuring this metric in other mathematical models or real-world phenomena will help deepen the understanding of them by investigating how their behavior differs from that of a PCFG.
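Mutual-information-based quantities like those discussed here can be estimated from sampled data with a simple plug-in estimator over empirical joint and marginal frequencies. The following is a minimal, generic sketch (it is not the paper's specific estimator, and the toy sample pairs are purely illustrative):

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Plug-in estimate of I(X;Y) in nats from a list of (x, y) samples."""
    n = len(pairs)
    pxy = Counter(pairs)                 # empirical joint counts
    px = Counter(x for x, _ in pairs)    # empirical marginal counts of X
    py = Counter(y for _, y in pairs)    # empirical marginal counts of Y
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Perfectly correlated binary symbols give I = log 2; independent
# uniform binary symbols give I = 0.
dep = [(0, 0), (1, 1)] * 500
ind = [(0, 0), (0, 1), (1, 0), (1, 1)] * 250
print(mutual_information(dep), mutual_information(ind))  # ≈ 0.693 and 0.0
```

For two node symbols in a sampled ensemble of trees, `pairs` would hold the symbol pair observed at the two fixed positions in each sampled tree; the estimator itself is agnostic to where the symbols come from.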
Here, we provide a brief summary of the main contributions made in this paper. Our first main contribution is the systematic investigation of a PCSG, which is a simple model for generating hierarchical structures beyond those produced by PCFGs. A key distinction between a PCFG and a PCSG lies in the distance that determines the decay of mutual information. Second, we propose a novel metric for context-free independence breaking, which has not been quantified previously. This metric allows for further quantitative investigation of various hierarchical structures that violate context-free independence. Our results show that this metric decays exponentially for a PCSG while it remains zero for a PCFG, demonstrating the usefulness of this metric.
This paper is structured as follows: The models, a PCFG and a PCSG, are introduced in Sec. II. The analysis of the distribution of a symbol in a PCSG and the argument about the phase transition in the RLM are presented in Sec. III. Then, in Sec. IV, a numerical analysis of the mutual information between two nodes is presented, including the definition of the effective distance. The introduction and analysis of the quantification of the context-free independence breaking are given in Sec. V. Finally, we summarize the results and briefly discuss future work in the last section.
A. Probabilistic context-free grammar
In formal language theory [26], a grammar G consists of a vocabulary V and a finite set R of rules. A vocabulary V, a finite set of symbols, is divided into nonterminal symbols A, B, . . . and terminal symbols a, b, . . .. A rule ϕ → ψ in R means that a finite string ϕ over V is rewritten as another finite string ψ. Also, the left-hand side ϕ of the rule must include at least one nonterminal symbol. The grammar G generates a sentence by the following process: Initially, a special symbol S ∈ V_N, called the starting symbol, is given. Next, S is rewritten by a rule S → ϕ. When a substring ψ of ϕ includes a nonterminal symbol, ϕ can be rewritten by replacing ψ with another string ω according to a rule ψ → ω. This process is repeated. Finally, if the string has no nonterminal symbol, it can no longer be rewritten by any rule. The final string is called a sentence. The whole process of generating a sentence is called a derivation. The set of sentences generated using a grammar G is the language of G. The importance of the finiteness of the sets of symbols and rules is noteworthy. If infinite sets V and R were allowed, then it would become trivially possible to construct a grammar that generates an arbitrary language by introducing a symbol A and a rule A → ϕ for each sentence ϕ in the language. An infinite number of symbols or rules would make the concept of characterizing and classifying languages in terms of grammars irrelevant.
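The derivation process described above can be sketched in a few lines of code. The following toy grammar and its symbols are illustrative only (they are not taken from the paper); the sketch repeatedly rewrites the leftmost nonterminal until only terminal symbols remain:

```python
import random

# Toy CFG: uppercase keys are nonterminal symbols, lowercase strings are
# terminals. Each nonterminal maps to its possible right-hand sides.
RULES = {
    "S": [["NP", "VP"]],
    "NP": [["det", "n"]],
    "VP": [["v", "NP"], ["v"]],
}

def derive(string):
    """Rewrite the leftmost nonterminal until the string is a sentence."""
    while True:
        idx = next((i for i, s in enumerate(string) if s in RULES), None)
        if idx is None:                        # no nonterminal left
            return string
        rhs = random.choice(RULES[string[idx]])
        string = string[:idx] + rhs + string[idx + 1:]

print(" ".join(derive(["S"])))   # "det n v det n" or "det n v"
```

Since this toy grammar has no recursive rule, every derivation terminates; the language it generates consists of exactly the two sentences noted in the comment.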
A grammar G is a CFG [1] if every rule of G is of the form A → ϕ with A nonterminal. The derivation in a CFG can be represented as a tree, which is analogous to the syntactic structure of a sentence in a natural language analyzed by immediate constituent analysis, as shown in Fig. 1. In fact, any CFG can be transformed into an equivalent CFG in which every rule is of the form A → BC or A → a for A, B, C ∈ V_N and a ∈ V_T, ensuring that the generated language remains unchanged. This transformed form is referred to as the Chomsky normal form (CNF) [29].
A PCFG [2] is a probabilistic version of a CFG. It is introduced by assigning a probabilistic weight M_{A→ϕ} to each CFG rule A → ϕ, meaning that a nonterminal symbol A is rewritten as ϕ with probability M_{A→ϕ}. The PCFG specified by the set of weights M_{A→ϕ} determines the probability of a derivation, which is the product of the weights of all rules applied in the derivation. If we adopt the idea of simplifying a speaker, or a group of speakers, of a language as an agent that probabilistically generates strings with syntactic structures, then a PCFG can serve as a simple mathematical model for a language. Indeed, a PCFG has been used for modeling natural language [3] and programming languages [4]. In addition, because a PCFG can be regarded as a simple mathematical model for randomly generating trees or hierarchical structures, many studies have used it as a model not only for natural or formal languages but also for other phenomena [5-9, 11-13]. A PCFG also has close relations to other physical and mathematical frameworks [14-16]. By definition, the distribution of a subtree in a PCFG depends only on its root; it is unaffected by the context, i.e., the neighboring symbols of the root. Because of this context-free independence, many properties of a PCFG can be analyzed theoretically. For instance, the distribution of a symbol, or the joint distribution of several symbols at arbitrary nodes, can be computed recursively from the root of the entire tree, similarly to a Markov chain. Indeed, many earlier studies have analyzed properties of a PCFG theoretically and exactly [8, 11, 12, 17-21]. The context-free independence allows for the theoretical analysis of various properties of PCFGs, but it also severely restricts the range of distributions that a PCFG can express. In general, it is not reasonable to expect a natural phenomenon to satisfy such a restriction. Linguistically, for instance, some real-world languages cannot be described by CFGs
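As an illustration, sampling a derivation from a PCFG in CNF takes only a few lines. The grammar below is a hypothetical toy example of our own (not taken from the paper); each nonterminal carries a normalized table of rule weights, and a derivation tree is sampled top-down:

```python
import random

# Hypothetical toy PCFG in Chomsky normal form: V_N = {"S", "A"}, V_T = {"a", "b"}.
# Each nonterminal maps to a list of (right-hand side, probability) pairs.
RULES = {
    "S": [(("A", "S"), 0.3), (("a",), 0.7)],
    "A": [(("A", "A"), 0.2), (("b",), 0.8)],
}

def derive(symbol, rng):
    """Recursively expand `symbol`; returns the derivation as a nested tuple (a tree)."""
    if symbol not in RULES:               # terminal symbol: a leaf of the tree
        return symbol
    options = RULES[symbol]
    rhs = rng.choices([o[0] for o in options],
                      weights=[o[1] for o in options], k=1)[0]
    return (symbol,) + tuple(derive(s, rng) for s in rhs)

def yield_of(tree):
    """Read off the sentence: the terminals of the tree, left to right."""
    if isinstance(tree, str):
        return [tree]
    return [t for child in tree[1:] for t in yield_of(child)]

rng = random.Random(0)
tree = derive("S", rng)
sentence = yield_of(tree)
```

The probability of the sampled derivation is the product of the weights of all rules applied, as stated above; the branching probabilities here are subcritical, so the derivation terminates with probability one.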
[22, 23]. Moreover, natural language processing researchers have found it necessary, empirically, to relax the context-free independence when modeling the syntactic structures of natural languages [24, 25]. However, the literature contains no systematic investigation of a simple mathematical model that goes beyond this independence, nor a quantitative analysis of the degree to which context-free independence is broken in any model or phenomenon. This gap prompts us to consider such a model and to quantify how far it departs from the independence.
B. Probabilistic context-sensitive grammar
A model introduced by allowing each rule in a CFG to refer to the context is a CSG, whose expressive power in formal language theory is one level higher than that of a CFG [26]. In a CSG, a rule is of the form ϕAψ → ϕωψ. In other words, the result ω of rewriting A can depend on the substrings ϕ and ψ next to A, i.e., the context of A. The class of languages generated by CSGs is believed to be larger than the class of possible natural languages [30]. Additionally, we can naturally define a probabilistic version of a CSG, namely a PCSG, by assigning a probabilistic weight to each rule, in the same way as a PCFG is introduced. A PCSG relaxes the context-free independence: the distribution of a subtree in a PCSG depends not only on its root but also on the context. The theoretical analyses of a PCFG described above [8, 11, 12, 17-21], all of which rely on the independence, are not applicable to a PCSG. Consequently, the behavior of a PCSG and its characteristics, such as which of its properties are similar to or different from those of a PCFG, are unknown.
The class of all possible grammars defined as probabilistic extensions of CSGs is too large and complicated to analyze. We focus, therefore, on a simpler model within the CSG class to examine its behavior. First, we consider a CSG with a vocabulary consisting of binary nonterminal symbols, V_N = {0, 1}. We do not consider terminal symbols; in the following, a symbol simply means a nonterminal symbol unless otherwise noted. Additionally, we restrict rules to the forms A → BC and LAR → LBCR. The former is a nonterminal rule of a CFG in CNF, whereas the latter is a CSG rule whose context sensitivity refers only to the two symbols adjacent to the rewritten symbol. Consequently, the difference between our model and a binary CFG or PCFG in CNF is, in essence, the context sensitivity to L and R. In our notation, A, B, and C represent symbols, whereas L and R can be symbols or the null λ. For example, if the rule λ01 → λ111 is applied to the leftmost 0 in the string 0110, the string turns into 11110.
Our PCSG is defined as the probabilistic extension of this CSG. The probabilistic weight M^CF_{ABC} is assigned to each CFG rule A → BC, and M^CS_{LAR,BC} to each CSG rule LAR → LBCR. Next, we introduce the probability q that a CSG rule is chosen, to control the degree of context sensitivity. More precisely, symbol A in the context LAR is rewritten as BC by a CFG rule A → BC with probability (1 − q) M^CF_{ABC}, or as DE by a CSG rule LAR → LDER with probability q M^CS_{LAR,DE}. A PCSG with q = 0 is a PCFG. Additionally, we must determine the order in which rules are applied to a string because, in a PCSG, unlike a PCFG, a derivation depends on this order. For this study, we choose to apply rules in a uniformly random order as a neutral alternative. If the length of the present string is l, we first generate a random permutation τ of {0, . . . , l − 1} according to a uniform distribution and then apply rules to the symbols sequentially, from the τ(0)-th one to the τ(l−1)-th one. After all symbols of the preceding string are rewritten, the length becomes 2l. The whole procedure to generate a tree is as follows: The first step in the derivation is to choose a symbol from the uniform distribution over V_N. Subsequently, the string is rewritten recursively; at each step, the order of application of rules and each rewriting are determined randomly in the manner explained above. Because no terminal symbol exists in this setting, a rule can always be applied to the string no matter how many steps the derivation goes through. Consequently, we stop the process when the step has been repeated D times, a value determined in advance [31].
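The rewriting step just described can be sketched as follows. This is a minimal sketch under our reading of the procedure: each symbol of the current string is rewritten exactly once per step, in a uniformly random order, and the context (L, R) of a symbol is read off from the string as it currently stands (so an already-rewritten neighbor contributes its nearest new symbol), with `None` playing the role of the null context λ. The uniform weight tables in the usage example are placeholders:

```python
import random

def step(string, q, M_cf, M_cs, rng):
    """One derivation step of the PCSG: every symbol A of the current string is
    rewritten as a pair BC, in a uniformly random order (the permutation tau).
    With probability q a context-sensitive table keyed by (L, A, R) is used,
    where L and R are the symbols currently adjacent to A (None = null context);
    otherwise a context-free table keyed by A alone is used."""
    cells = [[s] for s in string]            # each cell holds 1 symbol, later 2
    order = list(range(len(cells)))
    rng.shuffle(order)                       # uniformly random permutation tau
    for pos in order:
        A = cells[pos][0]
        L = cells[pos - 1][-1] if pos > 0 else None               # nearest left symbol
        R = cells[pos + 1][0] if pos < len(cells) - 1 else None   # nearest right symbol
        table = M_cs[(L, A, R)] if rng.random() < q else M_cf[A]
        pairs = list(table)
        weights = [table[p] for p in pairs]
        cells[pos] = list(rng.choices(pairs, weights=weights, k=1)[0])
    return [s for cell in cells for s in cell]

# Placeholder uniform weights over V_N = {0, 1}; a real run would use lognormal ones.
rng = random.Random(1)
V = [0, 1]
uniform = {(b, c): 0.25 for b in V for c in V}
M_cf = {a: dict(uniform) for a in V}
M_cs = {(l, a, r): dict(uniform) for a in V for l in V + [None] for r in V + [None]}
string = [rng.choice(V)]                     # starting symbol, uniform over V_N
for _ in range(5):                           # D = 5 derivation steps
    string = step(string, q=1.0, M_cf=M_cf, M_cs=M_cs, rng=rng)
```

After D steps the string has length 2^D, matching the doubling l → 2l per step described above.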
Although the discussion in the remainder of this paper is based on the above setting, we have found that the properties of a PCSG remain qualitatively unchanged under alternative settings. For example, the model exhibits similar behavior when each rule refers to two left neighbors and two right neighbors, or when symbols are rewritten in a different order, such as left-to-right or inside-to-outside.
This type of PCSG is specified by the probability q and the weights M = (M^CF, M^CS), where M^CF = {M^CF_{ABC}}_{ABC} and M^CS = {M^CS_{LAR,BC}}_{LAR,BC}. The probabilistic weights are sampled according to lognormal distributions, subject to the normalization conditions

Σ_{B,C} M^CF_{ABC} = 1 for each A,   Σ_{B,C} M^CS_{LAR,BC} = 1 for each triple LAR.

Therein, ǫ is the parameter used to control the width of the lognormal distributions.
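A minimal sketch of the weight sampling. The specific parametrization exp(η/√ǫ) with η ~ N(0, 1) is our own assumption: the text states only that ǫ controls the width, with larger ǫ giving weights closer to the uniform value 1/4, and this form reproduces that behavior.

```python
import math
import random

def sample_weights(eps, keys, outcomes, rng):
    """For each key (a nonterminal A, or a context triple LAR), draw one lognormal
    weight per outcome BC and normalize them into a probability table.
    Assumed form: M proportional to exp(eta / sqrt(eps)), eta ~ N(0, 1)."""
    table = {}
    for key in keys:
        raw = {bc: math.exp(rng.gauss(0.0, 1.0) / math.sqrt(eps)) for bc in outcomes}
        z = sum(raw.values())
        table[key] = {bc: w / z for bc, w in raw.items()}
    return table

rng = random.Random(0)
outcomes = [(b, c) for b in (0, 1) for c in (0, 1)]
wide = sample_weights(1e-2, [0, 1], outcomes, rng)    # small eps: strongly biased rules
narrow = sample_weights(1e4, [0, 1], outcomes, rng)   # large eps: close to uniform 1/4
```

With small ǫ one outcome typically dominates each table, while with large ǫ all four outcomes receive weight near 1/4, as described in the discussion of Fig. 2 below.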
For this study, we are interested in how the introduction of context sensitivity affects the statistical properties of PCFGs. Specifically, we implement PCSGs and conduct numerical analyses of three statistical quantities. The first is the distribution of a symbol at a node, analogous to magnetization in a spin model; this quantity is related to the phase transition in the RLM [27, 28]. The second is the mutual information between two nodes, which is associated with a two-point correlation. Finally, we introduce the mutual information between the children of two symbol-fixed nodes. This metric, which is zero for q = 0 by definition, reflects how strongly the independence is broken.
III. DISTRIBUTION OF A SYMBOL

A. Distribution of a symbol
Primary emphasis should be on the distribution of a symbol on a single node. We denote the probability that symbol A occurs on node i as π_{A,i}(q, M) ≡ ⟨δ_{A,σ_i}⟩_{q,M}, where σ_i is the symbol on node i and ⟨· · ·⟩_{q,M} represents the average over trees under a PCSG with parameters (q, M). This quantity corresponds to the magnetization in the Potts spin model [32], where each site i has a spin σ_i and the magnetization along the direction A is defined by the ratio of sites with σ_i = A. In the case of q = 0, i.e., a PCFG, the context-free independence enables us to apply the concept of Markov chains, so the probability π_{A,i} can be computed recursively. If node i is the left child of node j, then π_{B,i} = Σ_{A,C} M^CF_{ABC} π_{A,j}. If node i is the right child, the same holds with π_{B,i} and Σ_C replaced, respectively, by π_{C,i} and Σ_B. However, this no longer holds in the case of q > 0 because of the broken independence.
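The context-free recursion for π_{A,i} can be checked directly. The sketch below pushes a parent node's symbol distribution one level down for a hypothetical two-symbol grammar; it is valid only at q = 0, where the Markov-chain picture applies:

```python
def child_distributions(pi_parent, M_cf):
    """Push a parent node's symbol distribution one level down in a PCFG (q = 0):
      pi_left[B]  = sum over A, C of M_cf[A][(B, C)] * pi_parent[A]
      pi_right[C] = sum over A, B of M_cf[A][(B, C)] * pi_parent[A]
    This Markov-chain step relies on the context-free independence."""
    symbols = list(pi_parent)
    pi_left = {b: 0.0 for b in symbols}
    pi_right = {c: 0.0 for c in symbols}
    for a, pa in pi_parent.items():
        for (b, c), w in M_cf[a].items():
            pi_left[b] += w * pa
            pi_right[c] += w * pa
    return pi_left, pi_right

# Hypothetical toy weights over V_N = {0, 1}:
M_cf = {0: {(0, 0): 0.5, (1, 1): 0.5}, 1: {(0, 1): 1.0}}
left, right = child_distributions({0: 1.0, 1: 0.0}, M_cf)
# Both children become {0: 0.5, 1: 0.5} here.
```

Iterating this function from the root distribution yields π_{A,i} for every node, which is the recursive computation mentioned above.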
To see the degree to which the distribution of a symbol changes with the context sensitivity, we measured the Euclidean distance ∆ between {π_{A,i}}_{A,i} with q = 0 and that with q > 0, expressed as

∆(q, M) = [ Σ_{A,i} ( π_{A,i}(q, M) − π_{A,i}(0, M) )² ]^{1/2}.

FIG. 2. Differences ∆(q, M) between π_{A,i} with q = 0 and that with q > 0 as functions of q, computed from 10⁴ sampled trees of depth 10. Different colors represent different M's. Panels (a), (b), and (c), respectively, present results for M's generated from the lognormal distribution with ǫ = 10⁻², 10⁰, and 10².
Figure 2 presents the distances ∆ as functions of q for ǫ = 10⁻², 10⁰, and 10². We sampled 20 M's for each ǫ and 10⁴ complete trees for each PCSG, with the depth D of a tree set to 10. These figures show that ∆ increases monotonically and continuously for any M. It can also be observed that the increase is slower for larger ǫ. If ǫ is larger, most of the generated M^CF_{ABC}'s and M^CS_{LAR,BC}'s are near 1/2² = 1/4; as a result, the π_{A,i}'s are near 1/2 for any A and i with most M's, which leads to the slower increase. This behavior of ∆ implies that the context sensitivity drives {π_{A,i}(q, M)}_{A,i} farther away, monotonically and continuously, from that for q = 0, and that no singularity occurs at any point in 0 < q < 1. It is noteworthy that the context sensitivity is not the only factor that can produce this behavior, at least qualitatively. Suppose we interpolate between a PCFG M^CF and another independently generated PCFG M^CF′, instead of an M^CS. Even in this case, ∆ grows similarly with q. In terms of the distribution of a symbol, then, no qualitative difference between a PCFG and a PCSG can be seen.
The observations presented here are for finite trees. However, for most of the 20 M's, ∆ at D = 10 appears to have almost converged to its value in the limit D → ∞. Consequently, it is unlikely that ∆ has a singularity even in the limit of infinite trees. The Supplemental Material provides numerical observations of how ∆ converges as D increases.
B. Order parameter for the random language model
In our case, because the tree topology is always the same, the mean ratio π_A of symbol A in a whole tree is the average of π_{A,i} over nodes i,

π_A(q, M) ≡ (1/N) Σ_i π_{A,i}(q, M),

where N is the number of nodes in the tree. The probability density of π_A attributable to the randomness of M, defined as

P(π_A) ≡ [ δ( π_A − π_A(q, M) ) ]_ǫ,

where [· · ·]_ǫ denotes the average over M's, plays a crucially important role in the discussion of the phase transition in the RLM proposed in [27, 28]. The RLM is defined as an ensemble of PCFGs generated according to the lognormal distribution, which is equivalent to the q = 0 case of our model. An earlier study investigated the possibility of a phase transition characterized by the singularity of an order parameter as the parameter ǫ varies and suggested that the transition could be interpreted as a possible discontinuity in human language acquisition. However, recent findings in [17] have revealed that the singularity of that order parameter, if any, reduces to a singularity of the probability density of π_A, and that this probability density is an analytic function of ǫ for a finite vocabulary. In other words, the phase transition does not exist as long as the number of types of symbols is finite. This conclusion holds for any analytic distribution of M, irrespective of whether it follows the lognormal distribution and of whether the trees are finite or infinite. Because the proof relies on the assumption of context-free independence, it cannot be extended to the context-sensitive case with q > 0. Therefore, whether a phase transition exists in the context-sensitive RLM remains a non-trivial question.
To investigate whether the distribution of π_A in the context-sensitive RLM has a singularity, we measured the Binder parameter of π_A, defined as

B = 1 − [ (∆π_A)⁴ ]_ǫ / ( 3 [ (∆π_A)² ]_ǫ² ),

where ∆π_A ≡ π_A − 1/2 and [· · ·]_ǫ means the average over M's according to the lognormal distribution determined by ǫ. This parameter has been used to detect transitions numerically in various statistical-mechanical models [33, 34]. It is zero when π_A follows a Gaussian distribution and nonzero when the distribution of π_A is multimodal or otherwise non-Gaussian. To compute the Binder parameter, we sampled 10⁴ M's for each ǫ and 10³ trees for each M. Error bars were computed using the bootstrap method [35, 36] with 10² bootstrap sets. Figure 3(a) shows the result obtained when the tree depth is fixed at D = 11 and the context sensitivity is q = 0, 0.25, 0.5, 0.75, or 1. From these findings, the Binder parameter seems to change analytically, although it changes more dramatically when the context sensitivity is larger. Consequently, if a singularity exists, it might occur at q = 1. We also computed the Binder parameter for q = 1 while varying the depth D of a tree; the result is shown in Fig. 3(b). In all previously known cases of phase transitions detected by this parameter, a discontinuous jump from zero to nonzero is found at the transition temperature in the thermodynamic limit. However, it is unlikely that such a transition occurs in the limit D → ∞ because the Binder parameter for large ǫ moves farther away from zero as D increases. Note that we do not rule out the possibility of another phase transition detectable by other methods, which remains an open problem.
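The Binder parameter itself is straightforward to estimate from samples. The sketch below uses the standard cumulant form 1 − ⟨x⁴⟩/(3⟨x²⟩²) and checks the two limiting behaviors quoted above, with synthetic Gaussian and bimodal samples standing in for ∆π_A:

```python
import random

def binder(samples):
    """Binder parameter 1 - <x^4> / (3 <x^2>^2) of samples x (here x plays the
    role of Delta pi_A = pi_A - 1/2): ~0 for Gaussian data, nonzero for bimodal."""
    n = len(samples)
    m2 = sum(x * x for x in samples) / n
    m4 = sum(x ** 4 for x in samples) / n
    return 1.0 - m4 / (3.0 * m2 * m2)

# Synthetic stand-ins for the distribution of pi_A - 1/2 (not data from the paper):
rng = random.Random(0)
gaussian = [rng.gauss(0.0, 0.1) for _ in range(200_000)]
bimodal = [rng.choice((-0.4, 0.4)) + rng.gauss(0.0, 0.01) for _ in range(200_000)]
```

For the Gaussian samples the parameter vanishes up to sampling error, while for the symmetric bimodal samples it approaches 2/3, since ⟨x⁴⟩ ≈ ⟨x²⟩² there.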
IV. MUTUAL INFORMATION BETWEEN TWO NODES
As described in the preceding section, we examined the distribution of a symbol on a node but found no significant difference between a PCFG and a PCSG. For the discussion in this section, we turn our interest to mutual information, which has a close relation to the two-point correlation function [37] and which has been used for measuring correlation in symbolic sequences such as formal and natural languages [8, 38, 39], music [8], birdsong [40], DNA [41], and so forth. We denote the mutual information between nodes i and j, as depicted in Fig. 4, as

I_{i,j}(q, M) ≡ Σ_{σ_i, σ_j} P(σ_i, σ_j) ln [ P(σ_i, σ_j) / ( P(σ_i) P(σ_j) ) ].   (2)

This measures the dependence between the two nodes.
Although the behavior of mutual information in a PCFG is well understood through theoretical analysis [8], that analysis is also based on context-free independence. Consequently, what happens to the mutual information in a PCSG, where the independence is broken, is again non-trivial. Before presenting the results of the numerical analysis, we introduce some notation and quantities. In the following, we designate a node by a binary sequence that represents the path from the root to the node, assigning 0 and 1, respectively, to a left and a right child. For example, nodes (), (0), and (0, 1) represent the root, the left child of the root, and the right child of the left child of the root, respectively. To characterize the relative position of two nodes, we introduce the two distinct distances described in Fig. 5. The first is the structural distance, i.e., the length of the path between the two nodes. The second, designated as the horizontal distance, is the number of nodes lying horizontally between the two nodes. If the depths of the two nodes differ, the horizontal distance is the number of nodes between the higher node and the lower node's ancestor at the same depth as the former.
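Both distances are easy to compute from the binary path labels. The sketch below encodes a node as a tuple of bits and reproduces the example values given in the caption of Fig. 5:

```python
def structural_distance(i, j):
    """Path length between two nodes given as bit-tuples (0 = left, 1 = right child)."""
    k = 0
    while k < min(len(i), len(j)) and i[k] == j[k]:
        k += 1                          # k is now the depth of the lowest common ancestor
    return (len(i) - k) + (len(j) - k)

def horizontal_distance(i, j):
    """Difference of horizontal indices at the depth of the higher node; the deeper
    node is first replaced by its ancestor at that depth."""
    d = min(len(i), len(j))
    idx_i = int("".join(map(str, i[:d])) or "0", 2)
    idx_j = int("".join(map(str, j[:d])) or "0", 2)
    return abs(idx_i - idx_j)

# The example values from the caption of Fig. 5, with i = (1, 0, 0, 0):
i = (1, 0, 0, 0)
assert structural_distance(i, (0, 0, 0, 0, 0)) == 9
assert horizontal_distance(i, (0, 0, 0, 0, 0)) == 8
```

The horizontal index at depth d is simply the path prefix read as a binary number, which is why the horizontal distance grows exponentially in the structural distance within the opposite subtree.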
One of the two nodes was fixed at i = (1, 0, 0, 0, 0, 0), the leftmost node of depth 6 in the subtree whose root is the right child of the root of the whole tree. The other node j could be any node in the whole tree. The relation between the structural and horizontal distances differs depending on whether node j belongs to the left or the right subtree, as presented in Fig. 5. Presuming that the depth of node j is fixed, when j is in the left subtree, i.e., j = (0, · · ·), the structural distance is the same irrespective of the horizontal distance. However, when j is in the right subtree, i.e., j = (1, · · ·), the horizontal distance is roughly exponential in the structural distance.
In the context-free case with q = 0, the dependence of the mutual information on the two distances is already known. Lin and Tegmark [8] proved that the mutual information decays exponentially with the structural distance. Recalling that the mutual information in a Markov chain decays exponentially with the chain length, this result is intuitively reasonable given context-free independence. When j is in the left subtree, the mutual information is the same for any node j of the same depth, because it depends only on the structural distance, which is independent of the horizontal distance. However, when j is in the right subtree, the mutual information decays according to a power law of the horizontal distance, because the horizontal distance grows exponentially in the structural distance. One main claim of Lin and Tegmark [8] was that this power law might be the mechanism behind the power-law decay of mutual information in natural language texts.
In the context-sensitive case with q > 0, we examined the behavior of the mutual information. We sampled 10⁸ complete trees of depth D = 7 and estimated I. Because the mutual information between X and Y decomposes as S(X) + S(Y) − S(X, Y), where S(·) is the Shannon entropy, we computed the mutual information by estimating each entropy from the empirical distribution. Such an estimate is biased with respect to the entropy of the true distribution, resulting in a biased mutual information that is not negligible in the region of small mutual information. Consequently, to compute the entropy in the present and the subsequent sections, we used the bias-reduced estimator proposed in Ref. [42], represented by

Ŝ = Ψ(N) − (1/N) Σ_x n_x Ψ(n_x).

Therein, Ψ is the digamma function, x represents a state that X takes, n_x denotes the number of samples with X = x, and N = Σ_x n_x is the total number of samples.

FIG. 4. Mutual information I_{i,j} defined by Eq. (2) between the red node i and the blue node j.
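An estimator of this form is simple to implement. Since all counts are positive integers, the digamma function can be evaluated exactly via ψ(n) = −γ + Σ_{k=1}^{n−1} 1/k. The sketch below assembles the entropy estimate and the decomposition I = S(X) + S(Y) − S(X, Y); the synthetic independent and fully dependent pairs at the end are illustrative placeholders, not data from the paper:

```python
import math
import random
from collections import Counter

EULER_GAMMA = 0.5772156649015329

def digamma_int(n):
    """psi(n) at a positive integer n: psi(n) = -gamma + sum_{k=1}^{n-1} 1/k."""
    return -EULER_GAMMA + sum(1.0 / k for k in range(1, n))

def entropy_hat(samples):
    """Bias-reduced entropy estimate S = psi(N) - (1/N) sum_x n_x psi(n_x), in nats."""
    counts = Counter(samples)
    N = len(samples)
    return digamma_int(N) - sum(n * digamma_int(n) for n in counts.values()) / N

def mutual_information_hat(pairs):
    """I(X;Y) = S(X) + S(Y) - S(X,Y), each entropy estimated with entropy_hat."""
    return (entropy_hat([x for x, _ in pairs])
            + entropy_hat([y for _, y in pairs])
            - entropy_hat(pairs))

# Sanity checks on synthetic binary pairs:
rng = random.Random(0)
xs = [rng.randint(0, 1) for _ in range(20_000)]
ind = [(x, rng.randint(0, 1)) for x in xs]   # independent pair: I should be ~0
dep = [(x, x) for x in xs]                   # fully dependent pair: I should be ~ln 2
```

The correction matters precisely in the small-I regime mentioned above, where the naive plug-in estimate would be dominated by its positive bias.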
Figure 6 shows I's for an M generated with ǫ = 10⁻² and q = 1, where rewriting always refers to the context. The structural-distance dependences of I for j belonging to the left and right branches are shown in Figs. 6(a) and 6(b), respectively, and the horizontal-distance dependences in Figs. 6(c) and 6(d). When j belongs to the right branch, I decays exponentially in the structural distance and follows a power law in the horizontal distance. However, different behavior is observed when j belongs to the left branch, i.e., j = (0, · · ·), as shown in the left subfigures (a) and (c). In Fig. 6(a), I takes clearly different values even at the same structural distance, whereas in Fig. 6(c) it decays as a power law of the horizontal distance, similarly to the case with j = (1, · · ·). This behavior differs from that found in a PCFG.
The mutual information between nodes in a PCSG depends explicitly on the horizontal distance. This observation can be attributed to the context sensitivity inherent in PCSG rules. If the context-free independence holds, then a node can correlate with other nodes only along the path in the tree graph, which engenders the exponential decay with the structural distance. In contrast, in a PCSG, where each rule involves the context L and R as well as A, a node can correlate with its left and right neighbors directly, even in the absence of a direct path between them. This horizontal correlation can bypass the long structural distance between two nodes belonging to different subtrees, leading to the notion of the effective distance.

FIG. 5. Structural and horizontal distances between nodes i and j with i = (1, 0, 0, 0) and j = (0, 0, 0, 0, 0), (0, 1, 1, 0, 1), (1, 0, 0, 1, 1), or (1, 1, 1, 1, 0). Different colors and lines represent different j's. The structural distance is the path length from i to j, denoted by the line along the edges. The horizontal distance is the number of nodes lying horizontally between the higher node and the lower node's ancestor of the same depth as the former, as indicated by the horizontal arrows. Nodes j = (0, 0, 0, 0, 0) and (0, 1, 1, 0, 1) are in the left subtree; the horizontal distance is 8 in the former case and 2 in the latter, whereas the structural distance is 9 in both cases. Node j = (1, 0, 0, 1, 1) belongs to the right subtree; the structural and horizontal distances between this node and i are, respectively, 3 and 1. Node j = (1, 1, 1, 1, 0) also belongs to the right subtree; its structural and horizontal distances are both 7. When j belongs to the right branch, the horizontal distance grows exponentially as the structural distance increases.

FIG. 6. Mutual information I_{i,j} defined by Eq. (2) against the distance between i and j. Weights M are generated with ǫ = 10⁻². The context sensitivity is set to q = 1. The position of i is fixed at (1, 0, 0, 0, 0, 0). Mutual information against the structural distance when node j is in the left branch, i.e., j = (0, · · ·), is shown in (a); the same quantity when node j is in the right branch, i.e., j = (1, · · ·), is in (b). Similarly, plots against the horizontal distance are shown in (c) for j = (0, · · ·) and in (d) for j = (1, · · ·). The result when node j is the root, i.e., j = (), is in (a). Markers and colors are different for different depths.
As shown in Fig. 7, the horizontal distance increases exponentially with the effective distance. If the mutual information decays exponentially not with the structural distance but with the effective distance, then it will decay as a power law in the horizontal distance, irrespective of whether node j belongs to the left or the right branch. The effective distance is defined as follows. Presuming that nodes i′ and j′ are ancestors of i and j, respectively, and that i′ and j′ are horizontal neighbors of one another, the effective distance between nodes i and j is the sum of the path lengths from i to i′ and from j to j′. Here, we set the effective distance equal to the structural distance if one of the two nodes is the ancestor of the other. We plot the same I as in Fig. 6, but against the effective distance, in Fig. 8(a). From this, it can be confirmed that the mutual information decays exponentially with the effective distance, as expected. This result indicates the existence of a typical effective distance corresponding to a correlation length, the inverse of the decay rate; the mutual information is small beyond this typical distance.
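One reading of this definition can be sketched as follows. How to count the case in which the two nodes are themselves horizontal neighbors is our own choice (here the distance is 0, taking a node to be its own ancestor); the text does not pin this edge case down:

```python
def effective_distance(i, j):
    """Effective distance between nodes given as bit-tuples (0 = left, 1 = right).
    If one node is an ancestor of the other, this equals the structural distance;
    otherwise it is the smallest total path length down from a pair of ancestors
    i', j' at a common depth whose horizontal indices differ by exactly 1."""
    def index(path, d):
        return int("".join(map(str, path[:d])) or "0", 2)
    if i[:len(j)] == j or j[:len(i)] == i:        # ancestor case
        return abs(len(i) - len(j))
    best = None
    for d in range(min(len(i), len(j)) + 1):
        if abs(index(i, d) - index(j, d)) == 1:   # horizontally adjacent ancestors
            cand = (len(i) - d) + (len(j) - d)
            best = cand if best is None else min(best, cand)
    return best
```

For two nodes in different subtrees, the shortest horizontal "bypass" found by this loop is typically far shorter than the structural path through their lowest common ancestor, which is the point of the definition.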
It is intuitively reasonable to infer that the mutual information decays exponentially with the effective distance. The joint probability P(σ_0, · · ·, σ_{2^{D+1}−2}) of all nodes is the product of all 2^D − 1 rewriting weights. For two nodes i and j, marginalizing the remaining nodes yields the joint probability P(σ_i, σ_j) of the two nodes. The greatest contribution to this is the product of the weights on the shortest effective path, described by the blue dashed line in Fig. 7. Although an effective path and its corresponding weights depend on the order of application of rules at each step, the length of the shortest effective path asymptotically equals the effective distance. Therefore, the joint probability of two nodes scales as an exponential function of the effective distance. This result implies that the mutual information scales in the same manner [8].

FIG. 8. In (a), (b), and (c), the parameter in the lognormal distribution is ǫ = 10⁻², the context sensitivity is q = 1, and node i is fixed at (1, 0, 0, 0, 0, 0).
What we describe here is not unique to this instance; it is typically observed across the sampled M's. We measured I for 10 M's under the same settings and computed the averages and standard deviations of ln I over j's at each effective distance and over M's. Whereas mutual information is always non-negative, the estimate by the method of [42] sometimes takes negative values when the true value is small. We simply excluded non-positive estimates when computing the logarithm. This exclusion biases the average upward, but the bias was slight in this case. The results presented in Fig. 8(b) show that the exponential decay in the effective distance discussed above for a single M is observed across the 10 M's. Figure 8(c) also presents normalized histograms of ln I at effective distance 5. The points are distributed around the average. The deviations in Fig. 8(b) and the distribution in Fig. 8(c) originate from differences in j's and M's rather than from sample fluctuations.
The rate of decay and the correlation length depend on the weights M, causing the average rate to change as the parameter ǫ varies. One can infer that, as ǫ increases, the distribution of trees under the generated weights M tends to approach the uniform distribution; the mutual information is therefore expected to decay faster, meaning that the correlation length becomes smaller. Additionally, the rate of decay depends on the context sensitivity q. With larger q, rewriting operations depend not only on the rewritten symbol but also on the context with higher probability, which seems to engender slower decay. The numerically computed mutual information for different ǫ and q, presented in the Supplemental Material, follows these expectations.
FIG. 9. Context-free independence breaking J defined by Eq. (3) is the mutual information between the red nodes k and l and the blue nodes m and n.
V. QUANTIFICATION OF CONTEXT-FREE INDEPENDENCE BREAKING
Finally, we investigate the effect of context sensitivity more directly by quantifying the extent to which the context-free independence is broken. This independence means that two subtrees are mutually independent under the condition that the symbols of their roots are fixed. Therefore, quantifying the breaking of the context-free independence involves measuring the mutual information between the subtrees under this condition. However, obtaining the distribution of a large subtree requires extremely large amounts of data. To overcome this difficulty, we instead examine the mutual information between the children of the two roots, as shown in Fig. 9. We denote this mutual information as

J_{i,j;A,B}(q, M) ≡ Σ_{σ_k, σ_l, σ_m, σ_n} P(σ_k, σ_l, σ_m, σ_n | σ_i = A, σ_j = B) ln [ P(σ_k, σ_l, σ_m, σ_n | σ_i = A, σ_j = B) / ( P(σ_k, σ_l | σ_i = A, σ_j = B) P(σ_m, σ_n | σ_i = A, σ_j = B) ) ].   (3)

Therein, k and l respectively represent the left and right children of i; m and n are the children of j. This quantity is always zero for any i and j in a PCFG because of the context-free independence, which is the requirement that must be met for this quantity to be a meaningful metric of the breaking of independence.
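Estimating J from sampled trees reduces to a conditional mutual-information estimate. The sketch below uses a plain plug-in estimator (for clarity, rather than the bias-reduced one used in the paper) on synthetic child-pair samples; a real computation would feed in tuples (σ_i, σ_j, (σ_k, σ_l), (σ_m, σ_n)) collected from generated trees:

```python
import math
import random
from collections import Counter

def plugin_mi(pairs):
    """Plug-in estimate of I(X;Y) in nats from a list of (x, y) samples."""
    N = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum(n / N * math.log(n * N / (px[x] * py[y])) for (x, y), n in pxy.items())

def independence_breaking(samples, A, B):
    """Estimate J_{i,j;A,B}: the mutual information between the child pair of node i
    and the child pair of node j, restricted to trees with sigma_i = A, sigma_j = B.
    `samples` is a list of tuples (sigma_i, sigma_j, (k_sym, l_sym), (m_sym, n_sym))."""
    restricted = [(kl, mn) for si, sj, kl, mn in samples if si == A and sj == B]
    return plugin_mi(restricted)

# Synthetic placeholders: child pairs that are independent vs. fully dependent.
rng = random.Random(0)
ind = [(0, 0, (rng.randint(0, 1), rng.randint(0, 1)),
              (rng.randint(0, 1), rng.randint(0, 1))) for _ in range(20_000)]
dep = []
for _ in range(20_000):
    kl = (rng.randint(0, 1), rng.randint(0, 1))
    dep.append((0, 0, kl, kl))
```

In a PCFG, the restricted samples are independent by construction, so this estimate stays at zero up to sampling noise; any systematic positive value signals the independence breaking that J is designed to measure.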
In addition to measuring the degree of context-free independence breaking, the metric J has other interpretations. One derives from theoretical physics: if the network of interactions forms a tree, where every interaction in the system is between a node and its child, then J is zero. In the presence of loops in the network, as in a PCSG, J can take a positive value. In this sense, J represents the degree to which the network of interactions deviates from a tree. Another interpretation is linguistic: suppose that two constituents or phrases, i.e., subtrees of a derivation, are, for example, a noun phrase and a verb phrase. Under this condition, the structures of the noun phrase and the verb phrase are mutually dependent, and J represents the strength of this dependence.
We measured the context-free independence breaking J in the same manner as the mutual information I in the preceding section, under the same setting, where i = (1, 0, 0, 0, 0, 0), q = 1, ǫ = 10⁻², and 10⁸ trees of depth 7 were sampled for each M. Our observations revealed that J behaves very similarly to I. Figure 10(a) shows J for A = B = 0 against the effective distance for an M generated from the lognormal distribution. It is evident that J exhibits exponential decay with the effective distance. Again, this finding indicates that there exists a correlation length, or a typical effective distance, beyond which the dependence between two subtrees is small. We computed the averages and standard deviations of ln J over j's at each effective distance, over M's, and over A and B, using the data size, i.e., the number of generated trees satisfying σ_i = A and σ_j = B, as the weights. Additionally, we simply discarded non-positive estimates of J, which introduced only a small bias. Figure 10(b) presents the results, suggesting that the exponential decay of J with the effective distance occurs across different M's, as well as different A's and B's. Figure 10(c) shows the normalized histogram of ln J at effective distance 5, where the data sizes were used as the weights. The distribution of ln J's centers around the red vertical line representing the average.
The dependence of J on the parameter ǫ and the context sensitivity q exhibits tendencies similar to those observed for I. In particular, as ǫ increases or q decreases, the decay becomes more pronounced and the correlation length smaller. The Supplemental Material provides additional results for different values of ǫ and q. In a general system, I and J do not necessarily behave similarly. Indeed, in a PCFG, I is positive and decays exponentially with the structural distance, whereas J is always zero. It is thus somewhat non-trivial that both I and J decay exponentially with the effective distance in a PCSG.
VI. CONCLUSION
A PCFG, a simple mathematical model for randomly generating trees, has been used to model various hierarchical phenomena, including natural languages. This model satisfies the assumption of context-free independence. Although this feature allows for the theoretical analysis of various properties of a PCFG, it severely restricts the range of distributions that a PCFG can express.
We introduced a simple PCSG by relaxing the context-free independence and analyzed its statistical properties systematically. First, we examined the distribution of a symbol on a single node, which is to a PCSG what magnetization is to a spin system. Although the context sensitivity affects this distribution, its effect brings only continuous and quantitative changes; such changes can occur even without context sensitivity, for example in the interpolation between two PCFGs. Our numerical investigation also shows that the Binder parameter of the mean ratio of a symbol, which is an analytic function of ǫ in the context-free RLM, is unlikely to be discontinuous in the context-sensitive case.
The second quantity of interest is the mutual information between two nodes, which is closely related to the two-point correlation function [37]. It is noteworthy that the mutual information decays exponentially with the effective distance between two nodes, a consequence of the horizontal correlation induced by context sensitivity. This contrasts with the fact that the decay of the mutual information in a PCFG is exponential in the structural distance, i.e., the path length.
In addition, to quantify the degree to which context-free independence is broken, we proposed the mutual information between two pairs of nodes under the condition that the parent symbols are fixed. This metric can also indicate the degree to which a network of interactions deviates from a tree in theoretical physics, and the mutual dependence between the structures of two constituents in linguistics. This quantity highlights the most distinct difference between a PCFG and a PCSG: the context-free independence breaking decays exponentially with the effective distance in a PCSG, similar to the mutual information between two nodes, whereas the breaking always remains zero in a PCFG.
Possible future issues, in our view, are divisible into four main directions. First, it is necessary to develop methods for theoretical analysis and efficient numerical approximation to confirm and further investigate the behaviors of PCSGs observed in this study, such as the exponential decay of the mutual information and the context-free independence breaking. The main challenges are the exponential growth of tree sizes and the complex interactions due to context sensitivity.
Second, another important approach would be to examine specific PCSGs, particularly those exhibiting atypical behavior, in contrast to our analysis of the typical properties of randomly sampled PCSGs. It might be true that PCSGs with low probabilistic measures exhibit non-analytic behavior in ∆ as a function of the context sensitivity q, or non-exponential decay of the mutual information or context-free independence breaking. The existence of such PCSGs and the mechanism underlying their atypical behavior are left as intriguing open problems.
Third, CSG is not the only linguistic framework beyond CFG. Although the CSG framework makes tree structures context-sensitive in a straightforward manner, modern linguists do not consider a CSG to be a relevant model of a natural language. This skepticism arises because a CSG can generate a set of sentences extending beyond natural languages [30]. Also, formal language theory predominantly addresses surface sentences rather than syntactic structures [43]. Conversely, several alternative models have been proposed as grammars closer to natural languages, such as Tree Adjoining Grammar [44], Combinatory Categorial Grammar [45], and Minimalist Grammar [46]. The natural progression is to introduce probabilistic extensions to these grammars and to investigate their statistical properties, as examined in this study. In particular, probabilistic extensions of a CSG and of the three grammars described above will all violate the context-free independence, but their independence breaking J might decay exponentially, polynomially, or non-monotonically, depending on the grammar. If the decay is, for example, exponential in every model, then their decay rates might differ. These probabilistic grammars can be characterized by emphasizing the distinctions in their independence breaking J, thereby contributing to a comprehensive understanding of the grammars from a physical perspective.
As a fourth point, we discuss the application of our metric J for the context-free independence breaking, which is applicable not only to probabilistic grammars such as PCFGs but also to any distribution of a tree, including those underlying human languages and birdsongs. Earlier research has demonstrated that the behavior of mutual information in PCFGs, human languages, and birdsongs is similar in that it decays as a power-law function of the horizontal distance or the sequence length [8,40]. However, J will allow us to detect and quantify the distinction between human languages and PCFGs, given the empirical knowledge that context-free independence breaking occurs in natural languages [24,25]. It might also be possible to identify characteristics unique to human languages, which are not present in birdsongs, using J. By quantifying the degree of independence breaking, we can more deeply compare tree structures among different mathematical models or natural phenomena.
The structural distance dependence is shown in Figs. 6(a) and 6(b). The horizontal distance dependence is also shown in Figs. 6(c) and 6(d). When j belongs to the right branch, i.e., j = (1, · · · ), as shown in the right subfigures (b) and (d), what is observed with a PCFG roughly holds.
FIG. 6. Mutual information I defined by Eq. (2) against the distance between i and j. Weights M are generated with ε = 10⁻². The context sensitivity is set to q = 1. The position of i is fixed at (1, 0, 0, 0, 0, 0). Mutual information against the structural distance when node j is in the left branch, i.e., j = (0, · · · ), is shown in (a). The same quantity when node j is in the right branch, i.e., j = (1, · · · ), is in (b). Similarly, plots against the horizontal distance are shown in (c) for j = (0, · · · ) and (d) for j = (1, · · · ). The result when node j is the root, i.e., j = (), is in (a). Markers and colors are different for different depths.
FIG. 8. (a) Mutual information I defined by Eq. (2) against the effective distance between i and j. Weights M are generated from the lognormal distribution. Markers and colors differ for different depths. (b) Averages and standard deviations of ln I over the j's of the same effective distance and over the 10 M's generated. (c) Normalized histograms of ln I for the 10 M's for which the effective distance is 5. The red vertical line represents the average. For all of (a), (b), and (c), the parameter in the lognormal distribution is ε = 10⁻², the context sensitivity is q = 1, and node i is fixed at (1, 0, 0, 0, 0, 0).
FIG. 10. (a) Degree of the context-free independence breaking, or parent-fixed mutual information J defined by Eq. (3) for A = B = 0, against the effective distance between i and j. Weights M are generated from the lognormal distribution. Markers and colors differ for different depths. (b) Averages and standard deviations of ln J over the j's of the same effective distance, over the symbols A of node i and B of j, and over the 10 M's generated. (c) Normalized histograms of ln J for the 10 M's with effective distance 5. The red vertical line represents the average. For all of (a), (b), and (c), the parameter in the lognormal distribution is ε = 10⁻², the context sensitivity is q = 1, and node i is fixed at (1, 0, 0, 0, 0, 0).
(a) Example of a derivation generated using a CFG in CNF. A node with its children means that the node is rewritten as the children. In this example, the initial string is C. Applying the first rule C → DA, the string becomes DA. The next rule D → BD (or A → FE) rewrites the string as BDA (or DFE). The remainder of the derivation is similar. The final string, i.e., the sentence, is becda. (b) Syntactic structure behind the sentence Colorless green ideas sleep furiously in terms of immediate constituent analysis. This diagram means, for instance, that the noun phrase (NP) green ideas consists of the adjective (A) green and the noun (N) ideas. Roughly speaking, a nonterminal symbol in a CFG corresponds to a constituent in syntax; a terminal symbol corresponds to a word.
Pharmacokinetic, Hemostatic, and Anticancer Properties of a Low-Anticoagulant Bovine Heparin
Heparin is a centennial anticoagulant drug broadly employed for treatment and prophylaxis of thromboembolic conditions. Although unfractionated heparin (UFH) has already been shown to have remarkable pharmacological potential for treating a variety of diseases unrelated with thromboembolism, including cancer, atherosclerosis, inflammation, and virus infections, its high anticoagulant potency makes the doses necessary to exert non-hemostatic effects unsafe due to an elevated bleeding risk. Our group recently developed a new low-anticoagulant bovine heparin (LABH) bearing the same disaccharide building blocks of the UFH gold standard sourced from porcine mucosa (HPI) but with anticoagulant potency approximately 85% lower (approximately 25 vs. 180 Heparin International Units [IU]/mg). In the present work, we investigated the pharmacokinetic profile, bleeding potential, and anticancer properties of LABH administered subcutaneously into mice. LABH showed a pharmacokinetic profile similar to that of HPI but different from that of the low-molecular weight heparin (LMWH) enoxaparin, and diminished bleeding potential, even at high doses. Subcutaneous treatment with LABH delays the early progression of Lewis lung carcinoma, improves survival, and brings beneficial health outcomes to the mice, without the advent of adverse effects (hemorrhage/mortality) seen in the animals treated with HPI. These results demonstrate that LABH is a promising candidate for prospecting new therapeutic uses for UFH.
Introduction
Unfractionated heparin (UFH) obtained from animal tissues has been massively exploited as anticoagulant agent for almost a century, which makes it one of the oldest extant biologic drugs. 1 Although low-molecular weight heparins (LMWHs) and directly acting oral anticoagulants have been increasingly prescribed for treatment and prophylaxis of most thromboembolic diseases, UFH is still the most potent anticoagulant, being indispensable for patients
requiring a rapid-onset and deep low-coagulant state, such as those undergoing cardiopulmonary bypass, extracorporeal membrane oxygenation, hemodialysis, and severe deep-vein thrombosis and pulmonary embolism. 2 In addition to anticoagulant activity, UFH also exerts many other biological effects, including modulation of different proteases and components of the extracellular matrix and binding to cytokines and growth factors. 3 Several preclinical and clinical studies have already demonstrated that UFH has a remarkable pharmacological potential for treating diseases unrelated to thromboembolism such as cancer, atherosclerosis, inflammation, and viral infections. 4,5 Among them, we can highlight the therapeutic effects of UFH on different pathways related to cancer progression, including angiogenesis, tumor cell proliferation and adhesion, immune system modulation, and tumor cell migration and invasion during metastasis. 6 However, the high anticoagulant potency of UFH makes the doses commonly necessary to achieve satisfactory effects on therapeutic targets related to cancer or other non-thromboembolic diseases unsafe due to an elevated risk of hemorrhage incidents. 7 In addition to this serious adverse effect, UFH is clinically employed by the intravenous route, making it unfeasible for long-term outpatient treatments. 8 Although LMWHs and some UFH mimetics obtained through extensive chemical/enzymatic processes have already proven to be effective and pose reduced risk of bleeding, different stakeholders involved in the production and research of heparins are still looking for new and more feasible UFH derivatives that preserve the physical-chemical features required for targeting new therapeutic pathways but nonetheless have decreased anticoagulant activity.
9 All UFHs currently available for clinical use and production of LMWHs are produced using heparin from porcine intestine (HPI) as raw material, except in some countries, including Brazil, Argentina, and India, which also employ UFH products obtained from heparin from bovine intestine (HBI). [10][11][12][13] However, the use of HPI and HBI as interchangeable UFHs requires special attention due to their chemical and pharmacological differences. 14 The increased proportion of N-sulfated but not 6-sulfated α-glucosamine and the diminished quantity of the disaccharide composed of N,3,6-trisulfated α-glucosamine linked to β-glucuronic acid (►Fig. 1), which is directly involved in the potentiation of antithrombin (AT), make the anticoagulant activity of HBI significantly lower than that of HPI (approximately 120 vs. 180 international units [IU]/mg, respectively).

Fig. 1 Novel bovine heparins. Average pharmaceutical bovine heparin preparations (HBI) with approximately 100 IU/mg anticoagulant potency are actually composed of a mixture of low-anticoagulant (approximately 25 IU/mg) and high-anticoagulant (approximately 200 IU/mg) heparin chains, named LABH and HABH, respectively, which can be separated through a single anion-exchange chromatography step. These bovine derivatives, as well as native HBI and porcine heparin (HPI), are composed of the same disaccharide units but in different proportions. HABH has a disaccharide composition similar to HPI, while LABH contains lower proportions of N,6-disulfated and N,3,6-trisulfated α-glucosamine units, which are important components of the AT-binding region of heparin. For further details on the structural and pharmacological features of these heparins, see Tovar et al 2019. 18
[15][16][17] Nevertheless, our research group demonstrated that pharmaceutical HBI is actually composed of a mixture of two types of heparin chains bearing different chemical compositions and anticoagulant activities, which in turn can be separated through a single ion-exchange chromatography step (►Fig. 1), named high-anticoagulant bovine heparin (HABH) and low-anticoagulant bovine heparin (LABH). 18,19 While HABH has a chemical composition (enriched in N,6-disulfated α-glucosamine) and anticoagulant activity (approximately 200 IU/mg) similar to HPI, LABH has diminished potency (approximately 25 IU/mg) due to the preponderance of disaccharides containing N-sulfated but not 6-sulfated α-glucosamine (►Fig. 1). Different from HBI, HPI and pharmaceutical preparations sourced from bovine lung contain diminished amounts of low-anticoagulant heparin chains such as those composing LABH. 18 Considering that LABH has anticoagulant potency markedly lower than that of the gold standard HPI, and consequently a reduced bleeding risk, it is a suitable candidate for prospecting novel pharmaceutical uses of UFH. 20 In the present work, we evaluate the pharmacokinetic profile and bleeding potential of LABH administered subcutaneously (SC) in animal models. We also find that SC treatment with LABH delays early Lewis lung carcinoma (LLC) tumor progression in mice, improves survival, and brings beneficial health outcomes (reduced weight loss and incidence of complicated tumors) to the sick animals. Besides the potential for development as a new oncologic coadjutant, the establishment of basic pharmacological parameters, such as pharmacokinetics and safety, is paramount for further research on the use of LABH as a therapeutic agent for treatment of other non-thromboembolic diseases.
Material and Methods
Heparins

LABH employed in the assays was prepared by fractionating a pool containing 10 batches of pharmaceutical preparations of HBI available in the Brazilian market through ion-exchange chromatography, following "Protocol 1" described in Tovar et al. 18 Pharmaceutical HPI (Hemofol) was obtained from Cristália (Itapira, Brazil) and the LMWH enoxaparin (Clexane) from Sanofi (Singapore).
Animal Experiments
Experiments were conducted with adult (8-13 weeks age) wild type C57Bl/6 mice maintained at 22 to 24°C, artificial light cycles of 12 hours, and ad libitum feeding. The animals submitted to invasive/surgical procedures were anesthetized with 35 mg/kg ketamine and 9 mg/kg xylazine (both from Ceva Brasil; Paulínia, Brazil) administered intraperitoneally. All in vivo assays were performed in compliance with the guidelines for animal care and experimentation of our institution (Federal University of Rio de Janeiro, Brazil).
In Vitro Anti-FIIa and -FXa Activities
The anticoagulant activities of the heparins were determined by measuring inhibition of thrombin (FIIa) and activated factor X (FXa) with chromogenic assays, as previously described. 21 Different concentrations of HPI, LABH, and LMWH (0–0.4 μg/mL) were incubated (60 seconds at 37°C) with 10 nM AT and 2 nM FIIa or FXa (Hematologic Technologies; Essex Junction, United States) or with human plasma. After incubation, the anti-FIIa or -FXa activities were determined by adding 100 μM of the chromogenic substrate S-2238 or S-2765 (Chromogenix; Molndal, Sweden), respectively, and then recording absorbance (405 nm) during 300 seconds in a ThermoMax Microplate Reader (American Devices; Sunnyvale, United States). Anti-FIIa and -FXa potencies (IU/mg) were calculated on the basis of parallel-line assays performed with the Sixth International Heparin Standard (for LABH and HPI) and with the Third Standard for LMWH.
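The parallel-line calculation mentioned at the end of this section can be sketched as follows: responses are regressed on log-dose for the standard and the test heparin under a common-slope constraint, and the horizontal displacement between the two fitted lines gives the relative potency. All data points below are invented for illustration and are not assay values from this study.

```python
# Parallel-line potency sketch with hypothetical response data.
def parallel_line_potency(log_doses, resp_std, resp_test):
    n = len(log_doses)
    xbar = sum(log_doses) / n
    sxx = sum((x - xbar) ** 2 for x in log_doses)
    # slope pooled over both preparations (the "parallel" constraint)
    slope = sum((x - xbar) * (ys + yt)
                for x, ys, yt in zip(log_doses, resp_std, resp_test)) / (2 * sxx)
    a_std = sum(resp_std) / n - slope * xbar    # intercepts at log-dose 0
    a_test = sum(resp_test) / n - slope * xbar
    # horizontal displacement between the lines = log10(relative potency)
    return 10 ** ((a_test - a_std) / slope)

log_doses = [-1.0, -0.5, 0.0]     # log10 of dose
resp_std = [10.0, 15.0, 20.0]     # standard: slope 10 per decade
resp_test = [13.0, 18.0, 23.0]    # test: same slope, shifted up by 3
rel_potency = parallel_line_potency(log_doses, resp_std, resp_test)
print(rel_potency)  # -> ~1.995: the test behaves like ~2x the standard
```

Multiplying the relative potency by the assigned potency of the International Standard then yields the IU/mg value of the test preparation.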
Pharmacokinetic Assessments
Plasmatic concentrations of the heparins were indirectly estimated by measuring residual anti-FXa or anti-FIIa activity, as described elsewhere. 22 Briefly, LMWH, HPI, and LABH were administered SC to the animals at doses of 2, 8, and 20 mg/kg, respectively. Blood samples were collected from the inferior vena cava at different times after the treatment (0–10 hours). The anti-FXa or anti-FIIa activities (IU/mL) were measured as described in the previous section by using different dilutions of the plasma from each treatment and then calculated on the basis of the values obtained with naïve plasma spiked with known concentrations of HPI, LABH, or LMWH.
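The pharmacokinetic parameters reported later (Tmax, Cmax, AUC, T1/2) follow from an activity-time curve by standard non-compartmental arithmetic. A minimal sketch with an invented anti-FXa curve (not the authors' data):

```python
from math import log

# Hypothetical anti-FXa activity-time curve after an SC dose.
times = [0.5, 1, 2, 4, 6, 8, 10]                       # hours
activity = [0.15, 0.40, 0.55, 0.30, 0.17, 0.09, 0.05]  # IU/mL

cmax = max(activity)                 # maximum plasma activity
tmax = times[activity.index(cmax)]   # time of the maximum

# AUC(0-10 h) by the trapezoidal rule
auc = sum((t2 - t1) * (a1 + a2) / 2
          for t1, t2, a1, a2 in zip(times, times[1:], activity, activity[1:]))

# terminal half-life from a log-linear fit to the last four points
xs, ys = times[-4:], [log(a) for a in activity[-4:]]
xbar, ybar = sum(xs) / 4, sum(ys) / 4
k = -sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
    / sum((x - xbar) ** 2 for x in xs)   # elimination rate constant (1/h)
t_half = log(2) / k

print(f"Tmax={tmax} h, Cmax={cmax} IU/mL, AUC={auc:.2f} IU*h/mL, T1/2={t_half:.1f} h")
```

Normalizing Cmax and AUC by the administered dose and the anti-FXa potency, as done for Table 1, allows absorption to be compared across heparins given at different doses.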
Bleeding Evaluations
Bleeding tendencies of the heparins were assessed with the in vivo model described in Tovar et al. 18 Briefly, animals were treated SC with different doses of LABH (8–40 mg/kg), HPI (1–16 mg/kg), or LMWH (8–16 mg/kg), or with saline (control). Bleeding was quantified 1 hour after heparin administration by collecting the blood spilled from cuts (1 mm diameter) in the tails of the animals into 1.5 mL of distilled water. Blood was collected for 10 minutes and then for the subsequent 50 minutes, and hemorrhage was quantified by measuring the dissolved hemoglobin (absorbance at 540 nm) with a ThermoMax Microplate Reader.
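Converting the measured absorbance into an estimate of blood volume requires a hemoglobin standard curve; the paper reports absorbance directly, so the calibration below is a purely hypothetical illustration of the arithmetic (Beer-Lambert linearity through the origin at a fixed dilution):

```python
# Invented calibration: known blood volumes spiked into 1.5 mL water.
std_volumes_ul = [0, 5, 10, 20, 40]        # uL of blood
std_a540 = [0.00, 0.06, 0.12, 0.24, 0.48]  # corresponding absorbances

# least-squares slope through the origin: absorbance per uL of blood
slope = sum(v * a for v, a in zip(std_volumes_ul, std_a540)) \
        / sum(v * v for v in std_volumes_ul)

def blood_loss_ul(a540):
    """Blood volume (uL) corresponding to a measured A540."""
    return a540 / slope

print(blood_loss_ul(0.30))  # -> 25.0 uL with this calibration
```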
Lewis Lung Carcinoma Model in Mice
LLC cells obtained from ATCC (Manassas, United States) were grown in Dulbecco's modified Eagle medium (DMEM) (Vitrocell; Campinas, Brazil), supplemented with 10% fetal bovine serum (Invitrogen; Waltham, United States). The LLC cells (5 × 10⁵ cells/60 µL) were inoculated SC into the dorsal region of the mice. One day after the inoculation, the animals were treated with SC injections of LABH, HPI, or LMWH (8 mg/kg each) or saline (control) once daily for 26 days (until D27). Growth of the tumors was monitored weekly (D1, D7, D14, D21, and D28) by measuring their cranium-caudal and lateral-lateral axes. After the treatment period (D28), the animals were sacrificed and the tumors were surgically resected and weighed.
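The tumor sizes reported later (in cm²) derive from the two measured axes, but the paper does not state the combining formula; one common convention is an ellipse approximation, sketched below purely as an assumption:

```python
from math import pi

def tumor_area_cm2(cranium_caudal_cm, lateral_lateral_cm):
    """Cross-sectional tumor area from the two perpendicular axes,
    approximating the outline as an ellipse. The ellipse convention is an
    assumption for illustration; the paper states only that the two axes
    were measured."""
    return pi * (cranium_caudal_cm / 2) * (lateral_lateral_cm / 2)

area = tumor_area_cm2(1.0, 0.6)
print(round(area, 3))  # -> 0.471 cm^2 under this convention
```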
Evaluation of Health Parameters
Different health parameters of both healthy (naïve) and sick animals treated with the different heparins or saline (control) were evaluated. Body weight of the animals was monitored weekly (D1, D7, D14, D21, and D28) during the treatment period. Tumor weights were subtracted from the body weights measured on D28. Ectoscopic evaluations were based on the onset and severity of ulcerations and hematomas caused by the tumors or by application of the heparins during or after (post-mortem) the treatment period. The mortality rate of the animals submitted to the different treatments was recorded daily and the cause of death was assessed by macropathological necropsy; ulceration and/or critical health condition were criteria for sacrificing an animal. The lungs of the animals were evaluated for the occurrence of metastasis by histological examination. A minimum of three slides per animal, stained with hematoxylin-eosin, were carefully examined.
Results and Discussion

LABH Differs from HPI and LMWH in the Profile of Anticoagulant Activity
We evaluated the in vitro anticoagulant activity of LABH based on anti-FIIa and anti-FXa activities and compared the effect with those of HPI and LMWH. Clearly, the three types of heparins present different activity profiles. LABH had low and HPI high anticoagulant activities in both assays (approximately 25 IU/mg and approximately 180 IU/mg, respectively; ►Fig. 2). In contrast, LMWH (enoxaparin) showed potent anti-FXa but low anti-FIIa activities (approximately 100 IU/mg and approximately 25 IU/mg; ►Fig. 2), as previously established elsewhere. [22][23][24] Although its low potency had already been determined by Tovar et al, 18 the distinct profile of LABH seen in these anticoagulant assays indicates that this heparin may also have therapeutic effects different from those of HPI and LMWH on non-hemostatic pathological conditions. We also found that the anti-FXa/anti-FIIa ratios of LABH in assays conducted with purified AT or human plasma (containing both AT and heparin cofactor II [HCII]) are approximately 1 (data not shown). Considering that HCII mediates anti-FIIa but not anti-FXa activity, this indicates that the contribution of HCII to the anticoagulant activity of LABH is similar to that previously reported for both HPI and HBI. 18
LABH and HPI Administered SC Have Similar Pharmacokinetic Profiles
The next step toward proposing a therapeutic use for LABH is to assure its absorption after SC administration and to define its pharmacokinetic profile. To achieve this objective, LABH, HPI, and LMWH were administered SC to mice at doses adjusted to assure their detection in the plasma by anti-FXa assay. The low anticoagulant activity of LABH requires administration of a higher dose (20 mg/kg) to make its detection in the mice plasma feasible. LABH and HPI have similar pharmacokinetic profiles (►Fig. 3), as estimated by the time required to achieve their maximum plasma concentrations (T max) and the elimination half-life (T 1/2), which are markedly distinct from the profile of LMWH (►Table 1). Clearly, we observed a similarity between the absorption of LABH and HPI but not with LMWH. When the maximum plasma concentrations (C max) and the area under the curve (AUC) were normalized to a similar dose and to their specific anti-FXa potencies, we observed that LABH is better absorbed than HPI but more poorly than LMWH (►Table 1). Although differing in the proportion of some disaccharide building blocks, the improved absorption of LABH in comparison to HPI certainly relates to its smaller molecular weight (13.5 vs. 17 kDa, respectively), which confirms the inverse correlation between SC suitability and molecular weight of heparins. 25 We also observed that the pharmacokinetic profiles of HPI administered SC, as monitored by anti-FXa or anti-FIIa activities, are similar (►Fig. 3C). Although there is little information on the pharmacokinetics of UFHs after SC administration, our results are in line with the literature reporting that high-molecular-mass heparins have poor SC pharmacokinetics and are eliminated by both renal filtration and uptake by endothelial cells, while LMWHs are better absorbed and removed from plasma mostly by the renal route. 26
LABH Administered SC Does Not Provoke Bleeding
Next, we evaluated the bleeding effect of LABH after SC administration. Bleeding is the major adverse effect of heparin, especially in the case of UFHs. 27,28 It is an obstacle to the use of UFHs in non-thromboembolic diseases, especially because such uses often require elevated doses and long periods of administration. 29 We evaluated bleeding by measuring the blood spilled from cuts in the tails of mice 1 hour after SC administration of LABH, HPI, and LMWH. Blood was collected for an initial period of 10 minutes and then during the subsequent 50 minutes (►Fig. 4).
In the initial 10 minutes of blood collection, none of the heparins provoked blood losses significantly higher than those of the animals treated with saline (control) (►Fig. 4A). Possibly, the mechanisms of primary hemostasis (e.g., vasoconstriction, change in vascular permeability and platelet adhesion) are able to assuage the initial bleeding, even in the animals heparinized with HPI. 30,31 In the subsequent 50 minutes, doses of HPI and LMWH above 8 mg/kg resulted in blood losses significantly higher than that measured in control animals (►Fig. 4B). On the other hand, SC administration of LABH did not increase bleeding, except for a modest effect in the animals treated with a very high dose (40 mg/kg).
Tovar et al showed that, different from HPI, HBI, and the bovine derivative HABH, LABH did not provoke bleeding even by intravenous (IV) administration. 18 Such a lack of hemorrhagic effect directly correlates with the low anticoagulant potency of LABH (approximately 25 IU/mg). Although LMWHs are certainly safer than non-modified UFHs, 32,33 we observed that even high SC doses of LABH provoked less bleeding than enoxaparin in the animals, possibly due to the improved SC absorption and high anti-FXa potency (approximately 100 IU/mg) of this LMWH. In conclusion, LABH is devoid of bleeding effect by both SC and IV administration and thus does not pose the worst adverse effect hindering the therapeutic use of UFHs in non-thromboembolic diseases.

Table 1 Pharmacokinetic parameters T max (time to reach maximum plasma concentration), T 1/2 (elimination half-life), AUC 0-10h (area under the curve), and C max (maximum plasma concentration), and molecular masses (M w) of LABH, HPI, and LMWH
Effect of LABH on LLC Tumor Progression
Anticancer effect is one of the most relevant non-hemostatic pharmacological activities reported for heparins. [34][35][36] We assessed this effect of LABH in comparison with those of HPI and LMWH using an experimental model of cancer based on tumor formation by SC inoculation of LLC cells in mice. 37 The animals received daily SC doses (8 mg/kg) of the three heparins. Measurement of the tumor size by examination of the animals showed that the three heparins delayed tumor progression, which was more evident up to the 14th day and became less pronounced at the examinations on the 21st and especially the 28th day (►Fig. 5A). A tumor was detected on the 14th day in nine of the 15 animals receiving saline but in only four of the 10 treated with LABH, all of small size (<0.5 cm²). None of the seven animals treated with LMWH had a tumor, and just one among the same number of animals treated with HPI did (shadowed area in ►Fig. 5A). On the 28th day after tumor cell inoculation, the animals were sacrificed and their tumors resected, examined, and weighed. Again, we observed that the heparins delayed tumor progression, which was less evident in the case of LMWH (►Fig. 5B, C). Animals treated with LABH and HPI showed a decrease in tumor size, but with no statistical significance, in comparison with the control animals. We did not identify tumor metastasis in the lungs of any group of animals after careful histological examination (not shown).
Subsequently, we attempted to examine the balance between the beneficial action of the three heparins on tumor progression and possible adverse effects of the drugs. The relative survival curves (►Fig. 6A) showed that animals inoculated with LLC cells and treated with HPI or saline had a high mortality rate, with most deaths related to ulcerations. Some animals treated with HPI that died prematurely showed an extensive post-mortem dorsal hematoma at the injection site, in addition to pulmonary hemorrhages, suggesting that bleeding was the major cause of death or at least contributed to it. These findings did not occur in the other groups of mice. The animals treated with LABH and LMWH showed a notable decrease in mortality compared with the animals treated with saline or HPI.
Another approach to examine the beneficial effects of the heparins during tumor progression was based on the loss/gain of body weight over the 28 days of treatment. Mice inoculated with LLC cells and treated with LABH showed an average body weight increase of approximately 11%, similar to the naïve group (approximately 14%) (►Fig. 6B). Animals treated with saline (control) had a decrease of 9% in body weight, while HPI and LMWH had more modest effects on the increase of the mice body weight (approximately 3 and 6%, respectively). We also evaluated the incidence of complicated tumors in the animals based on ectoscopic examination. The three types of heparins had favorable effects in preventing tumor ulceration and hematomas, but LABH and LMWH showed more pronounced benefits (►Fig. 6C).
Although most of the studies on the anticancer properties of heparins are focused on their P-selectin-mediated hematogenous anti-metastatic activities, both UFHs and LMWHs have also proven to be effective on other cancer therapeutic targets, such as upregulation of E-cadherin and inhibition of HGF, heparanase, and galectin-3. 38-42 UFH inhibits proliferation of tumor cells by modulating the proto-oncogenes c-Myc and c-Fos, which downregulates the phosphorylation of MAPK, part of the protein kinase C signaling cascade, thus hindering the growth of both primary and metastatic tumors. 43 Another noteworthy anticancer effect of heparins is the release of the natural anticoagulant TFPI (tissue factor pathway inhibitor) from endothelial cells. TFPI has been shown to inhibit the hypercoagulability and angiogenesis resulting from the overexpression of tissue factor by tumor cells. 44 Besides these hemostatic effects, TFPI, especially TFPI-2 synthesized by vascular cells, has also been demonstrated to exert anti-metastatic activity by downregulating effectors involved in the degradation of the extracellular matrix during extravasation or intravasation, such as heparanase and matrix metalloproteinase-1. 45,46 Other heparin/heparinoid derivatives devoid of anticoagulant activity have already exhibited anticancer effects. A heparan sulfate hexasaccharide has proven to inhibit cancer stem cell renewal and to induce apoptosis in three types of tumor cells. 47 Another study tested a low-anticoagulant heparin in patients with myeloid leukemia, targeting the survival of the leukemic stem cells in the bone marrow, which resulted in increased remission/recovery rates and no adverse events. 48 The sulfated non-anticoagulant heparin (S-NACH) increases TFPI-2 levels threefold and suppresses pancreatic tumor growth and metastasis in animal models.
49,50 Although we have not demonstrated a specific mechanism of action, the delay in early LLC tumor progression and the improved survival and health (reduced weight loss and incidence of complicated tumors) promoted by SC administration of bleeding-risk-free doses of the low-anticoagulant LABH in the mice are unlikely to relate to the survival/viability of cancer stem cells, considering that LABH did not affect the final size of the tumors, but may instead relate to a possible rise in the plasmatic levels of TFPI.
Conclusion
Several clinical trials have shown that both UFH and LMWHs might bring beneficial clinical outcomes to patients with different solid tumors; nevertheless, the elevated risk of hemorrhage incidents jeopardizes their use. 51 The main objective of this study was to evaluate the effect of a heparin with low anticoagulant activity (named LABH), a derivative from pharmaceutical preparations of HBI, on tumor progression in mice. The effect of this derivative was compared with that of the gold standards HPI and enoxaparin (LMWH). The three types of heparins were administered SC to mice at the same dose by mass, not by anticoagulant potency (IU), since the effect we tested is not related to the action of the heparins on coagulation.
The administration of HPI delayed tumor progression, but in parallel it had a toxic effect on the animals, as indicated by the increased mortality, some of it due to bleeding. In contrast, LABH had a similar effect in hindering tumor progression, but without such deleterious effects. These observations show that LABH has a wider therapeutic window than HPI, with beneficial pharmacological action and diminished adverse effects. Considering that UFHs are commonly administered IV, another challenge we overcame here was to ensure that LABH is satisfactorily absorbed after SC administration, an essential route for outpatient use over long periods of time.
Finally, our study is an example of combining the detailed structural analysis of heparin preparations with testing their pharmacological effects on both in vitro assays and specific animal experimental models. This is an approach to obtain more accurate information on the structure versus specific biological effects, which may lead to the development of new heparins with practical use in medicine.
What Is Known about This Topic?
• Heparin has several pharmacological properties beyond its anticoagulant activity and is of particular interest as a new anticancer agent.
• However, the high anticoagulant potency of heparin makes the doses necessary to exert non-hemostatic effects unsafe due to an elevated bleeding risk.
• Development of new heparins with decreased anticoagulant activity could make therapeutic use for treating non-thromboembolic diseases safer.
What Does This Paper Add?
• We demonstrate that a low-anticoagulant bovine heparin derivative is suitable for the subcutaneous administration required for outpatient/long-term treatments.
• Even high subcutaneous doses of this derivative do not provoke bleeding.
• This derivative was able to delay the early progression of Lewis lung carcinoma, improve survival, and bring health benefits with minimal adverse effects.
• Such a low-anticoagulant heparin is a promising candidate for development as a new therapeutic agent for the treatment of non-thromboembolic diseases.
Conflict of Interest
None declared.
Potentiation of the Anticancer Effects by Combining Docetaxel with Ku-0063794 against Triple-Negative Breast Cancer Cells
Purpose mTORC1 and mTORC2 inhibition by Ku-0063794 could confer profound anticancer effects against cancer cells because it eliminates feedback activation of Akt. Herein, we aimed to determine the anticancer effects of docetaxel and Ku-0063794, individually or in combination, against breast cancer cells, especially triple-negative breast cancer (TNBC) cells. Materials and Methods MCF-7 breast cancer and MDA-MB-231 TNBC cell lines were used for in vitro studies, and a mouse xenograft model was used for in vivo studies, to investigate the effect of docetaxel, Ku-0063794, or their combination. Results In the in vitro experiments, combination therapy synergistically reduced cell viability and induced higher apoptotic cell death in breast cancer cells than the individual monotherapies (p < 0.05). Western blot analysis and flow cytometric analysis showed that the combination therapy induced higher apoptotic cell death than the individual monotherapies (p < 0.05). In the in vivo experiment, docetaxel and Ku-0063794 combination therapy reduced the growth of MDA-MB-231 cells xenografted in nude mice better than the individual monotherapies (p < 0.05). Immunohistochemistry showed that the combination therapy induced the highest expression of cleaved caspase-3 and the lowest expression of Bcl-xL in the MDA-MB-231 cells xenografted in the nude mice (p < 0.05). Western blot analysis and immunofluorescence, incorporating both in vitro and in vivo experiments, consistently validated that, unlike the individual monotherapies, docetaxel and Ku-0063794 combination therapy significantly inhibited epithelial-mesenchymal transition (EMT) and autophagy (p < 0.05). Conclusion These data suggest that docetaxel and Ku-0063794 combination therapy has higher anticancer activity than the individual monotherapies against MDA-MB-231 TNBC cells through a greater inhibition of autophagy and EMT.
Introduction
Triple-negative breast cancer (TNBC) is a subtype of breast cancer defined by the negative expression of estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor 2 (HER2) [1,2]. Although TNBCs account for approximately 15%-20% of all breast cancer cases, they are the most challenging as they confer a poor survival outcome due to the limited number of available therapeutic options [1,3]. Since TNBCs lack ER, PR, and HER2, chemotherapy occupies the central position in the treatment of TNBCs. Anthracycline- and taxane-based chemotherapy have been the core of the chemotherapy regimens used for TNBCs [3][4][5]. However, patients with TNBC still have very limited chemotherapy options compared with patients with non-TNBC subtypes after tumor recurrence and metastasis, with a median response duration of only 3 months [3][4][5]. Furthermore, since anthracycline-based regimens have cardiotoxicity, it is essential to develop a novel regimen or to appropriately combine treatment regimens to limit the dose of anthracycline-based regimens [6,7].
As a therapeutic target in TNBCs, the mammalian target of rapamycin (mTOR) protein kinase attracts attention because it lies at the center of TNBC signal transduction [6,8]. Among the mTOR inhibitors, everolimus, an allosteric mTOR complex 1 (mTORC1)-specific inhibitor, has been approved for clinical use against ER-positive breast cancer together with exemestane (an aromatase inhibitor), and is currently in clinical trials against TNBC in combination with other chemotherapy regimens [6]. mTOR forms two distinct cell signaling complexes, mTORC1 and mTORC2, both of which are essentially involved in cell proliferation. Therefore, targeting mTORC1 alone may result in drug resistance due to compensatory activation of mTORC2 signaling [8][9][10]. Thus, dual mTORC1 and mTORC2 inhibitors could improve the therapeutic efficacy of mTOR-targeted treatment by eliminating feedback activation of Akt. Ku-0063794 is a highly specific, ATP-competitive mTOR inhibitor that targets both the mTORC1 and mTORC2 complexes [8]. Ku-0063794 inhibits the phosphorylation of S6K1 and 4E-BP1, which are downstream substrates of mTORC1, and also inhibits Akt phosphorylation on Ser473, which is the target of mTORC2 [8]. Based on these findings, we hypothesized that a combination of Ku-0063794 with docetaxel (a taxane-based regimen) could have enhanced anticancer activity against TNBCs. Thus, we investigated the anticancer effects and mechanism of Ku-0063794 and docetaxel treatment against TNBC cells in vitro and in vivo.
Quantification of apoptotic cell death by flow cytometry
To detect apoptotic cell death, MCF7 and MDA-MB-231 breast cancer cells were stained with annexin V/propidium iodide (PI). After incubation for 10 minutes in the dark at 25°C, the cells were analyzed using an Attune NxT acoustic focusing cytometer (Thermo Fisher Scientific, Waltham, MA).
Cell migration assay
MCF7 and MDA-MB-231 breast cancer cell line migration was analyzed using the in vitro wound healing assay. Cells were grown to confluence in 6-well plates and changed to serum-free medium for an additional 24 hours. Cell monolayers were scraped with a micropipette tip and treated with Ku-0063794, docetaxel, or a combination of both agents. The wound area was photographed using phase-contrast microscopy before and 24 hours after the treatment. The percentage of wound closure was determined as: [(initial area − final area)/initial area] × 100 [11].
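For clarity, the wound-closure percentage defined above can be computed with a few lines of Python; the function name and the example areas are illustrative, not values from the study:

```python
def wound_closure_percent(initial_area: float, final_area: float) -> float:
    """Percentage of wound closure: [(initial area - final area) / initial area] x 100."""
    if initial_area <= 0:
        raise ValueError("initial_area must be positive")
    return (initial_area - final_area) / initial_area * 100.0

# Illustrative values: a scratch shrinking from 1.00 mm^2 to 0.40 mm^2
print(round(wound_closure_percent(1.00, 0.40), 1))  # 60.0
```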
Immunofluorescence and immunohistochemistry
MCF7 and MDA-MB-231 breast cancer cell lines were cultured on Lab-Tek chamber slides (Thermo Fisher Scientific). The cells were washed three times with phosphate-buffered saline (PBS), fixed with 4% paraformaldehyde for 20 minutes, and permeabilized with 0.3% Triton X-100 for 10 minutes. After blocking with 0.2% bovine serum albumin for 1 hour at 25°C, the slides were incubated with the antibodies against E-cadherin, vimentin, snail, LC3B, p62, and glyceraldehyde 3-phosphate dehydrogenase (1:200 dilution) at 4°C overnight. The slides were washed three times with PBS and incubated with Alexa Fluor 488-or Alexa Fluor 594-conjugated secondary antibodies (1:500 dilution) for 1 hour at 25°C; the nuclei were counter-stained with DAPI-containing VECTASHIELD Mounting Medium (Vector Laboratories) for 1 minute. The samples were observed using a fluorescence imaging system (EVOS U5000, Invitrogen, Carlsbad, CA) to analyze the expression of these markers.
For immunohistochemistry, paraffin-embedded tissue sections were deparaffinized in xylene and rehydrated in a series of graded ethanol. The antigen was retrieved with 0.01 M citrate buffer (pH 6.0) by heating the sample in a microwave. The tissue sections were then placed in 3% hydrogen peroxide for 5 min to inactivate the endogenous peroxidase, blocked for 10 minutes with normal horse serum (Vector Laboratories) and incubated with the primary antibodies against cleaved caspase-3 and Bcl-xL overnight at 4°C. The slides were then treated with the biotinylated secondary antibody for 30 minutes at 25℃, followed by streptavidin-HRP and 3,3′-diaminobenzidine solution for another 10 minutes at 25℃.
In vivo xenograft model
BALB/c nude mice (5 weeks old) were used for comparative in vivo experiments. The mice were treated with Ku-0063794 or docetaxel monotherapy (1 mg/kg in 100 μL normal saline, 3 times a week) or a combination of both agents (1 mg/kg Ku-0063794 combined with 1 mg/kg docetaxel in 100 μL normal saline, 3 times a week) for 3 weeks. Tumor size was measured twice weekly using a caliper. After the completion of treatment, all mice were euthanized.
Statistical analysis
All data were analyzed with SPSS ver. 11.0 software (SPSS Inc., Chicago, IL), and are presented as mean±standard deviation. Statistical comparison among the groups was performed using Kruskal-Wallis test. Probability values of p < 0.05 were regarded as statistically significant.
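As a reference for the group comparison described above, the Kruskal-Wallis test ranks the pooled observations and compares the rank sums per group; the sketch below is a minimal pure-Python version of the H statistic (no tie correction), and the group values in the example are hypothetical, not measurements from this study:

```python
def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic for k independent samples (no tie correction).
    Compare H against the chi-square critical value with k - 1 degrees of
    freedom (e.g., 7.815 for k = 4 groups at alpha = 0.05)."""
    data = sorted((x, gi) for gi, g in enumerate(groups) for x in g)
    rank_sums = [0.0] * len(groups)
    for rank, (_, gi) in enumerate(data, start=1):  # ranks 1..N (assumes no ties)
        rank_sums[gi] += rank
    n = len(data)
    return 12.0 / (n * (n + 1)) * sum(
        rs * rs / len(g) for rs, g in zip(rank_sums, groups)
    ) - 3.0 * (n + 1)

# Hypothetical tumor sizes (mm^3) for a control and three treatment groups
h = kruskal_wallis_h([310, 295, 330, 320], [250, 240, 265, 255],
                     [260, 256, 270, 248], [180, 175, 190, 170])
print(round(h, 3))
```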
Cell viability after docetaxel and Ku-0063794 mono-and combination therapy
First, the effects of mono- and combination therapy comprising Ku-0063794 and docetaxel on the viability of MCF7 and MDA-MB-231 cells were investigated (Fig. 1). Docetaxel and Ku-0063794 monotherapies reduced the viability of these cells in a concentration- and time-dependent manner. The effects of docetaxel and Ku-0063794 combination therapy differed according to cell type. Following the combination therapy, MDA-MB-231 cells showed significantly decreased cell viability at lower concentrations than MCF-7 cells (p < 0.05), suggesting that the combination therapy has a stronger synergistic effect on MDA-MB-231 cells than on MCF-7 cells.
Apoptosis following mono-and combination therapies
The effects of mono-and combination therapy on cell apoptosis of MCF7 and MDA-MB-231 cells were determined using western blot analysis and flow cytometry, respectively. Western blot analysis revealed that combination therapy increased the expression of PARP (a pro-apoptotic marker) and decreased the expression of Mcl-1 (an anti-apoptotic marker) with the increasing concentration of docetaxel in the combination therapy ( Fig. 2A). This pattern was not significantly different between the MCF-7 and MDA-MB-231 cells (Fig. 2B). Subsequently, Annexin V/PI double staining was used in flow cytofluorimetric analyses to detect apoptotic cell death ( Fig. 2C and D). In both the cell groups, the population of Annexin V-positive cells (early and late apoptotic cells) tended to increase with the increasing concentration of docetaxel in the combination therapy, and this trend was more pronounced for the MDA-MB-231 cells than for the
Epithelial-mesenchymal transition and cell migration following mono-and combination therapies
Western blot analysis was performed to determine the effects of mono-and combination therapy on the expression of the epithelial-mesenchymal transition (EMT)-related markers in the MCF7 and MDA-MB-231 cells. In the MCF-7 cells, individual monotherapies decreased the expression of E-cadherin (an epithelial marker) and snail (a mesenchymal marker) and increased the expression of vimentin (a mesenchymal marker), suggesting a tendency of EMT promotion (Fig. 3A). By contrast, combination therapy increased the expression of E-cadherin and decreased the expression of vimentin and snail compared with the expression of these markers following individual monotherapies, suggesting
the effect of ameliorating EMT by combination therapy. This trend was more pronounced in MDA-MB-231 TNBC cells (Fig. 3B). Subsequently, we investigated cell migration following either mono- or combination therapies (Fig. 3C and D). In both types of cell lines, combination therapy was found to significantly reduce cell migration as compared to the individual monotherapies (p < 0.05).
Changes in autophagic markers following mono-and combination therapies
Autophagy induction leads to the upregulation of LC3B and downregulation of p62. We thus compared the expression of the autophagy markers LC3B and p62 in each cell type following docetaxel and Ku-0063794 treatment, either individually or in combination, using western blot analysis. In the MCF-7 cells, individual monotherapies resulted in higher expression of LC3B and lower expression of p62 compared to the controls, suggesting autophagy induction (Fig. 4A). However, combination therapy resulted in significantly lower expression of LC3B and higher expression of p62 compared to the controls, suggesting autophagy inhibition (p < 0.05). In MDA-MB-231 cells, the upregulation of LC3B and downregulation of p62 were remarkable following docetaxel monotherapy (Fig. 4B). As with the MCF7 cells, combination therapy resulted in significantly lower expression of LC3B and higher expression of p62, suggestive of autophagy inhibition (p < 0.05).
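The marker logic used here (LC3B up with p62 down indicating induction; LC3B flat or down with p62 up indicating inhibition) can be expressed as a small helper function; the fold-change threshold and the sample values below are illustrative assumptions, not data from this study:

```python
def autophagy_status(lc3b_fold: float, p62_fold: float, threshold: float = 1.2) -> str:
    """Classify autophagy from marker fold-changes relative to untreated control.
    The 1.2-fold cutoff is an arbitrary illustrative threshold."""
    up = lambda fold: fold >= threshold
    down = lambda fold: fold <= 1.0 / threshold
    if up(lc3b_fold) and down(p62_fold):
        return "induced"       # e.g., the docetaxel-monotherapy pattern
    if not up(lc3b_fold) and up(p62_fold):
        return "inhibited"     # e.g., the combination-therapy pattern
    return "indeterminate"

print(autophagy_status(2.1, 0.5))  # induced
print(autophagy_status(1.0, 1.8))  # inhibited
```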
Validation of combination therapy anticancer effects in vivo
The next investigation focused on the effects of docetaxel and Ku-0063794, administered either individually or in combination, on the growth of MCF-7 and MDA-MB-231 cells xenografted in nude mice. Xenograft models were generated by a 4-week intraperitoneal administration of MCF-7 and MDA-MB-231 cells (5×10 6 cells/mouse in 100 µL normal saline, three times a week). After the intraperitoneal administration of docetaxel (1 mg/kg/day) and Ku-0063794 (1 mg/kg/day) three times a week for 3 weeks, the mice were euthanized and the tumors were collected. Images of the tumors after necropsy showed that shrinkage was most prominent in the mice treated with combination therapy compared with the mice treated with individual monotherapies (Fig. 5A). In both types of breast cancer cells, a considerable reduction in tumor size was observed in the mice treated with combination therapy compared with the mice treated with individual monotherapies (p < 0.05) (Fig. 5B). Collectively, the data presented here show that combination therapy has a higher potential to reduce the growth of xenografted MCF-7 and MDA-MB-231 cells than individual monotherapies in the nude mouse model.
Comparison of each group by tissue staining
Next, the xenografted MDA-MB-231 TNBC cells were stained to compare the effects of individual mono- and combination therapies (Fig. 5C). In hematoxylin and eosin staining, the number of tumor cells was significantly reduced after the combination therapy as compared to the individual monotherapies (p < 0.05) (Fig. 5C, left). Immunohistochemistry revealed that the expression of cleaved caspase-3 (a pro-apoptotic marker) was significantly higher in the mice with combination therapy than in those with individual monotherapies (p < 0.05) (Fig. 5C, middle). Furthermore, the expression of Bcl-xL (an anti-apoptotic marker) was significantly lower in the mice with combination therapy than in those with individual monotherapies (p < 0.05) (Fig. 5C, right).
Validation of autophagy inhibition by combination therapy in vivo
The expression of autophagy markers, LC3B and p62, were determined in the MDA-MB-231 cells xenografted in the nude mice (Fig. 6A). Western blot revealed that the expression of LC3B was significantly increased in the docetaxel group but was not significantly changed in the Ku-0063794 and combination groups. In addition, the expression of p62 was decreased following individual monotherapies and significantly increased following combination therapy, suggesting that autophagy is inhibited after combination therapy. Immunofluorescence results confirmed the western blot analysis (Fig. 6B); combination therapy did not increase the expression of LC3B, but increased the expression of p62, suggesting the inhibitory effects on autophagy by combination therapy.
Validation of the effects of combination therapy on EMT in vivo
Finally, western blot analysis was carried out to determine the effects of mono- and combination therapies on the expression of EMT-related markers in the MDA-MB-231 cells xenografted in the nude mice (Fig. 7A). In this experiment, an increased epithelial marker (E-cadherin) and decreased mesenchymal markers (vimentin and snail) were considered to indicate the inhibition of EMT. KU monotherapy slightly decreased EMT, considering the higher expression of E-cadherin and the lower expression of snail. By contrast, docetaxel monotherapy significantly increased EMT, considering the lower expression of E-cadherin and the higher expression of vimentin and snail (p < 0.05). The combination therapy led to a significant reduction of EMT, considering the higher expression of E-cadherin and the lower expression of vimentin and snail (p < 0.05). These results were also re-affirmed by E-cadherin, vimentin, and snail immunofluorescence staining of the MDA-MB-231 cells xenografted in the nude mice (Fig. 7B). Western blot analysis and flow cytometric analysis showed that the combination therapy induced higher apoptotic cell death than the individual monotherapies. Moreover, western blot analysis demonstrated that while each monotherapy increased the processes of EMT and autophagy, combination therapy decreased both.
In the in vivo experiment, combination therapy was shown to have a higher potential to reduce the growth of xenografted MDA-MB-231 cells than individual monotherapies. In addition, the results of the in vivo experiments were consistent with the in vitro experiments, including the significantly higher inhibitory effects of the combination therapy on autophagy and EMT. Taken altogether, these data suggest that docetaxel and Ku-0063794 combination therapy has higher anticancer activity than individual monotherapy against MDA-MB-231 TNBC cells through the increased inhibition of autophagy and EMT processes. mTOR complexes (mTORC1 and mTORC2) are essential mediators in the phosphoinositide 3-kinase, protein kinase B (Akt/PKB), and mTOR signaling pathway, and are crucial for cell growth, survival, motility, proliferation, protein synthesis, and transcription [12]. The first-generation mTOR inhibitors that primarily target mTORC1 (such as temsirolimus and everolimus) are already FDA-approved for clinical use [12]. Unfortunately, the effectiveness of these inhibitors as single-agent therapy is stifled in part by strong mTORC1-dependent negative feedback loops that become inactive on mTORC1 inhibition [8][9][10][12]. In a recent randomized phase II trial, the paclitaxel/cisplatin and everolimus combination was associated with more adverse events without improvement in pathological complete response or clinical response in patients with TNBC [13]. Therefore, the present study focused on the combined use of docetaxel (a taxane-based regimen) and Ku-0063794 (a novel mTORC1/2 dual inhibitor) as new therapeutic agents for the treatment of TNBC. Ku-0063794 inhibits Akt as well as the serum- and glucocorticoid-inducible protein kinases that are likely to play vital roles in driving the proliferation of many cancers [8,12,14,15].
The finding that Ku-0063794 induces more marked dephosphorylation of 4E-BP1 than rapamycin also holds promise that this drug would be more effective at suppressing protein synthesis required for growth and proliferation of cancer cells than rapamycin derivatives [8,15].
In this study, it was found that whereas individual monotherapies either slightly decreased or increased EMT, combining both medications significantly reduced EMT of TNBC cells. EMT guides the transformation of non-mobile, epithelial-like cells into mobile, mesenchymal-like cells, allowing tumor cells to acquire the capacity to infiltrate surrounding tissues and to metastasize to distant sites [16]. EMT is basically characterized by the loss of epithelial marker expression (downregulation of E-cadherin, occludins, claudins, desmoplakin, and epithelial cytokeratins) and the gain of mesenchymal marker expression (upregulation of vimentin, N-cadherin, fibronectin, and α-smooth muscle actin) in breast cancers [17,18]. The EMT process is influenced by a variety of medications, including chemotherapeutic regimens. In our study, docetaxel showed a tendency to increase EMT in breast cancer cells. It has been noted that docetaxel can induce autophagy as well as apoptosis of cancer cells [19,20]. The autophagy promoted by docetaxel is implicated in cancer cell resistance to chemotherapy and thus could be related to an increase in EMT of cancer cells. In addition, the same medication can have different effects on the expression of EMT markers when co-administered with other medication(s). In our previous study, it was demonstrated that although everolimus and Ku-0063794 monotherapies did not significantly affect EMT in hepatocellular carcinoma cells, combining both medications significantly reversed the EMT process [21]. Similarly, in this study, it was found that although docetaxel and Ku-0063794 monotherapies could not inhibit EMT, combining both medications effectively inhibited EMT. We believe that the EMT-inhibiting ability of combination therapy could have contributed to its high synergistic effect.
Autophagy is the major cellular catabolic degradation process in response to stressors such as nutrient deprivation and bioenergetic stress [22,23]. mTORC1 coordinates both anabolism and catabolism to meet the needs of cell growth. mTORC1 also inhibits autophagy induction, primarily by (1) inhibiting ULK1/2 and the VPS34 complex, and (2) by preventing global expression of lysosomal and autophagyrelated genes through transcription factor EB phosphorylation [22][23][24]. In addition, it is generally accepted that mTORC2 also inhibits autophagy by activating mTORC1 through its interaction with ribosomes [24]. Thus, mTOR inhibitors basically have pro-autophagic properties. This experiment showed that Ku-0063794 had pro-autophagic effects in TNBC cells as demonstrated by up-and downregulation of LC3B and p62, respectively. Docetaxel is also known to have pro-autophagic properties [25]. Autophagy could be used by tumor cells as a way of escaping apoptotic cell death caused by chemotherapeutic regimens, thereby promoting chemoresistance. Hu et al. [25] reported that docetaxel-mediated autophagy induced chemo-resistance in castration-resistant prostate cancer cells, and inhibiting autophagy led to the increased chemo-sensitivity.
Our results show that whereas Ku-0063794 and docetaxel monotherapies increased autophagy, combining both medications reduced autophagy. It occasionally occurs that combining several medications leads to alterations in the mechanism of action. This inhibition of autophagy was accomplished by down-regulating SIRT1. SIRT1 is an essential element of autophagic processes because it significantly contributes to autophagy by deacetylating essential autophagy-related proteins, such as Atg5, Atg7, and Atg8 [26]. While everolimus and Ku-0063794 monotherapies were found to increase SIRT1, combining both resulted in reduced expression of SIRT1 [27]. Likewise, in our study, combining docetaxel and Ku-0063794 led to the inhibition of autophagy, the mechanism of which needs to be further validated. We believe that the reduced autophagy in combination therapy could have contributed to its synergistic anticancer effect.
In conclusion, this study showed that docetaxel and Ku-0063794 combination therapy has superior anticancer activity over individual monotherapies against MDA-MB-231 TNBC cells. Specifically, in the in vitro experiments using MDA-MB-231 cells, the combination therapy was found to synergistically reduce cell viability and induce higher apoptotic cell death than the individual monotherapies. Moreover, in the in vivo experiment, combination therapy was shown to have a higher potential to reduce the growth of xenografted MDA-MB-231 cells than the individual monotherapies. In addition, both in vitro and in vivo experiments consistently validated that, unlike the individual monotherapies, docetaxel and Ku-0063794 combination therapy significantly inhibited the processes of EMT as well as autophagy. Taken altogether, these data suggest that docetaxel and Ku-0063794 combination therapy has higher anticancer activity than the individual monotherapies against MDA-MB-231 TNBC through the increased inhibition of autophagy and EMT processes.
Ethical Statement
Animal studies were carried out in compliance with the guidelines of the Institute for Laboratory Animal Research, Korea (IRB No: CUMC-2018-0332-02).
Author Contributions
Conceived and designed the analysis: Kim SJ.
Conflicts of Interest
Conflict of interest relevant to this article was not reported.
Powder and Nanotubes Titania Modified by Dye Sensitization as Photocatalysts for the Organic Pollutants Elimination
In this study, titanium dioxide powder obtained by the sol-gel method and TiO2 nanotubes were prepared. In order to increase the TiO2 photoactivity, the powders and nanotubes obtained were modified by dye-sensitization treatment during the oxide synthesis. The sensitizers applied were Quinizarin (Q) and Zinc protoporphyrin (ZnP). The materials synthesized were extensively characterized, and it was found that the dye-sensitization treatment modifies the optical and surface properties of Titania. It was also found that the effectiveness of the dye-sensitized catalysts in phenol and methyl orange (MO) photodegradation strongly depends on the dye sensitizer employed. Thus, the highest degradation rate for MO was obtained over the conventional Q-TiO2 photocatalyst. In the case of the nanotube series, the most effective photocatalyst in MO degradation was based on TiO2 nanotubes sensitized with the dye Zinc protoporphyrin (ZnP). Selected catalysts were also tested in phenol and MO photodegradation under visible light, and it was observed that these samples are also active under this radiation.
Introduction
As a result of industrial and human activities, large volumes of wastewater containing different pollutants are discharged every year, significantly affecting environmental stability. Therefore, different alternatives have been employed for the treatment of urban and industrial wastewater. Despite this, in most cases the pollutants are recalcitrant and non-biodegradable, and the formation of more toxic products during the treatment is often noticeable [1,2]. Phenolic compounds and dyestuffs are common organic pollutants in wastewater, which have proven hard to remove. In this context, it is currently necessary to search for more suitable and effective methods for the removal of environmental pollutants.
TiO 2 photocatalysis has been successfully applied as an eco-friendly and efficient alternative in the treatment of wastewater sources and polluted atmospheres [3][4][5][6][7][8][9][10][11][12]. TiO 2 has been considered as the best photocatalyst, the photoactivity of this oxide depends not only on its physicochemical properties but also on its structure. TiO 2 -based nanotubes production has been extensively studied by Kasuga et al. [13,14]. Since 1998, these authors have demonstrated that these nanotubes have great potential for use in the preparation of catalysts, adsorbents, and deodorants with high activities because their specific surface area is greatly increased. TiO 2 -based nanotubes have attracted great interest to be used as photocatalysts in environmental applications [15][16][17][18][19][20]. Currently, the advances in the study of nanotubes have been focused in self-organized anodic TiO 2 nanotube layers and the use of atomic layer deposition for the functionalization of these layers. By using this technique, important advances in the improvement of physicochemical, photoelectrochemical and photocatalytic properties of Titania have been achieved [21]. Additionally, other photocatalysts based on nanowire arrays like structures or heterostructures have also been studied [22][23][24].
In order to modify TiO 2 structure, the preparation of layered titanates have also received great attention, mainly due to the high ability to ion exchange/intercalation reactions and potential applications in the synthesis of new nanomaterials. By using bulk organic molecules in intercalation processes, it is possible to produce TiO 2 single sheets with 2D morphology [20].
TiO2 presents some disadvantages, such as a high recombination rate of the photogenerated charges and a large band gap value. Consequently, in order to solve these problems and increase the TiO2 photoactivity, many strategies have been employed, among which noble metal addition is usually a good alternative to reduce recombination [25,26]. Currently, dye sensitization also represents a good way to increase the TiO2 photoefficiency in different chemical reactions [27]. Sensitization increases visible-light absorption because the sensitizer applied in photocatalytic processes is usually a chromophore compound anchored to the semiconductor surface. Such dye molecules absorb visible light and excite electrons, which are transferred to the conduction band of the semiconductor, effectively decreasing its band gap value [28].
According to the scenario presented above, the main objective of this research was to study the effectiveness of photocatalysts based on sol-gel synthesized TiO 2 powders and TiO 2 nanotubes, in the phenol and methyl orange photodegradation in the liquid phase. As a strategy to increase the activity of the photocatalysts, these materials were modified by sensitization with different dyes. The dyes selected was also employed as bulky molecules to induce the obtention of layered titanates.
Photocatalysts Preparation
The sol-gel TiO2 powder was obtained by controlled hydrolysis of 1 mol of titanium butoxide (IV). As hydrolysis rate controllers, acetic acid (4 mol) and ethanol (5 mol) were added. After homogenization, 8 mol of distilled water was added drop by drop, and the hydrolysis was maintained under continuous stirring for 3 h. The powders thus obtained were recovered by filtration and dried at 80 °C. Moreover, in order to obtain the sensitized TiO2, 5 mmol of Quinizarin (Q) or Zinc protoporphyrin (ZnP) was incorporated into the suspension before the hydrolysis step. The samples were labeled Q and ZnP, and Figure 1 shows the structures of the sensitizer molecules. As indicated in the introduction, in the present work we attempted to prepare layered titanates by using bulky chemicals for the separation of these layers. To achieve this objective, we selected two different organic molecules, Quinizarin and Zinc protoporphyrin. These molecules present marked differences in size and chemical composition, so it was interesting to study their effect on the structure of the Titania obtained and also on its photoactivity.
In order to convert the as-prepared sol-gel TiO 2 powders into titania nanotubes, the sol-gel product was dispersed in a 7 M NaOH aqueous solution and maintained under stirring for 24 h. After this, washing with distilled water was carried out and the material thus obtained was dried at 80 °C. The samples obtained in the form of nanotubes were labeled as Qa and ZnPa.
The control samples produced without colorants, i.e., TiO 2 powders and TiO 2 nanotubes obtained from the as-received sol-gel material and its NaOH-treated counterpart, were labeled as LT and TNT, respectively.
Photocatalysts Characterization
The materials synthesized were characterized by means of the different techniques described below.
X-ray diffraction (XRD) analysis was performed on an X'PertPro PANalytical instrument (Malvern, UK). Diffraction patterns were recorded with Cu Kα radiation (40 mA, 45 kV) over a 2θ range of 3°-60°, using a position-sensitive detector with a step size of 0.05° and a step time of 240 s.
All materials were analyzed by UV-Vis diffuse reflection spectroscopy by using a Varian spectrometer model Cary 100 (Palo Alto, CA, USA) and a BaSO 4 sphere as the reference. All the spectra were collected in diffuse reflectance mode and transformed to a magnitude proportional to the extinction coefficient through the Kubelka-Munk function.
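As a concrete illustration of the Kubelka-Munk transform mentioned above, the short sketch below converts diffuse reflectance values into F(R); the reflectance values are hypothetical and serve only to show the shape of the calculation:

```python
import numpy as np

def kubelka_munk(reflectance):
    """Kubelka-Munk function F(R) = (1 - R)^2 / (2R), which converts
    diffuse reflectance R (as a fraction, 0-1) into a magnitude
    proportional to the absorption/extinction coefficient."""
    r = np.asarray(reflectance, dtype=float)
    return (1.0 - r) ** 2 / (2.0 * r)

# Hypothetical reflectance values: low reflectance -> high apparent absorption
R = np.array([0.9, 0.5, 0.2])
print(kubelka_munk(R))
```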
The textural properties were studied by means of N 2 adsorption-desorption measurements at liquid nitrogen temperature. The experiments were carried out on a Micromeritics ASAP 2010 instrument (Norcross, GA, USA). Before analysis, the samples were degassed for 2 h at 150 °C under vacuum.
All photocatalysts were also evaluated by Transmission Electron Microscopy (TEM) in a Philips CM200 instrument (Amsterdam, The Netherlands). For this analysis, the samples were dispersed in ethanol using ultrasound and dropped onto a carbon grid.
Photocatalytic Tests
The photocatalytic activity of the synthesized catalysts was measured in the phenol and methyl orange (MO) photodegradation reactions. Both processes were carried out using a discontinuous batch system, comprising a 400 mL pyrex reactor enveloped in aluminum foil and filled with an aqueous suspension (250 mL) containing 25 ppm of phenol or MO and the photocatalyst (1 g/L). This system was illuminated through a UV-transparent Plexiglas® top window (threshold absorption at 250 nm) by a 300 W Osram Ultra-Vitalux lamp (Munich, Germany) with a sun-like radiation spectrum and a main line in the UVA range at 365 nm. The intensity of the incident UV-Visible light on the solution was measured with a Delta OHM photoradiometer HD2102.1 (Caselle di Selvazzano, Padova, Italy), being ca. 120 W/m 2 . The visible-light photocatalytic experiments were performed using a polyester UV filter sheet (Edmund Optics, Barrington, NJ, USA) showing 99.9% absorbance below 400 nm (0.15 W/m 2 for λ < 400 nm and 150 W/m 2 for λ > 400 nm). In order to favor the adsorption-desorption equilibrium, prior to irradiation, the suspension was magnetically stirred for 10 min in the dark. Furthermore, a constant oxygen flow of 25 L/h, used as the oxidant, was passed through the suspension to improve the homogeneous dispersion of the photocatalyst in the solution. For this purpose, a bubbler tank was used as a source of natural oxygen. All photocatalytic tests started at the natural pH of the pollutant solutions, which was ca. 6, and the total reaction time was 120 min.
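As a rough back-of-the-envelope check on the irradiation conditions, the measured irradiance can be converted into a photon flux. The sketch below assumes, for simplicity, that all of the ~120 W/m 2 arrives at the 365 nm main line, which overestimates the UVA photon flux of the real polychromatic lamp:

```python
# Approximate photon flux assuming monochromatic 365 nm irradiation
# (a simplification: the Ultra-Vitalux lamp is polychromatic).
h = 6.626e-34        # Planck constant, J*s
c = 2.998e8          # speed of light, m/s
wavelength = 365e-9  # UVA main line, m
irradiance = 120.0   # measured incident intensity, W/m^2

energy_per_photon = h * c / wavelength        # ~5.4e-19 J (~3.4 eV)
photon_flux = irradiance / energy_per_photon  # photons / (m^2 * s)
print(f"{energy_per_photon / 1.602e-19:.2f} eV, {photon_flux:.2e} photons/m^2/s")
```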
During the phenol and MO photoreactions, samples were collected at different times and analyzed by UV-Visible spectrophotometry, considering the main absorption bands observed for these compounds, located at 270 and 465 nm for phenol and MO, respectively. For these analyses, a Genesys 10UV Thermo Electron instrument (Waltham, MA, USA) was used. Taking into account the Lambert-Beer law, which states that the absorbance is proportional to the concentration, the evolution of the pollutant concentration as a function of the reaction time was calculated from the calibration curve obtained from the UV-Vis analyses. The pollutant photodegradation rate was also determined by using Equation (1):

v = k · C 0 · V (1)

where v = photodegradation rate, k = initial reaction constant, taken from the slope of the graph representing concentration vs. reaction time (s −1 ), C 0 = initial concentration of the substrate (phenol or MO) (mol/L), and V = volume of the phenol or MO solution (L). Photolysis tests of phenol and MO under UV-Visible light and in the absence of catalyst were carried out. Reproducibility of the measurements was ensured by double testing of selected samples.

Figure 2 shows the X-ray diffraction patterns of the analyzed photocatalysts. The bare titania material (Figure 2a) exhibits a low-crystalline sol-gel synthesized structure. However, the production of a lamellar structure can be noticed from the diffraction observed at low 2θ angles. This diffraction can be ascribed to the (0k0) planes of a body-centered orthorhombic, lepidocrocite-like titanate structure, commonly known as lamellar titanates. This structure was also confirmed by the asymmetric line shape of the reflection tailing toward higher diffraction angles, typical of the two-dimensional lattice of layered titanates.
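The concentration tracking and rate determination described above can be sketched as follows. The absorbance readings and the initial MO concentration are hypothetical illustrations, and apparent first-order behavior is assumed:

```python
import numpy as np

# Hypothetical absorbance readings of MO at 465 nm during irradiation.
t_s = np.array([0, 15, 30, 60, 90, 120]) * 60.0      # reaction time, s
A = np.array([0.80, 0.66, 0.55, 0.37, 0.25, 0.17])   # absorbance, a.u.

# Lambert-Beer: absorbance is proportional to concentration, so C/C0 = A/A0.
C0 = 7.6e-5   # assumed initial MO concentration, mol/L (roughly 25 ppm)
V = 0.250     # irradiated suspension volume, L
C = C0 * A / A[0]

# Apparent first-order rate constant k (s^-1) from the slope of ln(C0/C) vs t.
k, _ = np.polyfit(t_s, np.log(C0 / C), 1)

# Photodegradation rate as in Equation (1): v = k * C0 * V  (mol/s)
v = k * C0 * V
print(f"k = {k:.2e} s^-1, v = {v:.2e} mol/s")
```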
Its treatment with NaOH (TNT sample) increases the intensity of the reflection and generates the appearance of diffraction lines associated with the (110), (130) and (200) crystallographic planes, being the last one unequivocal sign of titania nanotubes production [20].
When the colorants are used (Q and ZnP samples, Figure 2b), the same diffractions as for the LT sample can be noticed, although a shift to lower 2θ angles is observed, suggesting an increase of the interplanar distance due to the hosting of the colorant molecules. The interplanar lamellar distance of titania increases with the kinetic diameter of the molecule, following the order bare titania (LT sample) < quinizarin (Q) < Zn protoporphyrin (ZnP) sensitized titania materials.
Moreover, after the NaOH treatment (Figure 2c), the ZnPa and Qa samples do not present the signal at 2θ ≈ 49° (fingerprint of titania nanotubes), indicating a low degree of titania nanotube production or the production of nanotubes with very low crystallinity [20]. The latter could be due to an important interaction of the colorants with the titania layers and difficulty in rolling up the nanotubes in the presence of bulky organic molecules. Additionally, the NaOH treatment also produces a slight color change in the material, due either to the loss of colorant or to the interaction of Na + with the chromophore molecules.
The Brunauer-Emmett-Teller (BET) specific surface areas of the samples are presented in Table 1. All starting materials (LT, ZnP, and Q) show specific surface areas ranging between 170 and 200 m 2 /g, which indicates the production of mesoporous materials with an average pore size around 5 nm. The NaOH treatment produces an important loss of specific surface area due to pore shrinking to around 2 nm for all samples (TNT, Qa, ZnPa). The optical properties of the samples were analyzed by means of UV-Vis diffuse reflection spectroscopy and the obtained spectra for all photocatalysts are shown in Figure 3.
The typical band edge of the TiO 2 semiconductor was observed for all samples near 350 nm. The sensitized materials also present absorption in the visible range (400-700 nm) compared to the bare titania materials (Figure 3a). The quinizarin sensitized titania shows maximal absorbance in the 450-570 nm range, which can be shifted as a function of the structuring charge [29,30]. Thus, neutral quinizarin absorbs in the 455-496 nm range, the monoanionic form at 554-569 nm and the dianionic form at 550 nm. The optical spectra of the Q sample indicate the presence of mono- and dianionic quinizarin species interacting with the Ti-O skeleton, whereas the Qa bands shift to lower wavelengths, indicating the loss of charge of the quinizarin molecules. The latter could be explained by the presence of Na + , which could compensate the charge of the colorant, thus producing the observed blue shift. However, the bands centered at 510 nm indicate either a partial quinizarin charge compensation, or that the blue shift could be produced by the confinement effect upon nanotube formation.
On the other hand, the metal porphyrins present two types of visible absorption: one band centered at around 410 nm, called the Soret band, and a group of three to four bands in the 500-650 nm range, called Q bands. Both types of bands are observed for our samples (ZnP, ZnPa) regardless of the treatment, indicating the presence of Zn protoporphyrin before and after the NaOH treatment without any significant change in its structure. However, the intensity of the Q bands decreases after the NaOH treatment, indicating either a lower quantity of colorant, or colorant energy levels closer to those of the metal and better orbital mixing due to its confinement in the resulting nanotubes.
The band gap energies were also calculated and the results are listed in Table 1. The LT and TNT samples show higher band gap energies (3.4-3.5 eV), reduced to 3.2 and 3.3 eV for the sensitized samples due to the introduction of colorant levels within the band gap of the pure titania structure. Figure 4 shows the absorption spectra used for the evaluation of the band gap energy.
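A common way to extract such band gap values from Kubelka-Munk spectra is a Tauc plot, extrapolating the linear region of (F(R)·hν)^(1/2) to zero. The sketch below uses entirely hypothetical edge data, with the fitting window chosen by inspection:

```python
import numpy as np

# Hypothetical Kubelka-Munk values near the absorption edge.
wavelength_nm = np.array([300, 320, 340, 350, 360, 380, 400])
F_R = np.array([2.0, 1.6, 1.1, 0.7, 0.35, 0.08, 0.02])

E_eV = 1239.84 / wavelength_nm        # photon energy, eV
y = (F_R * E_eV) ** 0.5               # Tauc ordinate (indirect allowed gap)

# Fit the linear part of the edge and extrapolate to y = 0: Eg = -b / m.
edge = slice(2, 6)                    # edge points, chosen by eye
m, b = np.polyfit(E_eV[edge], y[edge], 1)
Eg = -b / m
print(f"estimated band gap: {Eg:.2f} eV")
```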
Figure 5 shows selected TEM images for the TNT sample. As can be seen in these images, there are microchannels in the sample, showing that the preparation method led to nanotube-like structures in the analyzed materials.
Photocatalytic Activity
TiO 2 powders and nanotubes were extensively characterized using additional instrumental techniques; these results have been previously reported by Ivanova et al. [20]. Figure 6 represents the evolution of the methyl orange absorbance during 120 min of photoreaction time using commercial TiO 2 P25. As previously indicated in Section 2.3, the main absorption band located at 465 nm was employed to estimate the MO concentration. The intensity of this band decreases as the reaction time increases, which indicates the breaking of the chromophore group corresponding to the azo group (N=N) present in the MO molecule. It is important to note that the absence of new signals in the spectra confirms the degradation of the treated dye; no reaction intermediate product was detected even after 120 min of reaction time. The UV-Vis spectra obtained with the other photocatalysts are not included for the sake of brevity, but behavior similar to that described for P25 was observed.

The photocatalytic degradation rate of methyl orange over the analyzed catalysts is represented in Figure 7. The substrate photolysis is negligible, indicating that the presence of a photocatalyst is necessary to achieve a high degradation rate of MO.
In general, in the photocatalyst series prepared with conventional sensitized TiO 2 , the highest photodegradation rate for MO was obtained using quinizarin as the sensitizing agent (i.e., the Q catalyst). This is the result of the highest specific surface area of this material (Table 1), since the substrate-catalyst surface interaction is an important factor influencing the degradation rate: the adsorption of the dye is improved on the surface of the materials with the highest specific surface area, leading to an increase in the photodegradation rate.
Moreover, the catalysts based on sensitized TiO 2 nanotubes were also evaluated in the MO photodegradation and the results are included in Figure 7. As can be observed, the MO degradation rate over the photocatalysts based on TiO 2 nanotubes sensitized with quinizarin (Qa) and protoporphyrin (ZnPa) is significantly higher than that observed over the conventional catalysts. Within the series, the highest dye degradation was achieved using the ZnPa catalyst, which can be due to the higher S BET of this material compared to Qa (Table 1). Since lower specific surface areas were measured for the sensitized TiO 2 -nanotube samples in comparison to the conventional TiO 2 samples, their photocatalytic behavior may appear counterintuitive. However, the improvement of their activity could be assigned to the presence of sensitizing molecules within the nanotubes, whose electronic properties become promoted by the electron confinement effect in semiconductors [31]. It is also worth considering the lower electron-hole recombination rate of this material in comparison to the higher-surface materials, since the electrons can initially reach the dye before the conduction band. It is also possible that the sensitized photocatalytic materials simultaneously decompose both the methyl orange and the dyes used for the sensitization. In fact, it was observed that after the photocatalytic tests, the recovered materials presented a lighter color than they had initially, showing that the sensitizing dye can be modified during the photocatalytic reactions.
Commercial TiO 2 P25 Evonik was used as a reference, and in none of the photocatalytic tests was a degradation rate higher than that obtained with this commercial sample (P25) achieved. Figure 8 shows the evolution of the MO concentration with the photoreaction time; as can be observed, the highest degradation of the dye was achieved over the ZnPa catalyst.
As indicated in the experimental section, selected photocatalysts were also tested in the phenol and MO degradation reactions under UV-Visible and visible radiation, and the results are presented in Table 2. It can be observed that the degradation rate for MO over all the analyzed catalysts is higher than that observed in the case of phenol. This behavior can be due to a double sensitization phenomenon induced by the simultaneous presence of the sensitizing agent in the catalyst and the dye used as the substrate in the photoreaction system.
In the case of the photoreactions conducted under visible light, all the evaluated catalysts are active under this radiation, thus demonstrating the effectiveness of the dye sensitization treatment. It is also important to remark that, as expected, the MO and phenol degradation rates decrease under visible light.
Conclusions
Dye sensitization is an effective method to obtain lab-prepared TiO 2 catalysts that are active and effective in the photocatalytic treatment of water pollutants such as phenol and methyl orange.
The effectiveness of the sensitization depends on the correct selection of the sensitizer and it is also influenced by the substrate to be degraded.
Thus, on the one hand, quinizarin is a good dye to prepare sensitized TiO 2 materials effective for the degradation of methyl orange. On the other hand, the sensitization has a detrimental effect on the phenol degradation rate.
Finally, the photocatalysts based on TiO 2 nanotubes sensitized with the dye Zinc protoporphyrin are the most effective materials for the MO photodegradation.

Funding: This research was funded by Fondo Nacional de Financiamiento para la Ciencia, la Tecnología y la Innovación "Francisco José de Caldas-Colciencias", Project 279-2016, and Universidad Pedagógica y Tecnológica de Colombia.
Perioperative Fluid Management in Colorectal Surgery: Institutional Approach to Standardized Practice
The present review discusses restrictive perioperative fluid protocols within enhanced recovery after surgery (ERAS) pathways. Standardized definitions of a restrictive or liberal fluid regimen are lacking since they depend on conflicting evidence, institutional protocols, and personal preferences. Challenges related to restrictive fluid protocols concern proper patient selection within standardized ERAS protocols. On the other hand, invasive goal-directed fluid therapy (GDFT) is reserved for more challenging disease presentations and polymorbid and frail patients. While the infusion rate (mL/kg/h) appears less predictive for postoperative outcomes, the authors identified critical thresholds related to total intravenous fluids and weight gain. These thresholds are discussed within the available evidence. The authors aim to introduce their institutional approach to standardized practice.
Introduction
Over the last 20 years, fluid management has been increasingly recognized as a sensitive and modifiable parameter of perioperative care, directly affecting postoperative outcomes [1][2][3]. However, the optimal amount of perioperative fluid administration is controversial, and standardized definitions of a restrictive or liberal regimen are lacking due to conflicting evidence, institutional protocols, and personal preferences [4,5]. In line with these findings, a recent meta-analysis revealed widely varying intra- and postoperative fluid volumes [6].
On the one hand, peri- and postoperative fluids are essential to maintain adequate organ perfusion and tissue fluid homeostasis [7]. An overly restrictive approach may lead to hypotension and decreased organ perfusion, ultimately associated with acute kidney injury (AKI) [4]. Furthermore, perioperative organ injury due to both inflammation and ischemia (due to a demand-supply mismatch) represents a potential hazard, thus needing preventive measures and close perioperative monitoring [8]. Enhanced recovery after surgery (ERAS) pathways aim to decrease the physiological surgical stress response represented by a state of insulin resistance [9]. Several measures, including preoperative carbohydrate loading, perioperative feeding strategies, minimally invasive surgery, and early resumption of a normal diet, help to modulate the stress response, promote insulin sensitivity, and attenuate the breakdown of protein. Further consequences related to decreased organ perfusion due to an overly restrictive approach may be cardiovascular dysfunction (perioperative myocardial ischemia due to tachycardia, hypotension, hypoxia, or anemia), neurological complications (including confusional states or delirium), and intestinal dysfunction (including splanchnic or anastomotic hypoperfusion), which may be exacerbated by an excessive use of vasopressors [10,11].
On the other hand, fluid overload may result in harmful "third space" weight gain, associated with higher rates of pulmonary complications, postoperative ileus, altered mental status, and edema-related anastomotic complications, thus impeding postoperative recovery [12][13][14][15][16]. Furthermore, an excessive extracellular fluid volume may lead to abdominal compartment syndrome, which by itself may trigger adverse physiologic effects such as respiratory failure and renal failure [17]. In light of these findings, definitions must be set to guide clinical practice.
In the setting of established ERAS pathways, the authors' institutions attempted to identify "safety" fluid thresholds for colorectal resections [13,18,19]. The present review aims to define optimal fluid management, provide an overview of suggested thresholds, and discuss this institutional practice in the light of available evidence.
What Is Optimal Fluid Management?
Optimal fluid management implies a normovolemic state during and beyond the surgical procedure without fluid management-related complications due to overly restrictive or generous fluid administration, the least possible postoperative weight gain, and prompt functional recovery. Whether a specific patient can be managed by noninvasive monitoring and according to a "zero fluid" approach as suggested by the ERAS guidelines mainly depends on the disease presentation, physiological state at the time of surgery, comorbidities, and patient frailty [2]. A euvolemic, otherwise healthy patient without significant comorbidities warranting close surveillance going into elective, minimally invasive surgery is thus eligible for a standardized, restrictive fluid strategy, considering the physiologic principles of euvolemia [5]. On the other hand, patients at risk presenting with an impaired physical condition and distress due to a more acute or emergent disease presentation should benefit from invasive monitoring techniques and be treated within a more liberal strategy according to their physiologic reactions to surgery in a non-elective, acute setting [6]. This is even more important given the fact that these fragile patients are prone to postoperative morbidity and are not eligible for a simplified restrictive approach. On the contrary, management of these patients implies several critical perioperative assessments, including an evaluation of fluid responsiveness triggering, if appropriate, the administration of fluid boluses to increase stroke volume [20]. Of note, such a protocol does not necessarily need hemodynamic monitoring devices for reliable prediction but can also be carried out using echography after a passive leg raising test or by inferior vena cava evaluation, both in mechanically ventilated and spontaneously breathing patients [21][22][23]. In line with these basic principles, both authors' institutions have aimed to standardize fluid management over the last years and to implement preset thresholds related to IV fluids and weight gain as red flags for guidance in clinical practice.
Definition of a Restrictive versus Liberal Approach
To date, there is no standardized definition of restrictive fluid therapy. The Enhanced Recovery After Surgery (ERAS) guidelines recommend aiming for a "zero fluid" balance and euvolemia intraoperatively and during the first postoperative days in patients undergoing elective colorectal resections [24,25]. Preoperatively, carbohydrate loading and unrestricted access to clear fluids until 2 h before anesthesia induction help maintain fluid homeostasis and initiate surgery in a euvolemic, physiological state. Intraoperatively, a basal rate of crystalloid solution of <4 mL/kg/h is recommended [24,26]. This approach has been considered "restrictive"; however, its interpretation and application in clinical practice remain vague and subjective. Patients requiring goal-directed fluid therapy (GDFT) should receive boluses to maintain the cardiac stroke volume and, hence, central normovolemia [6]. However, recent guidance reserves a GDFT approach for high-risk patients (e.g., frailty and cardiopulmonary dysfunction) and high-risk procedures (e.g., emergent setting and disease-related distress) with large intravascular fluid loss [25,27,28]. Postoperatively, both early IV fluid lock and resumption of liquids and solids allow for adherence to the natural process of fluid homeostasis according to individual needs [29].
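The recommended basal rate translates into simple arithmetic. The following Python sketch (illustrative only, not a clinical dosing tool; the function name is ours) shows the total basal crystalloid volume implied by the <4 mL/kg/h ceiling:

```python
def basal_crystalloid_volume(weight_kg, duration_h, rate_ml_kg_h=4.0):
    """Total basal crystalloid volume (mL) = weight x duration x rate.

    rate_ml_kg_h defaults to the 4 mL/kg/h upper bound recommended
    intraoperatively by the ERAS guidelines [24,26].
    """
    if weight_kg <= 0 or duration_h < 0 or rate_ml_kg_h < 0:
        raise ValueError("weight must be positive; duration and rate non-negative")
    return weight_kg * duration_h * rate_ml_kg_h

# A 70 kg patient during a 3 h procedure at the 4 mL/kg/h ceiling:
print(basal_crystalloid_volume(70, 3))  # prints 840.0
```

Even at the guideline ceiling, the basal contribution stays well below the intraoperative volumes reported for liberal regimens in the meta-analysis cited below.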
In a recent meta-analysis including 18 randomized controlled trials, the median intraoperative fluid volume administered in the restrictive group was 1930 mL (interquartile range (IQR): 1480-2470 mL) compared to 3880 mL (IQR: 3000-4400 mL) in the liberal group [30]. On postoperative day 1, the median volume of intravenous fluids was 2340 mL (IQR: 1640-3530 mL) versus 4350 mL (IQR: 3100-5330 mL), respectively. However, important differences were observed among individual trials regarding total fluid volumes in the restrictive and liberal groups [30,31]. Consequently, a liberal approach in one trial could be equivalent to a restrictive approach in another [30,32]. While the concept of fluid restriction outside high-risk patients and procedures is widely accepted, "safety" thresholds may be valuable adjuncts and serve as red flags for clinical guidance during anesthesia and postoperative surveillance. Several randomized controlled trials compared both approaches (restrictive vs. liberal) and reported on fluid-related thresholds and postoperative complications, as summarized in Table 1.
Table 1 provides an overview of published RCTs comparing restrictive and liberal groups. In a former meta-analysis, Varadhan et al. suggested stratifying fluid regimens of the perioperative day into restrictive (<1750 mL/d), balanced (1750-2750 mL/d), and liberal (>2750 mL/d) [32]. The balanced fluid range was calculated to compensate for the daily physiological water loss for an average human in a homeostatic state, estimated between 25-35 mL/kg [46,47]. This volume is supposed to replace the perioperative body water loss to approach a zero fluid balance. Interestingly, this upper cut-off of 2.7 L was independently confirmed by an institutional series of the Mayo Clinic [13].
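The stratification of Varadhan et al. and the 25-35 mL/kg physiological loss estimate can be expressed as a short sketch (function names are ours, for illustration only):

```python
def classify_fluid_regimen(total_ml_per_day):
    """Stratify a perioperative-day fluid volume per Varadhan et al. [32]:
    restrictive (<1750 mL/d), balanced (1750-2750 mL/d), liberal (>2750 mL/d).
    """
    if total_ml_per_day < 1750:
        return "restrictive"
    if total_ml_per_day <= 2750:
        return "balanced"
    return "liberal"

def daily_water_loss_range_ml(weight_kg):
    """Estimated daily physiological water loss band (25-35 mL/kg) [46,47]."""
    return 25 * weight_kg, 35 * weight_kg

# For a 70 kg adult the physiological loss band is 1750-2450 mL/day,
# which sits inside the 'balanced' stratum:
print(daily_water_loss_range_ml(70), classify_fluid_regimen(2000))  # (1750, 2450) balanced
```

Note that for a 70 kg adult the 25-35 mL/kg band falls entirely within the "balanced" stratum, which is the point of the calculation described above.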
Impact of Fluid Overload on Postoperative Complications
A considerable weight gain of >6 kg after elective colorectal surgery has been observed in several studies, requiring close postoperative surveillance to prevent associated complications, especially in fragile patients prone to pulmonary complications [33]. However, fluid management in these fragile patients represents a particular challenge given that they are at increased risk of experiencing postoperative morbidity. This impedes uncritical assumptions of cause (fluid overload) and effect (complications) patterns. While some of the data suggest a modest correlation between total perioperative IV fluid administration and weight gain [48,49], a dose-response correlation with consequently increased complication rates was observed by others [33,50]. Despite the seemingly easy-to-perform weight measurements in the postoperative period, postoperative weight is reported in only 50% of randomized controlled studies [30] (Table 1).
Fluid overload induces prolonged gastric emptying [5], which, together with bowel edema and interstitial third space fluids, causes postoperative ileus (POI). The series of both our institutions confirmed an independent effect of fluid overload and weight gain on POI occurrence [18,51]. These findings were confirmed by others and independently validated [52][53][54]. Furthermore, similar associations were observed in the setting of ostomy procedures [55,56].
Pulmonary complications after surgery are a major concern, with an occurrence of up to 23% [12,57]. Fluid overload of the interstitial space triggers pulmonary edema, especially in patients with impaired cardiac function [57,58]. A significant decrease in mean blood saturation on the second night after surgery was observed in patients within the liberal fluid administration group; however, there was no increased morbidity in this study [37]. However, the results are conflicting, and cause-effect patterns are hard to establish in fragile patients with cardiopulmonary impairment. Several studies, including an institutional series, revealed that fluid overload and weight gain are associated with an increased risk of pulmonary complications [12,50,59].
Impact of Fluid Management on Renal Function
While perioperative hypotension may impact several organs, a major concern of overly restrictive perioperative fluid administration is the development of AKI. The evidence is conflicting. A meta-analysis revealed a higher AKI rate in the restrictive group [30]. Further data suggest that even a minor increase in creatinine levels could increase in-hospital mortality in non-cardiac surgical patients [60]. However, no cause-effect patterns could be established due to its retrospective design. Myles et al. published a large multicentric randomized controlled landmark trial comparing restrictive versus liberal fluid administration in major abdominal surgery [4]. In their study, the restrictive approach had no impact on disability-free survival but was associated with a statistically significant increase in AKI (8.6% vs. 5% in the restrictive and liberal groups, respectively). Notably, around 50% of patients in this trial were not treated according to the ERAS principles, impeding uncritical extrapolation of the results to the setting of our institutions, which offer care within longstanding, established, and standardized ERAS pathways [61,62]. A sizeable institutional series of elective patients revealed a low AKI rate of 2.5% according to the Risk, Injury, Failure, Loss of kidney function, and End-stage kidney disease (RIFLE) criteria [63]. In another series of our group, an intraoperative fluid range defined as "balanced" (300 mL-2700 mL) was associated with the lowest rates of POI and prolonged length of stay, but not AKI [13]. Restrictive fluid management during elective colorectal resections appears safe if carried out within standardized pathways, and it is supported by the respective societies [24,25,64].
Fluid Management in the Perioperative Period: Which Indicators?
Intraoperative oliguria occurring in isolation should not trigger fluid boluses since its predictive value for postoperative AKI appears low [65]. An institutional series of the Mayo Clinic revealed that a certain degree of postoperative hypotension in up to 10% of patients may persist for less than 20 h without negatively impacting AKI occurrence, which affected <3% [66]. There is a broad consensus that a permissive attitude to physiologic oliguria due to renal vasoconstriction can be adopted in the elective ERAS setting, provided no established cause exists [25]. Based on the available information, intraoperative fluid management should be protocolized to determine an underlying physiologic problem requiring reversal [67]. Standard monitoring integrating clinical data is thus likely sufficient in low-risk procedures, combined with maintenance fluids at a low rate of <4 mL/kg/h in the intraoperative and early postoperative period in the post-anesthesia care unit. Outside this low-risk setting and depending on the surgical risk, GDFT, including advanced hemodynamic monitoring devices, should be used as a valuable adjunct in higher-risk patients or procedures, triggering fluid administration if a decreased cardiac output or its surrogates are suspected [68,69].
Summary of Institutional Thresholds and Practice Guidance
Based on the above-discussed evidence and considering a 7-year experience in ERAS care in both authors' institutions at that time, our groups aimed not only to focus on established, evidence-based perioperative ERAS care but also to standardize fluid management [19]. The need to improve perioperative fluid management standards in our institutions was motivated by the rather low compliance with guidelines, despite growing ERAS experience [19]. Importantly, the aim was not to set inflexible, dogmatic thresholds but to help with guidance in clinical practice. Restrictive fluid management through a zero-balance practice in elective surgery represents one puzzle piece in a comprehensive care pathway aiming to maintain a physiologic state throughout the perioperative period, significantly impacting postoperative recovery [5].
The thresholds are displayed with their respective impact on specific outcomes or clinical consequences. Three papers from the Lausanne group tried to identify thresholds through receiver operating characteristic (ROC) curves in different surgical settings: minimally invasive surgery [50], open surgery [73], and, lastly, surgery for urgent indications [72]. Interestingly, the thresholds did not differ significantly across the different settings. The Mayo group analyzed an independent large dataset of elective colorectal surgeries with a focus on POI, prolonged LOS, and AKI, which were plotted against the rate of intraoperative Ringer lactate (RL) infusion (mL/kg/h) and total intraoperative volume [13]. Total intraoperative RL ≥2.7 L was independently associated with POI and prolonged LOS, but not AKI. Of note, the infusion rate (mL/kg/h) was not retained as a superior predictive tool. Further work focused on patients undergoing major surgery and needing postoperative surveillance in an intermediate care unit [48]. In this particularly vulnerable subgroup of patients, the fluid balance and weight course showed only a modest correlation. Both institutions further focused on POI in their analyses and found comparable results, with a strong correlation of fluid overload and POI in patients undergoing major surgery [18] and in patients undergoing loop ileostomy closure [56]. In the largest dataset of the Mayo group, with over 7000 patients, early AKI was very uncommon within the institutional ERP (2.5%), and long-term sequelae were exceptionally low [63]. Interestingly, AKI patients received higher amounts of POD 0 fluids and had increased postoperative weight gain at POD 2.
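ROC-derived cut-offs of the kind identified by the Lausanne group are typically chosen by maximizing Youden's J (sensitivity + specificity - 1). The papers cited above do not publish their exact procedure, so the following is a generic sketch with hypothetical toy data, not a reproduction of their analysis:

```python
def youden_threshold(values, labels):
    """Pick the cut-off maximizing Youden's J = sensitivity + specificity - 1.

    values: e.g. total fluid volumes per patient (mL);
    labels: 1 if the complication occurred, else 0.
    Values >= the candidate threshold are classified as 'positive'.
    """
    pos = sum(labels)
    neg = len(labels) - pos
    best_t, best_j = None, -1.0
    for t in sorted(set(values)):
        tp = sum(1 for v, y in zip(values, labels) if v >= t and y == 1)
        tn = sum(1 for v, y in zip(values, labels) if v < t and y == 0)
        j = tp / pos + tn / neg - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Hypothetical toy cohort: complications cluster above 3 L of fluid.
vols = [1000, 2000, 3000, 4000, 5000]
complication = [0, 0, 1, 1, 1]
print(youden_threshold(vols, complication))  # (3000, 1.0)
```

On this toy cohort the rule recovers 3000 mL, which happens to coincide with the institutional red flag discussed later in the review.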
A further study of the Lausanne group revealed a protective effect of high compliance with the ERAS protocol in preventing postoperative pulmonary complications [12]. A threshold of 4 kg at POD 2 appeared to be critical in this setting. Finally, both author groups have shown increasing interest in short-stay processes in recent years, and excess intraoperative fluids of >3 L turned out to impede early discharge and thus an outpatient strategy [70,71].
Taking the above-summarized evidence together, a threshold of 3000 mL intraoperatively presently serves as a red flag in daily clinical practice in both authors' institutions. In addition to the mere focus on IV fluids, weight gain at postoperative day 2 turned out to be a valid surrogate for fluid overload [18].
Besides IV fluid management, several further ERAS care items help to maintain tissue homeostasis and a euvolemic state [24]. Preoperative carbohydrate loading helps to attenuate the catabolic response through a reduction of insulin resistance in response to surgery [74]. Clear fluids can be safely ingested until 2 h before surgery, whereas 6 h of fasting for solid food is sufficient [75]. While there is growing evidence in favor of combined mechanical and oral antibiotic bowel preparation, mechanical bowel preparation alone may lead to preoperative dehydration and electrolyte imbalances and should thus be avoided [76]. Postoperatively, early oral nutrition is advocated; it has proven its benefits in several meta-analyses and has been endorsed by different nutritional societies [64]. Finally, early mobilization of at least 6 h per day is of utmost importance and helps to prevent muscle loss and to promote functional recovery due to a direct prokinetic effect on the intestines [77]. Figure 1 summarizes the pre-, intra-, and postoperative measures within the institutions' standardized ERAS protocol.
Implications in Daily Clinical Practice
The fast track concept that eventually led to standardized ERAS pathways was introduced 25 years ago by Henrik Kehlet and helped to simplify patient management by targeting the quality and speed of postoperative recovery [78]. Standardization of care is a way to facilitate patient management and improve a multidisciplinary team approach [79]. This holds true for surgical technique, but also for intraoperative management and patient care in the ward. Postoperative care protocols with predefined care maps simplify the workflow, especially for frequently performed procedures. Perioperative fluid management represents a key element of ERAS care.
ERAS guidelines suggest aiming for a zero fluid balance for elective colorectal resections, while GDFT should be reserved for high-risk patients and procedures [24,25]. The use of vasopressors is advocated when fluid boluses fail to improve the stroke volume, in order to prevent fluid overload [80]. The thresholds described in the present study and used in the authors' institutions cannot replace careful individual risk stratification in every patient before surgery. However, in the authors' experience, they help with raising awareness among both surgeons and anesthesiologists to discuss fluid management during and after the procedure. Furthermore, a weight gain threshold of 2.5 kg at POD 2 serves as a useful point of reference in the surgical ward. Postoperative body weight is easy to assess and helps to timely launch counterregulatory measures [48,81]. In patients who exceed the threshold, subsequent fluid restriction, diuretics, and the promotion of mobilization can be initiated [50].
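The two institutional red flags discussed in this review (roughly 3 L of intraoperative fluids and 2.5 kg of weight gain at POD 2) can be sketched as a simple decision rule. The thresholds follow the text; the function name and flag wording are ours, and the rule is illustrative rather than a substitute for individual clinical judgment:

```python
def fluid_red_flags(intraop_fluids_ml, weight_gain_pod2_kg):
    """Return the institutional warning flags raised by a patient's values.

    Thresholds follow the review's institutional practice: >3000 mL of
    intraoperative IV fluids [70,71] and >2.5 kg weight gain at POD 2 [48,81].
    """
    flags = []
    if intraop_fluids_ml > 3000:
        flags.append("intraoperative fluids >3 L: review fluid strategy")
    if weight_gain_pod2_kg > 2.5:
        flags.append("POD 2 weight gain >2.5 kg: consider fluid restriction, "
                     "diuretics, and promotion of mobilization")
    return flags

print(fluid_red_flags(3500, 3.1))  # both flags raised
```

As the text emphasizes, a raised flag triggers a discussion between surgeons and anesthesiologists, not an automatic intervention.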
Conclusions
In conclusion, our practice of restrictive fluid management is based on institutional thresholds that help guide clinical practice, aiming to prevent deleterious fluid overload-related adverse outcomes.
Figure 1 .
Figure 1. Schematic representation of fluid management-related recommendations within the authors' institutional ERAS pathways.
Table 1 .
Randomized controlled trials comparing restrictive and liberal fluid regimens.
IV-intravenous, CS-colon surgery, CRS-colorectal surgery, R-restrictive, L-liberal, GDT-goal-directed therapy, NA-not available, LOS-length of stay, POD-postoperative day, IO-intraoperative, AKI-acute kidney injury.Total fluids relate to the total LOS unless specified otherwise.Arrow down: decreased, arrow up: increased, regular arrow: same.
Table 2 .
Fluid thresholds and related outcomes within the authors' institutions. | 2024-02-02T16:21:58.740Z | 2024-01-30T00:00:00.000 | {
"year": 2024,
"sha1": "658ae9b8d8ba3d4616a62400d26b54e842ee3444",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0383/13/3/801/pdf?version=1706616563",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "76a7f97cd1b33b36d1a7e5f0d9cd3082ee4de85a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
14629025 | pes2o/s2orc | v3-fos-license | Environmental tobacco smoke and the risk of eczema symptoms among school children in South Africa: a cross-sectional study
Objective The aim of this study was to investigate the association between eczema ever (EE) and current eczema symptoms (ES) in relation to exposure to environmental tobacco smoke (ETS). Design A cross-sectional study using the International Study of Asthma and Allergies in Childhood questionnaire. Setting 16 schools were randomly selected from two neighbourhoods situated in Ekurhuleni Metropolitan Municipality, Gauteng Province, South Africa. Participants From a total population of 3764 school children aged 12–14 years, 3468 completed the questionnaire (92% response rate). A total of 3424 questionnaires were included in the final data analysis. Primary outcome The prevalence of EE and current ES was the primary outcome in this study. Results Data were analysed using Multilevel Logistic Regression Analysis (MLRA). The likelihood of EE was increased by exposure to ETS at home (OR 1.30 95% CI 1.01 to 1.67) and at school (OR 1.26 95% CI 1.00 to 1.60). The likelihood of EE was lower for males (OR 0.66 95% CI 0.51 to 0.84). The likelihood of ES was increased by ETS at home (OR 1.93 95% CI 1.43 to 2.59) and school (1.44 95% CI 1.09 to 1.90). The likelihood of ES was again lower for males (OR 0.56 95% CI 0.42 to 0.76). Smoking by mother/female guardian increased the likelihood of EE and ES, however, this was not significant in the multivariate analysis. Conclusions Symptoms of eczema were positively associated with exposure to ETS at home and school. The results support the hypothesis that ETS is an important factor in understanding the occurrence of eczema.
BACKGROUND
Eczema (or atopic dermatitis, AD) is a chronic, and the most frequent, inflammatory skin disease; it usually develops in childhood and can persist through to adulthood. 1 It is characterised by dry skin, itchy rash and excoriation, 2 3 and the condition affects 15-30% of children and 2-10% of adults. 3 The term eczema describes skin diseases with common clinical characteristics involving a genetically determined skin-barrier defect. Decreased barrier function leads to increased water loss through the outermost layer of the skin, resulting in a decrease in water content of this particular layer of skin, increased permeability to hydrophilic substances, decreased ceramides in the skin and decreased barrier to infectious agents. 4 Although not life-threatening, the condition may result in secondary infection and damage to the skin. The quality of life for those having the condition, particularly children and their caregivers, may be affected, for example, by lack of sleep and lack of concentration at school as a result of itching at night. 5 6 Families of affected children have an extra financial burden to care for the affected child. 7 8 The prevalence of eczema among children is reported to vary in different countries, with some countries experiencing an increase, and others with high prevalence undergoing a decline. [9][10][11] In Cape Town Province, South Africa, Zar et al 12 reported an increase in the prevalence of eczema from 11.8% in 1995 to 19.4% in 2002, from two International Study of Asthma and Allergies in Childhood (ISAAC) studies that were questionnaire based and conducted 7 years apart. The reason for the increase in the prevalence of eczema is not clearly understood. The pathogenesis of eczema is complex, involving an interaction between several factors, which may include, among others, genetics, socioeconomic status, lifestyle, diet, meteorological and living conditions at home, and environmental air pollutants, such as type of fuel used for cooking and heating in homes, traffic-related air pollution and exposure to environmental tobacco smoke (ETS). [13][14][15][16] Tobacco smoke is one of the most common indoor air pollutants. The literature, as early as in the 1970s, periodically reviewed ETS, or passive smoking, and health. 17 Children usually get exposed to tobacco smoke at home due to parents and other family members smoking, but also during transportation and in areas such as schools and restaurants. 18 Although ETS has been considered to be a risk factor for eczema, the relationship between the two has not been sufficiently investigated. Studies have reported that smoking by the mother, or her exposure to smoke during pregnancy, may increase the risk of eczema during childhood. 19 Many studies focusing on eczema have mainly been conducted in developed countries; little is known about the strength of such associations in developing countries such as South Africa. The aim of the study was to investigate the association of eczema ever (EE) and current eczema symptoms (ES) with ETS among children attending schools in urban areas of Tembisa and Kempton Park.
Strengths and limitations of this study
▪ The use of a validated International Study of Asthma and Allergies in Childhood (ISAAC) questionnaire, which has been used in many studies globally with consistent results.
▪ Large sample size of over 3000 children.
▪ The participation rate was high (92%).
▪ The results of the study might be higher than the actual prevalence since they are based on self-reported answers from the questionnaire and no objective measures were taken at the time of data collection.
Study area
The study was conducted in Tembisa and Kempton Park areas, situated in the northern region of the Ekurhuleni Metropolitan Municipality (EMM), located in the eastern region of Gauteng Province, South Africa. Tembisa is the second largest township in Gauteng Province, with both formal and informal housing; it is home to mainly African ethnic groups. Kempton Park is a suburban area and the residents are predominantly Caucasian; it has only been in recent years, after the 1994 democratic elections, that some, mostly middle income, African ethnic families have moved into the area.
Study design, population and sample selection
A cross-sectional epidemiological study was conducted between February and June 2012, following the ISAAC Phase I protocol. 20 The ISAAC was designed as a multicentre study to investigate the epidemiology of asthma, rhinitis and AD among children, using standardised definitions, thus allowing comparisons worldwide. 20 A list of all schools ( primary and secondary) in EMM was provided by the Gauteng Department of Education. All primary schools were excluded and 16 high schools were randomly selected from the list. Each school was contacted and requested to participate in the study. Following the approval of the study by the principal and governing body in each school, all eligible children between the ages of 13 and 14 years and in grade 8 were requested to participate. An appointment was scheduled with the school to deliver the consent forms for the children 2 weeks prior to the study and the children were requested to return them within 3 days. The study population consisted of 3764 children, based on the numbers given by each school prior to data collection. Data were collected using the English versions of ISAAC written and video questionnaires. The questionnaires were completed by the children in the classroom under the supervision of the data collectors, who were specifically trained and briefed to avoid explanations that could interfere in the participant's answers.
Health outcomes
In this study, we estimated health outcomes on the basis of positive answers from the written ISAAC questionnaire. Answers to written questions were self-reported by children.
1. Have you ever had an itchy rash that was coming and going for the past 6 months? (Yes/No)
2. Have you had this itchy rash at any time in the past 12 months? (Yes/No)
3. Has this itchy rash at any time affected any of the following places: the folds of the elbow, behind the knees, in front of the ankles, under the buttocks, or around the neck, ears, or eyes? (Yes/No)
4. Current ES were defined as those children who, according to the written questionnaire, responded positively to questions 1, 2 and 3.
5. EE: have you ever had eczema? (Yes/No)
Air pollution sources and potential confounding variables
Air pollution sources included: ETS exposure at home in the past 30 days (yes/no), ETS exposure at school in the past 30 days (yes/no), tobacco smoking by participant (yes/no), mother/father smoking tobacco (yes/no), any other person smoking at home other than participant (yes/no). The following potential confounding variables were included in the study, similar to other ISAAC studies, 21 age, sex (male/female) and type of house (brick, mud, corrugated iron, combination); the children were asked to select the most frequently used energy source at home: for cooking (electricity, gas, paraffin, open fires) and for heating (electricity, gas, paraffin, open fires). The children were asked about the mode of transport to school (walking, taxi/bus, car, combination of car/taxi or train), and the frequency of trucks passing near residences on weekdays (never, seldom, frequently through the day, almost all day). Other variables included in the questionnaire and reported in the descriptive analysis included: period lived in the residential area (<6 months, 6-12 months, 1-2 years, ≥3 years), being born in Tembisa/Kempton Park (yes/no) and availability of running water (yes/no).
Data management and statistical analysis
The data were entered into a database set up in EpiInfo V.3.5.3 and Stata V.12 was used for the data analysis. Prevalence rates for each health outcome and the proportions of the risk factors under investigation were calculated by dividing the number of participants who responded affirmatively to a particular question by the number of questionnaires completed. Observations marked as 'do not know', 'not stated' or 'other responses' were set as missing. This resulted in each question having a slightly different sample size. Crude and adjusted ORs and 95% CIs were calculated with Multilevel Logistic Regression Analysis (MLRA) with random effects to estimate the likelihood of having EE and current ES given the ETS exposure variables.
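The crude ORs and Wald-type 95% CIs reported throughout the Results can be computed from a 2×2 exposure-outcome table as follows; this is a generic sketch, not the study's Stata code, and the cell counts in the usage note are invented:

```python
import math

# Crude odds ratio with a Wald (Woolf) 95% CI from a 2x2 table:
#   a: exposed cases,   b: exposed non-cases,
#   c: unexposed cases, d: unexposed non-cases.

def odds_ratio_ci(a, b, c, d, z=1.96):
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi
```

For example, `odds_ratio_ci(10, 90, 5, 95)` gives an OR of about 2.1 with a CI spanning 1, i.e. a non-significant crude association.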
The multilevel data included 16 schools within two residential areas (Kempton Park and Tembisa) at level 1. ETS and confounding variables were added in a stepwise manner, starting with the most significant from the univariate analysis. Each time a new potential confounder was added to the model, if the effect estimate between the exposure of interest and respiratory outcome already in the models changed by more than 5%, the additional variable was retained in the final multiple MLRA, otherwise, the variable was removed and a different one was added. 22 The most parsimonious multiple MLRA models were reported, that is, those with variables having a p value <0.05. 22
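The >5% change-in-estimate rule used above for confounder retention can be illustrated with a simple stratified estimator; here a Mantel-Haenszel OR stands in for the study's multilevel model purely to keep the sketch self-contained, and the table values in the test are invented:

```python
# Illustrative sketch of the change-in-estimate rule described above:
# adjust the exposure OR for a candidate confounder (via Mantel-Haenszel
# stratification rather than MLRA) and retain the variable only if the
# OR moves by more than 5% relative to the unadjusted estimate.

def mh_or(strata):
    """Mantel-Haenszel OR over strata of (a, b, c, d) 2x2 tables."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

def keep_confounder(crude_or, adjusted_or, threshold=0.05):
    """Retain the variable if the exposure OR changes by more than 5%."""
    return abs(adjusted_or - crude_or) / crude_or > threshold
```

In the paper the adjusted estimate comes from refitting the multilevel model with the candidate variable added; the retention criterion is the same.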
Ethical considerations
The Gauteng Department of Education, Ekurhuleni North District, school principals and governing bodies were approached, and gave approval and cooperation for the study. Parents of participants were sent a letter explaining the details and nature of the study, and were requested to give consent for their children to participate. All information was kept confidential.
RESULTS
The study population consisted of 3764 children from 16 schools; 3468 completed the modified ISAAC questionnaire at the schools (92% response rate). The study focused only on those children who were present at the time of fieldwork; therefore, 296 learners did not participate. The teachers gave assurance that most of the children were present. School attendance was high during the study, therefore any bias introduced by non-response was assumed to be relatively low. Forty-four questionnaires were excluded during data capturing due to incomplete information. A total of 3424 questionnaires were finally included in the data analysis. Table 1 summarises the frequencies and percentages for general characteristics and living conditions. Fifty-three per cent of the children were born within the study areas and more than three-quarters had been living in the study areas for longer than 3 years (76%). Girls accounted for 52% of the children.
The majority of the children lived in formal housing structures (86%) and fewer than 20% lived in houses without running water. Ten per cent of the children had a mother or female guardian who was a smoker, 27% had a father or male guardian who was a smoker, and 44% lived with a smoker other than their parents. Forty-two per cent were exposed to tobacco smoke at home, while 34% were exposed at school. A small percentage of children reported gas (5%) or paraffin (5%) as the fuel most frequently used for cooking at home, while the majority most frequently used electricity (88%).
Twelve per cent most frequently used gas for heating, 18% used paraffin, 7% used open fires (wood and coal), while 52% used electricity. Just over half of the children walked to school (51%), while the remainder used other modes of transport (cars, taxis, buses and trains). Truck traffic passing near residences almost all day was reported by 35% of the children.

The prevalence of health outcomes is summarised in table 2. Twenty-one per cent of the children reported having a rash that was coming and going for at least 6 months, the prevalence of having a rash in the past 12 months was 17% and the prevalence of ever having a rash in the flexures was 10.2%. EE was reported by 14% and current ES by 9.6%. ES were more prevalent among girls (table 2). Table 3 summarises the results of the MLRA for EE, with residential areas set at level 1. After adjusting for potential confounding variables, it was found that the likelihood of EE increased with exposure to ETS at home (OR 1.30, 95% CI 1.01 to 1.67) and at school (OR 1.26, 95% CI 1.00 to 1.60). Smoking by the mother/female guardian increased the likelihood of EE, however, the association was not significant. Among the confounding variables, significant associations were observed for sex, with a lower likelihood of EE for males (OR 0.66, 95% CI 0.51 to 0.84), for gas frequently used for heating at home (OR 1.76, 95% CI 1.28 to 2.43), and for the frequency of trucks passing near residences on weekdays: frequently throughout the day (OR 1.60, 95% CI 1.03 to 2.51) and almost all day (OR 1.70, 95% CI 1.13 to 2.54). The use of the motor car as mode of transport to school was significant in the univariate analysis, however, it was not significant in the multivariate analysis. No association was observed for age or type of house.
DISCUSSION
The aim of the study was to investigate the association of EE and current ES with ETS among children attending schools in urban areas of Tembisa and Kempton Park, EMM. The prevalence of current ES in this study was 9.6%. In a cross-sectional study of centres participating in the ISAAC, the prevalence of eczema ranged from <1% in Albania to more than 17% in Nigeria for the age range of 13-14 years. High prevalences of ES were reported in Australia and Northern Europe, and lower prevalences were reported in Eastern and Central Europe, and Asia. Similar patterns were seen for symptoms of severe eczema. 9 Some centres in Africa were reported to be among those with the highest prevalence of eczema. 23 In Brazil, Porto Neto reported a prevalence of 13.6% for eczema in a study conducted among 2948 school children aged 13-14 years following the ISAAC methodology. 24 The prevalence of current ES in this study is lower than that reported by the study conducted in Polokwane (17%) and the two studies conducted in Cape Town, 11.8% in 1995 and 19.4% in 2002. 12 21 The slightly lower prevalence in this study might be attributable to the fact that the study area is situated in the Highveld region, at a higher altitude than Polokwane, which is in the Lowveld, while Cape Town is located on the coast.
The study found that EE and ES were positively associated with ETS exposure at home and at school. For current ES, the odds appear to be higher for exposure at school than at home (OR 1.93 at school vs 1.42 at home), even though 42% of the children were exposed to tobacco smoke at home, while 34% were exposed at school. For ETS at school, the likelihood of ES was much higher in the adjusted model than in the crude model (OR 1.93 versus 1.33).
The findings were in line with other studies that identified ETS as one of the most common indoor air pollutants; the home being the most important site of such exposure. The association between eczema and ETS exposure has been reported previously. 25 The likelihood of current ES was also associated with smoking by mother/female guardian. Yi et al 19 found AD to be highly correlated with ETS among children whose mothers had smoked during pregnancy and/or in the first year after birth, in a study conducted in Korea among 7030 children between 6 and 13 years of age.
In a cross-sectional study conducted among 3153 Lebanese adolescents 13-14 years of age, females and passive smokers were at 1.5 times the risk of having eczema compared with their counterparts. 26 An ecological analysis of ISAAC Phase I data from 463 801 children aged 13-14 years in 155 countries and from 257 800 children aged 6-7 years in 91 centres in 38 countries found an association between several factors, including smoking by women, and the symptom prevalence of three conditions (asthma, rhinoconjunctivitis and eczema). 27 In South Africa, in the study conducted in Polokwane, Wichmann et al 21 reported that the likelihood of having ES was significantly increased by 43% in rural areas and by 54% when exposed to tobacco smoke at home. The current study was conducted in Gauteng Province, 10 years after the Polokwane study; seemingly, exposure to tobacco smoke is still a problem in different communities in South Africa, with the home still the main environment where children are exposed to tobacco. This study was conducted in an urban setting where the majority of the children lived in formal housing, which may be one of the reasons for the lower prevalence than that in the Polokwane study. Time spent in the school environment is second only to the time children spend at home, and school seems to be another setting where children are exposed to tobacco smoke.
Children start experimenting with cigarettes in their early teens, and rates of tobacco use among school children aged 13-15 years are high (WHO). 28 The Global Tobacco Surveillance System Collaborative Group analysed a sample of 747 603 adolescents from different countries and continents, and reported that the frequency of current tobacco use varies from 11.4% in the Western Pacific Region to 22.2% in the Americas, with a global average of 17.3%. In general, girls were reported to smoke less than boys in both the Americas and Europe, while in the remaining regions the frequency is almost the same between genders. 29 In a study conducted in Israel to investigate the association of smoking and exposure to ETS with the prevalence of atopic eczema in a national sample of 10 298 children aged 13-14 years, Graif et al 1 reported a dose-response association between smoking and atopic eczema. Furthermore, tobacco smoking has been proposed to promote hand eczema; a large population-based study in Sweden reported an association between heavy smoking and the 1 year prevalence of hand eczema, and a dose-response relation was also indicated. 30 Conversely, studies such as those by Fedortsiv et al, 31 Ciaccio et al, 32 Schafer et al, 33 and Strachan and Cook 34 did not observe any association between atopic eczema and tobacco smoke. The debate as to whether exposure to tobacco smoke is associated with atopic eczema warrants further investigation, as the aetiology of the disease may differ from one country to another due to other risk factors. While research on the matter continues, policies that are currently available to protect the public and children against exposure to the harmful effects of tobacco smoke should be implemented and enforced. Health education programmes on the harmful effects of tobacco smoke should be strengthened, with more resources allocated to such programmes; these should focus on school children.
Limitations of the study
Certain limitations should be taken into account when interpreting the results. First, the study had a cross-sectional epidemiological design, as do all ISAAC studies. Cross-sectional studies are weak at establishing causation because health status and determinants are measured simultaneously, which makes the temporal sequence of events difficult to interpret. However, our findings are supported by other studies, as discussed previously. Second, the reported prevalences might be higher than the actual ones, since they are based on self-reported answers from the questionnaire and no allergy testing was performed at the time of data collection.
Third, no quantitative exposure assessment was conducted as part of the study; the number of cigarettes smoked was not included. Fourth, only age, sex, type of house, mode of transport to school, fuel most frequently used for cooking and heating at home, and the frequency of trucks passing near residences were included as confounding variables, most of which were highly significant in the final multilevel model. This supports the hypothesis that the development of eczema is associated with many other factors; studies on ETS should therefore explore the co-existence of such factors in the development and exacerbation of eczema. Despite these limitations, this study will contribute to the existing literature because very few data are available on the prevalence of eczema specifically in Gauteng Province, South Africa. The strength of our study is mainly the use of a validated ISAAC questionnaire, which has been used in many studies globally, with consistent results. Furthermore, cross-sectional studies are important indicators of health problems occurring in communities and serve as a baseline for further analytical and experimental investigation. The study had a large sample size and the participation rate was very high, which reduced the risk of selection bias.
CONCLUSION
The study found that eczema was associated with ETS at home and in school. In the literature, most studies investigating eczema in relation to tobacco smoke were cohort studies following children from birth up to the ages of 6-7 years; there are limited studies focusing on the age group of 13-14 years. Studies have also suggested that ETS is associated with increased health symptoms during infancy and that the effect diminishes with the increasing age of the child; however, the results of this study suggest that the condition may persist through the teenage years into adulthood. Most epidemiological studies have been conducted in developed countries. The aetiology of the disease may differ from that of children in other parts of the country or children in developed countries. The results of this study will add to the limited number of studies in developing countries, such as South Africa. The baseline data will serve as a benchmark for future epidemiological studies to build more evidence on the effect of ETS on eczema, in order to inform and influence policy decisions and to protect the public against the harmful effects resulting from exposure to tobacco smoke.
Broadband Linear Polarization of Jupiter Trojans
Trojan asteroids orbit in the Lagrange points of the system Sun-planet-asteroid. Their dynamical stability makes their physical properties important proxies for the early evolution of our solar system. To study their origin, we want to characterize the surfaces of Jupiter Trojan asteroids and check possible similarities with objects of the main belt and of the Kuiper Belt. We have obtained high-accuracy broad-band linear polarization measurements of six Jupiter Trojans of the L4 population and tried to estimate the main features of their polarimetric behaviour. We have compared the polarimetric properties of our targets among themselves, and with those of other atmosphere-less bodies of our solar system. Our sample shows approximately homogeneous polarimetric behaviour, although some distinct features are found among the targets. In general, the polarimetric properties of Trojan asteroids are similar to those of D- and P-type main-belt asteroids. No sign of coma activity is detected in any of the observed objects. An extended polarimetric survey may help to further investigate the origin and the surface evolution of Jupiter Trojans.
Introduction
Trojan asteroids are confined by solar and planetary gravity to orbit the Sun 60° ahead of a planet's position along its orbit (the L4 Lagrange point of the Sun-planet system) or 60° behind it (the L5 Lagrange point) (Murray & Dermott 1999). Stable Trojans are supported by Mars, by Jupiter, by Neptune, and by two Saturnian moons. Because of their dynamical stability, they allow us to look at the earliest stages of the formation of our solar system. Saturn and Uranus do not have a stable Trojan population because their orbits are perturbed on a time scale that is short compared to the age of the solar system. Terrestrial planets may support a population of Trojan asteroids (e.g. the Earth Trojan 2010 TK7 discovered by Connors et al. 2011), but so far no stable population has been identified.
More than 6000 Trojans of Jupiter are known so far (Emery et al. 2015). In the framework of the Nice model of the formation of our solar system, Morbidelli et al. (2005) predicted the capture of Jupiter Trojans from the proto Kuiper belt. While these predictions were invalidated by further simulations (e.g. Nesvorný & Morbidelli 2012), Nesvorný et al. (2013) investigated the possibility of the capture of Jupiter Trojans from the Kuiper-belt region within the framework of the so-called jumping-Jupiter scenario, and succeeded at reproducing the observed distribution of the orbital elements of Jupiter Trojan asteroids. The model by Nesvorný et al. (2013) supports the scenario in which the majority of Trojans are captured from the trans-Neptunian disk, while a small fraction of them may come from the outer asteroid belt.
Unfortunately, direct comparisons of the optical properties of Jupiter Trojans with those of Kuiper-belt objects (or trans-Neptunian objects, TNOs) show significant differences. TNOs have a wide range of albedos that extends, in particular, to higher albedos, while all known Jupiter Trojans have a low albedo and a fairly featureless spectrum, all belonging to 'primitive' taxonomies, principally the C-, D-, and P-types of the Tholen (1984) classification system (Grav et al. 2012). These types are the most common ones in the outer part of the main belt. Emery et al. (2011) investigated the infrared properties of Jupiter Trojans and report a bimodal distribution of their spectral slopes. This bimodality is also seen in the albedos in the infrared (Grav et al. 2012), although it is not apparent in the optical albedo distribution. Emery et al. (2011) interpret the slope bimodality as observational evidence of at least two distinct populations of objects within the Trojan clouds, where the 'less red' group originated near Jupiter (i.e. either at Jupiter's radial distance from the Sun or in the Main Asteroid Belt), while the 'redder' population originated significantly beyond Jupiter's orbit (where similar 'red' objects are prevalent). Therefore, at least the near-IR spectroscopic observations of Emery et al. (2011) are broadly consistent with the widely accepted scenario suggested by Morbidelli et al. (2005) and Nesvorný et al. (2013), while the inconsistency in the optical albedo and spectral properties could be naturally explained by the fact that TNOs that migrated to Jupiter's orbit have been exposed to a different irradiation and thermal environment (Emery et al. 2015).
Polarimetric measurements are sensitive to the microstructure and composition of a scattering surface. In the case of the atmosphere-less bodies of the solar system, the way that linear polarization changes as a function of the phase angle (i.e., the angle between the Sun, the target, and the observer) may reveal information about the properties of the topmost surface layers, such as the complex refractive index, particle size, packing density, and microscopic optical heterogeneity. Objects that display different polarimetric behaviours must have different surface structures, so that they probably have different evolution histories. Polarimetric techniques have been applied to hundreds of asteroids (e.g., Belskaya et al. 2015), as well as to a few Centaurs (Bagnulo et al. 2006; Belskaya et al. 2010) and TNOs (Boehnhardt et al. 2004; Bagnulo et al. 2006, 2008; Belskaya et al. 2010). These works have revealed that certain objects exhibit very distinct polarimetric features. For instance, at very small phase angles, some TNOs and Centaurs exhibit a very steep polarimetric curve that is not observed in main-belt asteroids. This finding is evidence of substantial differences in the surface micro-structure of these bodies compared to other bodies in the inner part of the solar system. It is therefore very natural to explore whether optical polarimetry may help in finding other similarities or differences among Jupiter Trojans, and between Jupiter Trojans and other classes of solar system objects.
In this work, we carry out a pilot study intending to explore whether polarimetry can bring additional constraints that help to understand the origin and the composition of Jupiter Trojans better. We present polarimetric observations of six objects belonging to the L4 Jupiter Trojan population: (588) Achilles, (1583) Antilochus, (3548) Eurybates, (4543) Phoinix, (6545) 1986 TR6, and (21601) 1998 XO89. All our targets have sizes in the diameter range of 50-160 km, and represent both spectral groups defined by Emery et al. (2011).
From ground-based facilities, Jupiter Trojans may be observed up to a maximum phase angle of ∼ 12°. Our observations cover the range 7°-12° and are characterized by an ultra-high signal-to-noise ratio (S/N) of ∼ 5000, so their accuracy is not limited by photon noise, but by instrumental polarization and other systematic effects. Our observations are aimed at directly addressing the question of how diverse the polarimetric properties of the L4 population of Trojans are and how they compare with the polarimetric properties of other objects of the solar system. With our data we can estimate the minimum of their polarization curves and make a comparison with the behaviour of low-albedo main-belt asteroids. Finally, by combining the polarimetric images, we can also try to detect coma activity (if any) with great precision.
Observations and results
Our observations were obtained with the FORS2 instrument (Appenzeller & Rupprecht 1992;Appenzeller et al. 1998) of the ESO VLT using the well-established beam-swapping technique (e.g. Bagnulo et al. 2009), setting the retarder waveplate at 0 • , 22.5 • , . . . ,157.5 • . For each observing series, the exposure time accumulated over all exposures varied from a few minutes for (588) Achilles up to 40 minutes for (6545) 1986 TR6.
Instrument setting
Jupiter Trojans are relatively bright targets for the VLT, therefore the S/N may be limited by the number of photons that can be measured with the instrument CCD without reaching saturation, rather than by mirror size and shutter time. The telescope time requested to reach an ultra-high S/N is in part determined by overheads for CCD readout. The standard readout mode of the FORS CCD has a conversion factor from e− to ADU of 1.25 and a 2×2 binning readout mode. After rebinning, each pixel corresponds to 0.25″. Therefore, for 1″ seeing, the 2^16 − 1 maximum ADU count set by the ADC converter limits the S/N achievable with each frame to ∼ 1000-1400 (neglecting background noise and taking into account that the incoming radiation is split into two beams). To increase the efficiency, we requested the use of a non-standard 1×1 readout mode for our observing run. This way, the pixel size was reduced to 0.125″, and with the same conversion factor from e− to ADU of 1.25, we could expect to reach a S/N per frame of ∼ 2000-2800. We also requested special sky flat fields obtained with the same readout mode. While flat-fielding is not a necessary step for the polarimetry of bright objects, we found that it improved the quality of our results because it reduces the noise introduced by background subtraction. For consistency with previous FORS measurements of Centaurs and TNOs, our broadband linear polarization measurements were obtained in the R filter.
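The quoted S/N figures can be checked with a quick back-of-envelope calculation. The sketch below (our own, not from the paper) holds the peak pixel of a Gaussian PSF just below the 2^16 − 1 ADU ceiling, reads the conversion factor as 1.25 e− per ADU, and integrates over the seeing disk:

```python
import math

# Photon-noise-limited S/N of a single frame whose PSF peak pixel sits at
# the ADC saturation limit. 'seeing_fwhm' is the seeing FWHM in arcsec and
# 'pixel_arcsec' the (binned) pixel size; gain is assumed to be 1.25 e-/ADU.

def max_snr_per_frame(pixel_arcsec, seeing_fwhm=1.0, gain=1.25):
    peak_e = (2**16 - 1) * gain                       # e- in the peak pixel
    sigma = seeing_fwhm / (2 * math.sqrt(2 * math.log(2)))
    # volume of a 2-D Gaussian with that peak, in units of one pixel area
    total_e = peak_e * 2 * math.pi * sigma**2 / pixel_arcsec**2
    return math.sqrt(total_e)                         # photon-noise limit
```

With these assumptions, `max_snr_per_frame(0.25)` and `max_snr_per_frame(0.125)` fall inside the ∼ 1000-1400 and ∼ 2000-2800 ranges quoted in the text for the 2×2 and 1×1 readout modes, respectively.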
Aperture polarimetry
Fluxes were calculated for apertures up to a 30-pixel radius (= 3.75″) with one-pixel (= 0.125″) increments. Sky background was generally calculated in an annulus with inner and outer radii of 28 and 58 pixels (i.e. 4.5″ and 7.25″), respectively. Imaging aperture polarimetry was performed as explained in Bagnulo et al. (2011) by selecting the aperture at which the reduced Stokes parameters P Q = Q/I and P U = U/I converge to a well-defined value. Polarimetric measurements are reported adopting as a reference direction the perpendicular to the great circle passing through the object and the Sun. This way, P Q represents the flux perpendicular to the plane Sun-object-Earth (the scattering plane) minus the flux parallel to that plane, divided by the sum of these fluxes. For symmetry reasons, P U values are always expected to be zero, and inspecting their values allows us to perform an indirect quality check of the P Q values. This "growth-curve" method is illustrated in Fig. 1 for one individual case.

Table 1. Polarimetry and photometry of six Jupiter Trojan asteroids in the special R FORS filter. P Q and P U are the reduced Stokes parameters measured in a reference system such that P Q is the flux perpendicular to the plane Sun-Object-Earth (the scattering plane) minus the flux parallel to that plane, divided by the sum of the two fluxes. Null parameters N Q and N U are expected to be zero within error bars. m R is the observed magnitude in the R filter, and R(1, 1, α) is the magnitude as if the object was observed at geocentric and heliocentric distances = 1 au at phase angle α. Photometric error bars are estimated a priori = 0.05.
Figures 1, 9, and 10 contain a lot of information, and it is worthwhile commenting on them in detail. In the left-hand panel of Fig. 1, the blue empty circles show the P Q values measured as a function of the aperture used for the flux measurement, with their error bars calculated from photon noise and background subtraction using Eqs. (A3), (A4), and (A11) of Bagnulo et al. (2009). The P Q values are offset to the value adopted in Table 1. Ideally, for apertures slightly larger than the seeing, P Q should converge to a well-defined value, which should be adopted as the P Q measurement value in Table 1. In practice, Figs. 1 and 9 clearly show that P Q sometimes depends on the aperture in a complicated way, mainly due to the presence of background objects that enter the aperture where the flux is measured (see, e.g., the case of Eurybates observed on June 1 in Figs. 9 and 10). The values reported in Table 1 were selected through visual inspection of Fig. 9, as the value corresponding to the smallest aperture of a "plateau" of the growth curve rather than to its asymptotic value.
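The plateau-based aperture selection described above was done by visual inspection; an automated analogue (our own rule, not the paper's) could look like:

```python
# Pick the smallest aperture that starts a run of mutually consistent
# P_Q values (all within 1 sigma of the run mean), mimicking the
# "smallest aperture of a plateau" criterion described in the text.

def plateau_aperture(apertures, pq, sigma, run=4):
    """apertures, pq: parallel lists; sigma: typical P_Q error bar."""
    for i in range(len(pq) - run + 1):
        window = pq[i:i + run]
        mean = sum(window) / run
        if all(abs(v - mean) <= sigma for v in window):
            return apertures[i]
    return apertures[-1]   # no plateau found: fall back to largest aperture
```

A growth curve that settles after an initial drift would thus return the first aperture of the flat section rather than the asymptotic end of the curve.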
Lower in the figure, the empty red circles show the null parameters offset to the value adopted in Table 1, and offset by −0.5 % for display purpose. The solid circle shows the aperture adopted for the P Q measurement, and the corresponding N Q value is shown with a dotted line. In practice, the distances between the solid line at −0.5 % and the empty circles correspond to the null parameter values, and the distance between solid line and the dotted line shows the N Q value of Table 1. The physical significance of the null parameters is discussed in Sect. 2.2.1.
The black solid line shows the logarithm of the total flux expressed in arbitrary units. In this context it does not have any diagnostic meaning, but demonstrates that polarimetric measurements converge at lower aperture values than photometry and suggests that simple aperture polarimetry leads to results more robust than those of aperture photometry.
The right-hand panels of Figs. 1 and 10 refer to P U and N U and are organized in exactly the same way as the left-hand panels of Figs. 1 and 9, respectively. For quality-check purposes, the aperture of P U was selected to be identical to that of P Q (see Sect. 2.2.2).
Quality checks with the null parameters
The polarimetric measurements presented here were obtained using the so-called beam-swapping technique; i.e., Stokes parameters are obtained as the difference between two observations obtained at position angles of the retarder waveplate separated by 45°. This technique allows one to minimize spurious contributions due to the instrument. For instance, the reduced Stokes parameter P Q was obtained as

P Q = (1/2) [F(φ = 0°) − F(φ = 45°)],   (1)

where

F(φ) = (f∥ − f⊥)/(f∥ + f⊥),   (2)

φ is the position angle of the retarder waveplate, and f∥ (f⊥) is the flux measured in the parallel (perpendicular) beam of the retarder waveplate. The null parameter N Q (N U ) is defined as the half-difference between the P Q (P U ) values obtained from distinct pairs of observations:

N Q = (1/2) [P Q (φ = 0°, 45°) − P Q (φ = 90°, 135°)].   (3)

Bagnulo et al. (2009) have shown that, in the ideal case, the results of repeated measurements of the null parameters are expected to be scattered about zero according to a Gaussian distribution with the same σ as the P Q (P U ) error bar. (Of course, we do not expect a Gaussian distribution for the N Q values measured on the same frames but with different apertures, since these are not independent measurements.) The consistency of the null parameters with zero within the P Q error bars is therefore an indirect form of quality check. For instance, an N Q value inconsistent with zero could be due to the presence of a cosmic ray, or a background object or reflection in the aperture for some positions of the retarder waveplate. Such events would also affect the P Q measurement, therefore one has to be wary of P Q measurements that have high N Q values. Figures 1, 9, and 10 show that the null parameters are scattered about zero well within 2 σ.
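A minimal sketch of the beam-swapping combination just described, assuming the ideal modulation f∥(φ) = (I/2)(1 + P_Q cos 4φ + P_U sin 4φ) and its sign-flipped counterpart for f⊥ (function names and the two-pair combination are illustrative):

```python
import math

# Beam-swapping reduction sketch: P_Q comes from the waveplate pair
# (0, 45) deg, the null parameter N_Q from comparing that pair with
# the redundant pair (90, 135) deg.

def ratio(f_par, f_perp):
    return (f_par - f_perp) / (f_par + f_perp)

def pq_nq(fluxes):
    """fluxes: dict phi_deg -> (f_par, f_perp) for phi = 0, 45, 90, 135."""
    F = {phi: ratio(*fluxes[phi]) for phi in (0, 45, 90, 135)}
    pq_pair1 = 0.5 * (F[0] - F[45])
    pq_pair2 = 0.5 * (F[90] - F[135])
    pq = 0.5 * (pq_pair1 + pq_pair2)   # combined P_Q estimate
    nq = 0.5 * (pq_pair1 - pq_pair2)   # null parameter, ~0 for ideal data
    return pq, nq
```

Feeding in fluxes simulated with a known polarization recovers that polarization exactly, with a null parameter at machine precision, which is the behaviour the quality check relies on.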
Quality checks with P U
If the target is macroscopically symmetric about the scattering plane, P U is expected to be zero. An individual P U measurement that significantly deviates from zero means either that there is a problem with the measurement (similar to what is discussed for the null parameter) or that the object is not symmetric about the plane identified by the object, the Sun, and the observer. When considering a large sample, all P U measurements of different objects should be scattered around zero. The null parameters should be scattered around zero, and the null parameters normalized by their error bar should be fit by a Gaussian with σ = 1 centred on zero. Inspection of the distribution of P U parameters initially showed a systematic offset by ∼ −1 σ. Another way to see exactly the same effect is to calculate the average polarization position angle measured from the perpendicular to the scattering plane: we found 90.4 • instead of 90.0 • . This 0.4 • rotation offset can be easily explained by an imperfect alignment of the polarimetric optics and by an imperfect estimate of the chromatism of the retarder waveplate. To compensate for the waveplate chromatism in the R Bessel filter, we had originally adopted the rotation suggested by the FORS user manual of −1.2 • . After inspecting the P U values, we instead decided to adopt a rotation by −0.8 • . Figure 2 shows the histograms of the P U , N Q , and N U values normalized to their error bars. The marginal deviations from the expected Gaussian distribution do not look systematic and may only be ascribed to the sample still being relatively small statistically.
Aperture photometry
The importance of acquiring simultaneous photometry and polarimetry has probably been underestimated in the past. Modelling attempts need both pieces of information, which are only available for a handful of asteroids. However, at least with certain instrument configurations, photometry may be a by-product of polarimetric measurements. In the case of the FORS instrument, an acquisition image is always obtained prior to inserting the polarimetric optics. This can be used to estimate the absolute brightness of the target, if the observing night is photometric. (In fact, even if this is not the case, one could in principle observe the same field again during a photometric night and calibrate the previous observations.) Therefore we performed aperture photometry on our acquisition images, and then calculated

R(r = 1 au, ∆ = 1 au, α) = m R − 5 log10(r ∆),

where r and ∆ are the heliocentric and geocentric distances, respectively, and m R is obtained from the instrument magnitude m (instr) as

m R = m (instr) + ZP R − k R X + k VR (V − R),

where ZP R and k R are the zero point and the extinction coefficient in the R filter, k VR is the coefficient of the (V − R) colour index, all tabulated in the FORS2 QC1 database, and X is the airmass. Aperture photometry can also be performed on the images obtained with the polarimetric optics inserted, if these are calibrated. From a comparison between photometry obtained from the acquisition images and photometry obtained from the polarimetric images (obtained by adding f∥ and f⊥), we estimated that the zero points of the frames obtained in polarimetric mode with the R special filter can be obtained by subtracting 0.31 from the zero points obtained in imaging mode.
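The photometric reduction described above can be sketched as follows; the sign convention adopted for the extinction and colour terms is our assumption, since the text only names the coefficients:

```python
import math

# Instrumental magnitude -> apparent R magnitude -> reduced magnitude
# R(1, 1, alpha). ZP_R, k_R and k_VR play the roles of the FORS2 QC1
# zero point, extinction coefficient and colour-term coefficient.

def calibrated_r(m_instr, zp_r, k_r, airmass, k_vr=0.0, v_minus_r=0.0):
    """Assumed calibration: m_R = m_instr + ZP_R - k_R*X + k_VR*(V-R)."""
    return m_instr + zp_r - k_r * airmass + k_vr * v_minus_r

def reduced_magnitude(m_r, r_au, delta_au):
    """R(1, 1, alpha): magnitude scaled to r = Delta = 1 au."""
    return m_r - 5 * math.log10(r_au * delta_au)
```

Scaling to unit heliocentric and geocentric distances removes the purely geometric brightness variation, leaving only the phase-angle dependence.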
Both ZP_R and k_VR are night dependent (their typical values are ∼28.28 and 0.01, respectively). Based on the night-to-night variations, we a priori assigned errors of 0.05 and 0.0005 to the zero point and to the colour term, respectively. ESO classifies each night as S (stable), U (unknown), or N (non-stable). Unfortunately, only three out of our 20 observing series were obtained during stable nights. The reason is that, to maximize the chances that our observations would be performed during the desired time windows, we set only loose constraints on sky transparency. However, since we obtained several frames during an extended period of time (typically 30-60 min), it is still possible to roughly evaluate the stability of the atmospheric conditions at the time of our observations. We also note that the Line of Sight Sky Absorption Monitor (LOSSAM, available online through the ESO web site) shows that most of the observing nights were actually clear.
FORS acquisition images have a hard-coded 2×2 pixel readout mode. Aperture photometry was calculated with apertures of up to 15 pixels, and the background was calculated in an annulus with inner and outer radii of 20 and 30 pixels, respectively (corresponding to 5″ and 7.5″). The results of our photometric measurements are also reported in Table 1, and Fig. 11 shows the magnitude measured in each observing series.
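A toy numpy version of this aperture-plus-annulus measurement on a synthetic frame (a plain reimplementation for illustration, not the actual reduction code):

```python
import numpy as np

def aperture_sum(img, cx, cy, r_ap, r_in, r_out):
    """Sum counts inside a circular aperture, subtracting the median
    background estimated in an annulus (r_in, r_out)."""
    y, x = np.indices(img.shape)
    d = np.hypot(x - cx, y - cy)
    ap = d <= r_ap
    ann = (d >= r_in) & (d <= r_out)
    bkg = np.median(img[ann])
    return img[ap].sum() - bkg * ap.sum()

# Synthetic test frame: flat background of 10 counts plus a 500-count
# point source spread over a 3x3 box.
img = np.full((64, 64), 10.0)
img[31:34, 31:34] += 500.0 / 9.0
flux = aperture_sum(img, 32, 32, 15, 20, 30)
```

With the aperture and annulus radii used in the text (15, 20, and 30 pixels), the injected flux is recovered exactly because the background is flat; on real frames the choice of annulus drives the residual background error.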
Searching for coma activity
After background subtraction, all polarimetric frames of each observing series were coadded (combining the images split by the Wollaston prism and obtained at different positions of the retarder waveplate). The resulting frames were analysed as explained in Sect. 3.2 of Bagnulo et al. (2010) to check for the presence of coma activity. Briefly, we assumed that the number of detected electrons e− of the object per unit of time within a circular aperture of radius a is the sum of the contribution of the nucleus, the potential contribution of a coma, and, possibly, a spurious contribution due to imperfect background subtraction. To check for the presence of a coma, it is probably sufficient to compare the point-spread function (PSF) of the main target with those of the background stars. However, if we are interested in a more quantitative estimate (e.g. an upper limit), following A'Hearn et al. (1984), we can assume that the flux of a weak coma around the nucleus in a certain wavelength band can be written as F_C = A f (ρ / (2 r ∆))² F_⊙, where A is the Bond albedo (unitless), f the filling factor (unitless), r the heliocentric distance expressed in au, ∆ the geocentric distance, and ρ the projected distance from the nucleus (corresponding to the aperture), with ∆ and ρ expressed in the same length units. Here F_⊙ is the solar flux at 1 au, integrated in the same band as F_C and convolved with the filter transmission curve. Following the approach of Tozzi & Licandro (2002), Bagnulo et al. (2010) have shown that if the derivative of the flux with respect to the aperture converges to a constant value k^(C), then A f ρ follows from their Eq. (5), where m_⊙ is the apparent magnitude of the Sun (i.e., at 1 au) in the considered filter, ZP_m is the zero point in that filter for the observing night, and d_p the CCD pixel scale in arcsec (0.125″ in our case). In Eq. (5), r and ∆ are measured in au and k^(C) in e− per pixel, and A f ρ is obtained in cm. We found that in all cases, A f ρ is consistent with zero within a typical error bar of ∼10 cm (see Fig. 3).
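As a sketch, the canonical A'Hearn et al. (1984) definition, Afρ = (2 r ∆)² F_C / (ρ F_⊙) with ∆ and ρ in cm and r in au, can be evaluated as follows; the input flux ratio and geometry are made-up numbers, not values from our dataset:

```python
AU_CM = 1.496e13  # 1 au in cm

def afrho_cm(flux_ratio, r_au, delta_au, rho_cm):
    """A f rho (in cm) from the canonical A'Hearn et al. (1984) relation:
    Afrho = (2 r Delta)^2 / rho * F_coma / F_sun,
    with Delta and rho in cm and r in au."""
    delta_cm = delta_au * AU_CM
    return (2.0 * r_au * delta_cm) ** 2 / rho_cm * flux_ratio

# Hypothetical weak coma: flux ratio 1e-19 within a 1e9 cm aperture,
# at r = 5 au and Delta = 4 au.
val = afrho_cm(1e-19, 5.0, 4.0, 1e9)
```

With these illustrative inputs the result is of order a few tens of cm, i.e. the same order as the ∼10 cm upper limits quoted above.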
We conclude that there is no evidence of any coma activity.
Discussion
All our polarimetric and photometric measurements are reported in Table 1. In the following we first discuss the differences found among our sample, searching for a correlation between polarimetric properties and other characteristics of our objects, then we consider Trojans as a homogeneous class to compare with other atmosphere-less objects of the solar system. Orbital constraints meant that all six Trojan asteroids were observed in the negative branch, i.e. at those phase angles where we expect the polarization of the reflected light to be parallel to the scattering plane. We measured polarization values from −1.3% to −0.9% in a phase-angle range 7°−12°. The variations in polarization within the observed phase-angle range are small for all objects, yet, thanks to the high S/N of our observations, it is possible to distinguish some different behaviours. Figure 4 shows the results of our polarimetric observations as a function of the phase angle.
Several functions have been proposed to fit polarimetric measurements versus phase angle. One of the most popular is the one proposed by Lumme & Muinonen (1993):

P(α) = b (sin α)^c1 (cos(α/2))^c2 sin(α − α0),   (6)

where b is a parameter in the range [0, 1], α0 is the inversion angle (typically ≲ 30°), and c1 and c2 are positive constants. Equation (6) was used, for example, by Penttilä et al. (2005) for a statistical study of asteroids and comets. The number of our data points per object is smaller than the number of free parameters, therefore it does not make sense to fit our data without making assumptions (such as about the inversion angle of the polarimetric curves). However, assuming that the minimum of the polarization is reached in the phase-angle range 6°-12° (a typical range for low-albedo objects would be 8°-10°), even a simple visual inspection allows us to estimate the polarization minima of the various objects and, in particular, to conclude that our sample does not show homogeneous polarimetric behaviour. The object (3548) Eurybates is the largest member of a dynamical family mainly consisting of C-type objects (Fornasier et al. 2007). It has the deepest minimum (P_min ∼ −1.3%). All the remaining objects belong to the D-type taxonomic class (Grav et al. 2012). The objects (588) Achilles and (4543) Phoinix exhibit a shallower polarization curve (i.e., lower absolute values of the polarization) than the other four Trojans. Figure 4 also suggests that the minimum of the polarization curve of (588) Achilles is reached at a phase-angle value lower than that of (4543) Phoinix. The objects (1583) Antilochus, (6545) 1986 TR6, and (21601) 1998 XO89 all seem to have similar polarimetric behaviour, with a minimum of ∼ −1.2%.
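For reference, the Lumme-Muinonen form can be evaluated numerically; the parameters below are illustrative, not fitted to our data:

```python
import math

def lumme_muinonen(alpha_deg, b, c1, c2, alpha0_deg):
    """P(alpha) = b * sin(alpha)^c1 * cos(alpha/2)^c2 * sin(alpha - alpha0)."""
    a = math.radians(alpha_deg)
    a0 = math.radians(alpha0_deg)
    return b * math.sin(a) ** c1 * math.cos(a / 2.0) ** c2 * math.sin(a - a0)

# Illustrative parameters with an assumed inversion angle of 20 deg:
curve = [lumme_muinonen(a, 0.1, 0.5, 0.5, 20.0) for a in range(1, 30)]
```

By construction the function is negative below the inversion angle α0, crosses zero at α0, and is positive beyond it, which is why fixing α0 is the natural assumption when only a few negative-branch points are available.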
Before progressing in our analysis, it is important to discuss whether the observed diversities are real. This question arises since our photon-error bars are very small (a few units in 10 −4 ), and compared to them, instrumental or systematic errors may not be negligible. However, the relatively smooth behaviour with phase angle and the good consistency with zero of both the P U and the null parameters suggest that our photon-noise error bars are probably representative of the real error. Exceptions to the smooth behaviour are represented by the point at phase angle 10.1 • of asteroid (6545) 1986 TR6 (obtained on April 25 2013) and the point at phase angle 8.4 • of asteroid (3548) Eurybates (obtained on April 19 2013). Figures 9 and 10 show that, in the former case, polarimetric measurements depend strongly on the aperture and fail to converge to a well-defined value, probably due to the strong background, therefore the observed discrepancy (still within the error bar) is due to a larger error than what is typical in our dataset. The case of asteroid (3548) Eurybates is more puzzling. There is nothing in Fig. 9 that suggests a problem with aperture polarimetry in any of the observations, therefore one may hypothesize that the abrupt change observed between the point at phase 8.1 • and the point at phase angle 8.4 • is due to asteroid rotation.
The rotation periods of the observed Trojans range from 7.306 h for (588) Achilles to 38.866 h for (4543) Phoinix, and their lightcurve amplitudes are ≲ 0.3 mag. While during an observing series we do not expect short-term photometric variations caused by asteroid rotation, it is possible that the polarimetric data depend on the rotation phase at which the observations were obtained.
Polarimetric behaviour may depend on the rotational phase of the observations. Although rarely observed, one notable example is that of asteroid (4) Vesta, with a rotational polarimetric amplitude of ∼0.03% (Wiktorowicz & Nofi 2015) to 0.1% (Lupishko et al. 1988). To test whether our polarimetric data are rotationally modulated, we calculated the rotation phase shift between observations of each object and found, for instance, that a large shift (0.36) occurs between the observations at phase angles 6.8° and 7.8° of asteroid (21601) 1998 XO89. However, the polarization values at these phase angles are consistent with each other, and overall, the polarization curve is relatively smooth. By contrast, the rotation phase shift between phase angles 8.2° and 8.4° of asteroid (3548) Eurybates is only 0.1 of a rotation period. These differences may, therefore, not be due to rotation but instead to photon-noise fluctuations or to small changes in the (already small) instrumental polarization. We conclude that there is no obvious evidence of a polarimetric modulation introduced by asteroid rotation in our data. On the other hand, our sample shows polarimetric behaviour that is not perfectly homogeneous, which must reflect some difference in surface structure and/or albedo.
Proper orbital elements
No strong correlations have been identified yet between the physical and orbital properties of Trojans, although there appears to be a bimodality in spectral slopes (Szabó et al. 2007; Roig et al. 2008). In confirming a similar bimodality within a sample of near-IR spectra of Trojans, Emery et al. (2011) point out a possible weak correlation with inclination amongst their less-red population. We have searched for trends between polarimetric behaviour and orbital properties in our sample. Figure 5 shows the proper orbital elements 1 of Jupiter Trojans, with the six objects observed in this work denoted by red points and identified by their number. All the objects, except (588) Achilles, have low (≲ 0.05) proper eccentricity and high (> 15°) libration amplitude. We note that the two objects with a shallow polarization curve, (588) Achilles and (4543) Phoinix, also have the highest proper eccentricities in our sample. However, since the eccentricity of (4543) Phoinix (0.059) is only marginally higher than those of (3548) Eurybates and (21601) 1998 XO89 (0.044 and 0.053, respectively), and given the small size of our sample, we do not attach high statistical significance to this observation. Finally, there appears to be no trend linking polarization behaviour with inclination; both (588) Achilles and (4543) Phoinix have inclinations within 1σ of the mean of the four other objects.
Albedo
It is well known that the minimum of the polarization is inversely correlated with the albedo; i.e., the higher the absolute value of the minimum, the lower the albedo (e.g. Zellner et al. 1977a; Cellino et al. 2015). In fact, various authors have tried to calibrate a relationship of the form

log p = C_1 log P_min + C_2   (7)

to estimate the albedo p from polarimetric observations. For instance, Lupishko & Mohamed (1996) give C_1 = −1.22 and C_2 = −0.92; Cellino et al. (2015) give C_1 = −1.426 ± 0.034 and C_2 = −0.917 ± 0.006. However, it is known that Eq. (7) is only an approximation that does not necessarily produce accurate albedo estimates (e.g. Cellino et al. 2015). In particular, a saturation effect may occur for the darkest objects, which was discovered in the laboratory for very dark surfaces (Zellner et al. 1977b; Shkuratov et al. 1992): the depth of negative polarization increases as the albedo decreases down to ∼0.05, but a further decrease of the albedo results in a decrease in the absolute value of the polarization minimum. This effect was observed for the very dark F-type asteroids by Belskaya et al. (2005) (see also Cellino et al. 2015). In the case of the observed Trojans, the albedos estimated from Eq. (7) and from our polarimetric minima are of the order of 0.08-0.12, which is inconsistent with independent estimates of the albedo. We conclude that the polarimetric measurements of our Trojans are also in the regime of 'saturation', similar to what was observed for F-type asteroids.
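Applying Eq. (7) with the two published calibrations to a typical Trojan minimum of 1.2% reproduces the 0.08-0.12 range quoted above:

```python
import math

def albedo_from_pmin(pmin_percent, c1, c2):
    """log10(p) = C1 * log10(Pmin) + C2, with Pmin a positive percentage."""
    return 10.0 ** (c1 * math.log10(pmin_percent) + c2)

# Two published calibrations applied to a typical Trojan minimum of 1.2 %:
p_lupishko = albedo_from_pmin(1.2, -1.22, -0.92)   # Lupishko & Mohamed (1996)
p_cellino = albedo_from_pmin(1.2, -1.426, -0.917)  # Cellino et al. (2015)
```

Both calibrations give p ≈ 0.09, well above the mid-IR albedo estimates for these objects, which is the inconsistency attributed to saturation.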
In fact, the albedo estimates from the WISE (Grav et al. 2012) and AKARI (Usui et al. 2011) mid-IR surveys lead to contradictory conclusions. For instance, according to AKARI data (Col. 10 of Table 2), (588) Achilles and (4543) Phoinix (which show the shallowest polarization minima) are actually the darkest objects. This finding is somewhat contradicted by the WISE albedos (Col. 9), according to which (588) Achilles would still be the darkest object in our sample, but (4543) Phoinix would have an albedo higher than those of (1583) Antilochus and (3548) Eurybates. Albedo estimates strongly depend on the values of the absolute magnitudes 2 adopted in the surveys (see Cols. 6 and 7). It is therefore of some interest to recalculate them using our photometric measurements in Table 1.

Footnote 1: Proper elements are constants that parameterize the evolution of the osculating elements, the latter varying with time due to planetary perturbations (Milani, CeMDa, 1993).
Footnote 2: The absolute magnitude H is the magnitude that would be measured in the V filter if the asteroid were observed at geocentric and heliocentric distances of 1 au and phase angle α = 0.

To calculate the absolute magnitudes, we need to know the magnitude-phase dependences of our targets. Shevchenko et al. (2012) have shown that D-type Trojans are characterized by a linear magnitude-phase dependence down to small phase angles, without an opposition effect, i.e., a linear fit gives a more precise estimate of the absolute magnitudes of the D- and P-type Trojans than what can be estimated with the so-called HG function. For the D-type asteroids, we therefore performed a linear extrapolation to zero phase angle assuming a 0.04 mag/deg slope, which is typical of these objects. For the C-type (3548) Eurybates, we assumed a non-linear magnitude-phase dependence similar to that of C-type asteroids (Belskaya & Shevchenko 2000).
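The linear extrapolation for the D-type targets amounts to the following one-liner; the reduced magnitude, phase angle, and V − R colour used here are illustrative, not values from Table 1:

```python
def absolute_mag_linear(reduced_mag_r, alpha_deg, slope=0.04, v_minus_r=0.45):
    """Linearly extrapolate the reduced R magnitude to zero phase angle
    (no opposition effect, as found for D-type Trojans), then convert
    to V using an adopted V-R colour."""
    h_r = reduced_mag_r - slope * alpha_deg
    return h_r + v_minus_r

# Hypothetical reduced magnitude of 12.30 measured at a 10 deg phase angle:
h_v = absolute_mag_linear(12.30, 10.0)
```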
To calculate the absolute magnitudes H in the V band, we adopted the literature V − R colours of these objects, when available, or assumed V − R = 0.45 (see Fornasier et al. 2007). Our estimates of the absolute magnitudes H are shown in Col. 8 of Table 2. Although our photometric measurements agree with the measurements of Cols. 6 and 7, they exhibit a systematic negative offset, which may be consistent with the findings by Pravec et al. (2012) of a systematic bias in the absolute magnitudes of asteroids given in the orbital catalogues. Using our revised absolute magnitudes and the diameters from the WISE and AKARI surveys, we recalculated the albedos of our objects. Our new albedo estimates (Cols. 11 and 12 of Table 2) are no longer as scattered as the original estimates from Usui et al. (2011) and Grav et al. (2012), but actually very similar for all five D-type Trojans. The relationship between P_min and albedo based on the updated albedos is plotted in Fig. 6. The data for Trojans lie within the range of the low-albedo asteroids, and the saturation effect for low-albedo asteroids is fairly evident.
If our new estimates of the albedos are correct, then the differences observed between (4543) Phoinix, (588) Achilles, and the group of three asteroids (1583) Antilochus, (6545) 1986 TR6, and (21601) 1998 XO89 may just reflect a difference in surface structure that could also be revealed, e.g., by a difference in the reflectance spectra. The spectral properties of (4543) Phoinix have not been measured, and (588) Achilles is classified as an unusual D type (DU) by Tholen (1989). The remaining three asteroids are D type. We therefore expect that the taxonomic class of (4543) Phoinix may also differ from the typical D type.

Fig. 7. Depth of the polarization minimum P_min versus the phase angle α_min where the minimum occurs, for asteroids (black), Trojans (red), and Centaurs (blue points).
Comparison with other atmosphere-less objects in the solar system
We have compared the polarimetric properties of Trojans to the literature data on TNOs (Bagnulo et al. 2008, and references therein), Centaurs (Belskaya et al. 2010, and references therein), and low-albedo asteroids (see database compiled by Lupishko and available at http://sbn.psi.edu/pds/resource/apd.html).
The mean values of the polarimetric parameters P_min, α_min, and α_inv and their scatter are given in Table 3.
The polarimetric properties of TNOs and Centaurs are not characterized as well as those of main-belt asteroids. Because of their distance, Centaurs can only be observed in a limited phase-angle range (∼0−5°, compared to ∼0−30° for main-belt objects). However, there are indications that the polarimetric curves of Centaurs reach a minimum at very small phase angles (as small as ∼1.5° for Centaur Chiron, see Bagnulo et al. 2006; Belskaya et al. 2010). This feature was interpreted by Belskaya et al. (2010) as indicative of a thin frost layer of submicron water-ice crystals on their dark surfaces. For both Trojans and Centaurs, we can only estimate a lower limit on the inversion angles, since geometrical constraints make their direct measurement impossible from Earth-based observations. For TNOs, which, with the exception of the binary system Pluto-Charon, are visible from Earth only at phase angles ≲ 2°, we cannot even estimate the parameters P_min and α_min.

Table 3. Polarimetric properties of some atmosphere-less objects. The number of objects for which polarization minima (P_min at α_min) and inversion angles (α_inv) were measured is indicated by N_min and N_inv, respectively.

Figure 7 shows the relationship between the polarization minimum and the phase angle where the minimum occurs for Trojans, Centaurs, and main-belt asteroids. The polarization-phase angle behaviour of the observed Trojans is very similar to that of low-albedo asteroids, in particular the P-type asteroids, and quite different from those Centaurs for which polarimetric measurements have been obtained, in spite of the closer proximity of Trojans to the latter group of objects. Figure 8 shows the mean polarization-phase curves for the P-, F-, G- and C-type asteroids, and demonstrates that the data for the P-type asteroids and D-type Trojans are practically indistinguishable.
Compared to the F-type asteroids, the polarization minima of Trojans occur at larger phase angles, which suggests that their inversion angles are also larger. Fornasier et al. (2006) obtained a polarimetric measurement of the D-type object (944) Hidalgo at a large phase angle. (944) Hidalgo has an unusual orbit with a semi-major axis of 5.74 au and an eccentricity of 0.66; it reaches 1.94 au at perihelion, giving the opportunity to observe it over a much larger phase-angle range than other D types. Its polarization measurement at α = 26.8° lies exactly on the fitted phase curve for P-type asteroids and confirms the similarity of the polarization properties of D- and P-type asteroids within the accuracy of polarimetric measurements.
Conclusions
We performed a pilot study of the polarization properties of Jupiter Trojan asteroids and obtained measurements for six objects belonging to the L4 population. Comparing our targets, we found that they show similar but not identical polarization properties; in particular, there are at least two distinct polarimetric behaviours. Trojans (588) Achilles and (4543) Phoinix show a shallower polarization curve than the remaining four Trojans (1583) Antilochus, (3548) Eurybates, (6545) 1986 TR6, and (21601) 1998 XO89. The C-type Trojan (3548) Eurybates shows the deepest minimum of polarization. The D-type Trojans (1583) Antilochus, (6545) 1986 TR6, and (21601) 1998 XO89 all have a minimum around −1.2%, but overall their polarimetric behaviour does not appear very different from that of (3548) Eurybates. Considering all objects together, we found that the minimum of the polarization is reached at a phase angle of ∼10° and lies in the range −1.3% to −1.0%. This polarimetric behaviour is very similar to that of low-albedo main-belt asteroids, but different from that of Centaurs, which seem to show polarization minima at much smaller phase angles.
A&A-BBLP, Online Material p 10

Fig. 9. Aperture polarimetry: P_Q and N_Q parameters as a function of the aperture for the various observing series. P_Q parameters, represented by blue empty circles, are offset to the values corresponding to the aperture adopted for the measurement and reported in Table 1. This point is highlighted with solid circles and dotted lines. N_Q parameters, represented by red empty circles, are offset by −0.5% for display purposes. Again, the adopted values are highlighted with solid symbols and dotted lines. Each panel of this figure is similar to the left panel of Fig. 1 and is explained in more detail in Sect. 2.2.

Fig. 10. Aperture polarimetry: P_U and N_U parameters as a function of the aperture for the various observing series. P_U parameters, represented by blue empty circles, are offset to the value corresponding to the aperture adopted for the measurement and reported in Table 1. This point is highlighted with solid circles and dotted lines. N_U parameters are represented by red empty circles and are offset by −0.5% for display purposes. Again, the adopted values are highlighted with solid symbols and dotted lines. Each panel of this figure is similar to the right panel of Fig. 1 and is explained in more detail in Sect. 2.2.

Fig. 11.
Photometric measurements. The blue solid circles represent the photometry measured in the acquisition images, and the red empty circles represent the photometry measured from the polarimetric images. The green lines show the final value adopted for the time series (a dotted green line represents an upper limit). In each panel, in parentheses we report the ESO QC1 classification of the night (U = unknown, N = non-stable, S = stable) followed by the sky conditions as we estimated them after inspection of the available LOSSAM plots (c = clear, t = thin to thick). The LOSSAM archive is available online through the ESO website. This figure is discussed in Sect. 2.3.
Advanced Rolling Bearing Fault Diagnosis Using Ensemble Empirical Mode Decomposition, Principal Component Analysis and Probabilistic Neural Network
Aiming at the problem that the vibration signal of an incipient fault is weak, an automatic and intelligent fault diagnosis algorithm combining ensemble empirical mode decomposition (EEMD), principal component analysis (PCA) and probabilistic neural network (PNN) is proposed for rolling bearings in this paper. EEMD is applied to decompose the vibration signal into a sum of several intrinsic mode function components (IMFs), which represent the signal characteristics at different scales. The energy, kurtosis and skewness of the first few IMFs are extracted as fault feature indices. PCA is employed on the fault features as a linear transform for dimension reduction and elimination of linear dependence between the fault features. PNN is applied to detect the occurrence of a rolling bearing fault and recognize its type. Experiments show that this method achieves higher fault diagnosis accuracy.
Introduction
Rolling bearings are common and vulnerable parts in rotating machinery. According to statistics, 30% of rotating machinery faults are caused by bearing failure, so the condition of the rolling bearing is closely related to the operation of the machinery, and it is of great importance to detect and diagnose rolling bearing faults. 1 At present, many scholars are studying the fault diagnosis of rolling bearings. Ref. 1 uses a method combining EMD decomposition with the singular-value difference spectrum. Ref. 2 uses fast independent component analysis to extract the fault features, but it does not extract the fault information in depth, and Ref. 3 uses a wavelet packet de-noising method combined with LMD to extract fault information. In Ref. 4, the discrete wavelet transform is proposed to extract the features, but the adaptability of wavelet analysis is not as strong as that of EMD and EEMD when dealing with non-stationary signals. It can be seen from the above references that how to better extract the fault features from the non-stationary and noisy bearing signal is the key to fault diagnosis of rolling bearings.
In the operation of rotating machinery, the vibration signal usually has non-stationary and nonlinear characteristics, so it is difficult to obtain good feature extraction results using methods based on the traditional Fourier transform. Moreover, the collected signal is accompanied by a certain degree of noise. Therefore, EEMD is applied to decompose the vibration signal into a sum of several intrinsic mode function components (IMFs), which represent the signal characteristics at different scales. The energy, kurtosis and skewness of the first few IMFs are extracted as fault feature indices. Then PCA is employed on the fault features as a linear transform for dimension reduction and elimination of linear dependence between the fault features. Finally, PNN is applied to detect the occurrence of a rolling bearing fault and recognize its type. The generalized algorithm for rolling bearing diagnosis is shown in Fig. 1.
Fault Feature Extraction Using EEMD
Empirical Mode Decomposition (EMD) is a new method for analyzing non-linear and non-stationary time series proposed by Huang N.E. in 1998.
The principle of EMD
EMD is a self-adapting time-frequency analysis method. It differs from wavelet analysis in that it requires no basis function system: when analysing a series, the original sequence is decomposed into a finite number of intrinsic mode functions rather than projected onto fixed basis functions, and each component is unique and reflects the information of the signal at a particular time scale. The decomposed components are stationary signals; in essence, EMD is a method to turn non-stationary signals into stationary ones. The main calculation procedure is given in Ref. 1. EMD is used to decompose the bearing signal; the first several high-frequency IMF components are retained, and the last several, which are generally low-frequency and noise-dominated, are discarded. 1
The principle of EEMD
The EEMD algorithm is a new signal decomposition method based on the traditional EMD algorithm. In order to overcome the mode-mixing phenomenon caused by abnormal events (such as pulse interference) in traditional EMD decomposition, the ensemble empirical mode decomposition (EEMD) algorithm was proposed. In the traditional EMD decomposition of a signal, mode mixing is related to the distribution of the signal's extrema: if the intervals between the extreme points of the signal are not uniform, fitting errors in the upper and lower envelopes occur, resulting in mode mixing. By adding Gaussian white noise of different amplitudes to the signal to change the characteristics of its extrema, and then cancelling the white noise through ensemble averaging of the IMFs, the EEMD algorithm obtains accurate IMFs and eliminates the noise, which better suppresses the mixing between abnormal-event patterns and the signal's inherent oscillation modes and better highlights the real signal characteristics.
The specific decomposition process consists of the following steps: (i) Add white noise of a given amplitude to the analysed signal X(t) and normalize the signal. (ii) Decompose the noise-added signal with EMD.
(iii) Repeat the above two steps N times, adding a different random white noise sequence each time.
(iv) Average the corresponding IMFs over the ensemble to cancel the effect of the multiple Gaussian white noise realizations on the real IMFs. The final IMF components and the residue of the EEMD decomposition are thereby obtained.
EEMD is used to decompose the bearing signal; the first several high-frequency IMF components are retained, and the last several, which are generally low-frequency and noise-dominated, are discarded [1].
Multi-feature Extraction Using EEMD
Journal of Robotics, Networking and Artificial Life, Vol. 5, No. 1 (June 2018) 10-14

The method of extracting characteristic parameters of the signal using multiple characteristic parameters improves on the intrinsic mode energy method. Since the single energy method computes only the overall energy of each IMF component, it may lose some useful fault information in the extracted feature signal. Therefore, this paper adds two feature quantities, namely the kurtosis and skewness of the signal. 5
The kurtosis index is a dimensionless parameter that bears no relationship to the speed, size or load of the bearing and is particularly sensitive to impulsive signals, so it is well suited to detecting surface pitting damage. The kurtosis index of a normal bearing is generally close to 3.
The skewness index reflects the asymmetry of the vibration signal, indicating the degree to which the centre of the signal's probability density function deviates from the standard normal distribution, i.e. the asymmetry of the signal amplitude distribution with respect to its mean. Except for machinery with quick-return characteristics, friction or collision in a particular direction causes asymmetry of the vibration waveform, so the skewness index increases.
They are calculated as follows: for a signal x with mean μ and standard deviation σ, the kurtosis is K = E[(x − μ)⁴]/σ⁴ and the skewness is S = E[(x − μ)³]/σ³, where E[·] denotes the expectation. A sum of several intrinsic mode function components of the fault vibration signal is obtained after EEMD decomposition, with frequencies arranged from high to low. The first few IMFs, which are generally high-frequency, are employed for feature extraction, while the last several, which are generally low-frequency and noise-dominated, are discarded. If the first n IMF components are taken after EEMD decomposition, the dimension of the final extracted feature index is 3n.
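The three indices per IMF can be computed directly from their definitions (energy as the sum of squares, kurtosis and skewness as the normalized fourth and third central moments):

```python
import numpy as np

def imf_features(imf):
    """Energy, kurtosis, and skewness of one IMF component."""
    mu = imf.mean()
    sigma = imf.std()
    energy = np.sum(imf ** 2)
    kurtosis = np.mean((imf - mu) ** 4) / sigma ** 4
    skewness = np.mean((imf - mu) ** 3) / sigma ** 3
    return energy, kurtosis, skewness

def feature_vector(imfs):
    """Concatenate the three indices of the first n IMFs -> 3n features."""
    return np.concatenate([imf_features(imf) for imf in imfs])

# Simple check signal: alternating +/-1 has zero mean, unit variance.
x = np.array([1.0, -1.0, 1.0, -1.0])
e, k, s = imf_features(x)
```

Note that this is the non-excess kurtosis, so a Gaussian signal gives a value close to 3, consistent with the normal-bearing baseline mentioned above; for n = 5 IMFs the feature vector has 15 dimensions.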
Reducing Feature Dimensions Using PCA
In modern production processes, many process variables are generally collected to monitor and control the process. Principal component analysis is a commonly used method in multivariate statistical analysis, and it differs fundamentally from the aforementioned Fourier-based, time-domain, and frequency-domain analysis methods. It processes data of multiple dimensions simultaneously, uncovers the hidden statistical characteristics, and effectively eliminates the correlation between different dimensions, so that a number of correlated variables are transformed into a few uncorrelated ones: a new feature index of lower dimension that retains most of the information contained in the original feature index. In addition, principal component analysis can eliminate the noise and redundancy in the fault data and improve accuracy. The main calculation procedure is given in Ref. 6.
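A minimal PCA reduction via the SVD, keeping the fewest components whose cumulative contribution rate reaches a target (85% in the text); the toy data below are illustrative:

```python
import numpy as np

def pca_reduce(features, target_ratio=0.85):
    """Project the feature matrix onto the fewest principal components
    whose cumulative explained-variance ratio reaches target_ratio."""
    x = features - features.mean(axis=0)
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    ratios = s ** 2 / np.sum(s ** 2)
    k = int(np.searchsorted(np.cumsum(ratios), target_ratio) + 1)
    return x @ vt[:k].T, ratios

# Strongly correlated 2-D data collapses onto one principal component.
data = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0], [4.0, 4.05]])
reduced, ratios = pca_reduce(data)
```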
Fault Type Recognition Using PNN
PNN was first proposed by Dr. Specht; it achieves the Bayes-optimal result when used for pattern classification. It has the following advantages: ① easy training with fast convergence; ② strong fault tolerance; ③ the number of network nodes is determined by the numbers of training samples and pattern classes.
The structure of the PNN network is divided into four layers, namely the input layer, the hidden (pattern) layer, the summation layer and the output layer. After the preprocessing described above, the data are fed into the PNN diagnostic model. The main calculation procedure is given in Ref. 7.
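A compact sketch of the four-layer PNN idea using a Gaussian (Parzen) kernel: the pattern layer evaluates one kernel per training sample, the summation layer averages the kernels per class, and the output layer picks the maximum. The smoothing parameter σ and the toy data are illustrative:

```python
import numpy as np

def pnn_classify(train_x, train_y, query, sigma=0.1):
    """Probabilistic neural network: Gaussian kernel per training sample
    (pattern layer), per-class average (summation layer), argmax (output)."""
    classes = np.unique(train_y)
    scores = []
    for c in classes:
        diffs = train_x[train_y == c] - query
        k = np.exp(-np.sum(diffs ** 2, axis=1) / (2.0 * sigma ** 2))
        scores.append(k.mean())
    return classes[int(np.argmax(scores))]

# Two well-separated toy classes.
train_x = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.0, 0.9]])
train_y = np.array([0, 0, 1, 1])
label = pnn_classify(train_x, train_y, np.array([0.95, 0.95]))
```

The per-class kernel average is a Parzen-window estimate of the class-conditional density, which is why the argmax approximates the Bayes decision rule as the training set grows.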
Description of the experiment
In order to verify the effectiveness of the diagnostic method presented in this paper, the data is taken from the Department of Electrical Engineering and Computer Science, Case Western Reserve University, USA (Ref. 8). The analyzed data was acquired with the motor running at 2 horsepower, and comprises inner-ring fault data, outer-ring fault data, rolling-element fault data and normal bearing data. These four kinds of data form the original data of this experiment. From the original data, 1024 consecutive samples are taken as one group, with 40 groups for each of the four states, giving 160 groups in total; for each state, 10 groups are randomly selected for training and the remaining 30 groups are used for testing.
Feature extraction and principal component analysis
In this experiment, EEMD is used to decompose the vibration signals described above, and the energy, kurtosis and skewness of each IMF component are then calculated. Next, PCA is used to reduce the dimension, and the result is finally input into the PNN for state classification. The classification flow chart is as follows. Five IMF components are obtained after EEMD for each group of data; calculating the energy, kurtosis and skewness of each IMF component then yields a 15-dimensional multi-feature data index.
Principal component analysis is then performed on the multi-feature data index, with the results shown in the table. In general, retaining principal components up to a cumulative contribution rate of 85% keeps most of the original signal information (Ref. 6). As can be seen from the table, at the 7th principal component the cumulative contribution rate reaches 95.8%, meeting the requirement of dimensionality reduction for the original data index.
Pattern recognition using PNN
This paper builds on the EEMD intrinsic-mode energy method. First, the energies of the first 5 IMFs are calculated and input into the PNN, giving a diagnostic accuracy of 85.33%. Adding the other two features to form a 15-dimensional feature index and feeding it to the PNN raises the diagnostic accuracy to 91%. It can be concluded that adding the two features increases the PNN's diagnostic accuracy.
Finally, the data processed by principal component analysis is input to the PNN for pattern recognition, varying the number of retained components. As can be seen from the figure, when the number of principal components reaches five, the accuracy reaches 95%, at which point the cumulative contribution rate is 88.9%. As the number of principal components increases further, the cumulative contribution rate increases while the accuracy fluctuates around 95%. Therefore, taking five principal components is the final choice, which ensures both accuracy and efficiency. The comparison of the three methods is shown in Table 2.
Tab. 2. Comparison of the diagnostic accuracy of the three methods.
It can be seen that the accuracy of the energy method alone is low, and that adding the two extra features improves it effectively. The kurtosis and skewness of the vibration signal thus compensate for the incompleteness of the energy feature with respect to fault information. Finally, applying PCA on top of the multi-feature method further improves the fault diagnosis accuracy, which shows that PCA can effectively remove noise and redundancy from the vibration signal.
Conclusion
(i) The proposed feature index, combining the energy, kurtosis and skewness of the first few IMFs, can recognize the fault type of a rolling bearing.
(ii) The feature index processed by principal component analysis gives a higher accuracy than the feature index without it, which indicates that principal component analysis can eliminate redundancy and noise in the information to a certain extent.
(iii) When principal component analysis is used to reduce the dimensionality of the data, the accuracy increases with the cumulative contribution rate, but stays within a certain range after reaching a certain limit.
Synthesis of Glycosides of Glucuronic, Galacturonic and Mannuronic Acids: An Overview
Uronic acids are carbohydrates present in relevant biologically active compounds. Most of the latter are glycosides or oligosaccharides linked by their anomeric carbon, so their synthesis requires glycoside-bond formation. The activation of this anomeric center remains difficult due to the presence of the electron-withdrawing C-5 carboxylic group. Herein we present an overview of glucuronidation, mannuronidation and galacturonidation reactions, including syntheses of prodrugs, oligosaccharides and stereochemical aspects.
Introduction
Uronic acids are reducing sugars of biological relevance. They are involved in the metabolism of many drugs and endogenous compounds, and they are found in natural products such as glycosaminoglycans, pectins and carrageenans, among others, isolated from different sources: mammals, plants and algae.
In uronidation reactions, uronic acids are attached to an aglycone through the anomeric carbon atom, forming O-glycosidic bonds. Although they share all common aspects with general glycosidations, which have been extensively reviewed [1,2], the synthesis of uronic acid glycosides is particularly challenging, because the presence of the C-5 carboxylic group decreases the reactivity at the anomeric position. Different methodologies have been investigated in order to overcome this drawback and to develop general strategies allowing the synthesis of uronic acid glycosides of biological importance, and of complex oligosaccharides, in a regio- and stereoselective manner.
Several methods are available for the synthesis of glycosides of glucuronic acids, and these fall into two broad categories: oxidation of the corresponding glucoside, or glycosidation of an activated glucuronic acid. This manuscript covers the second approach.
As glucuronidation was previously reviewed in 1998 [3,4], the present review summarizes the latest advances in glucuronic, galacturonic and mannuronic acid glycosylation methodologies, especially those involving the synthesis of metabolites of bioactive molecules and of oligosaccharides. As L-iduronic acid has been the subject of a great number of articles dealing with the synthesis of heparin sequences, its reactions are outside the scope of this review.
Synthesis of Metabolites
Many drugs have been conjugated to D-glucuronic acid (GlcA) in order to obtain the tools required for improving insight into their absorption, metabolism and bioavailability. Moreover, the isolation of the metabolites is often tedious, and analytical standards are necessary as reference compounds for quantification of metabolite levels in clinical samples and for further pharmacological evaluation. Different methods for the preparation of these standards conjugated to polyphenol residues have been developed [1,5,6]. The study of drug metabolites can contribute to toxicity research and safety assessment, taking into account that the biological activity of GlcA conjugates is often similar to, or even higher than, that of the aglycone (drug) [7,8].
A number of GlcA donors have been used for the synthesis of glucuronides. The first report of the stereoselective synthesis and characterization of morphine 6-α-D-glucuronide (M6αG), useful as a reference marker for testing the purity and stability of the pharmaceutically important morphine 6-β-D-glucuronide (M6G), was described by Rukhman et al. [9,10]. Several groups have reported methodologies for the synthesis of M6G, but the preparation of the α-isomer had not been reported previously, and therefore the chemical and biological properties of morphine 6-α-D-glucuronide remained unknown. The synthesis of the α-anomer is based on the glycosylation of 3-O-acetylated morphine 2 with methyl 2,3,4-tri-O-acetyl-D-glucopyranosyluronate bromide 1 as glycosyl donor and zinc bromide as catalyst (Scheme 1).
The selectivity of this reaction is controlled by the amount of catalyst: the use of 1.8 equivalents of ZnBr2 afforded M6αG in an 8:1 α/β ratio. After hydrolysis of the acetyl groups, crystallization gave the α-anomer in reasonable yield (63%). Other examples of Koenigs-Knorr procedures for synthesizing metabolites are the preparation of the doxorubicin, daunomycin, clenbuterol and edaravone glucuronates. Daunomycinone-7-D-glucuronide (DM7G, 5) and doxorubicinone-7-D-glucuronide (DX7G, 6) were conveniently prepared through glycosylation at the 7-hydroxyl group of daunomycinone (3) or 14-acetoxydoxorubicinone (4) with α-glucosyluronate bromide 1 by a Koenigs-Knorr procedure catalyzed by HgBr2 (Scheme 2), followed by alkaline deacetylation using aqueous LiOH solution and Amberlite cation-exchange material [11]. The desired compounds 5-6 were obtained as a 3:7 α/β mixture, and the anomers could be separated by flash column chromatography. During the Koenigs-Knorr reaction, orthoesters are frequently produced as by-products. The formation of orthoester derivatives can be explained by the competitive nucleophilic attack of the oxygen atom of the alcohol on the two possible electrophilic sites of the intermediate II, which is obtained from the oxocarbenium ion I (Scheme 4). Edaravone, a neuroprotective agent, is metabolized to the glucuronate metabolite in humans. Edaravone glucuronate and edaravone sulfate, two metabolites of edaravone, were synthesized in high yields [13]. The edaravone glucuronate 10 was synthesized from glucosyluronate bromide 1 by conjugation with edaravone (9) using silver trifluoromethanesulfonate as promoter (Scheme 5). The investigation of drug metabolism requires substantial amounts of metabolites. As isolation from urine is long and tedious, material obtained by synthesis is often preferred. In the case of phenolic compounds, the synthesis of glucuronides has been studied case by case by Arewang et al. [14].
A number of GlcA donors, such as benzoylated or acetylated glucosyluronate bromides, were treated with acceptor 12 under silver triflate promotion. Coupling of the acetylated donor 1 with 12 at 0 °C gave an α/β mixture (1:3) in 40% yield, whereas the benzoylated donor 11 produced a mixture of β-glycoside 13 and the corresponding orthoester in similar yield at the same temperature (Scheme 6). Exclusive formation of β-glycoside 13, albeit still only in moderate yield (40%), was obtained when the coupling between donor 11 and acceptor 12 was carried out at ambient temperature. The use of the other donors did not improve the yield.
For the aglycone 14, the acetylated donor 1 gave the best result. On the other hand, in the case of compound 17 the use of the 1-O-acetyl derivative 16 as donor in the presence of BF3 etherate afforded the glucuronide 18 in 67% yield (Scheme 6). These synthetic pathways used easily available glycosyl donors and allowed the preparation of substantial amounts of the target glucuronides. Bromide glucosyluronate donors gave moderate yields and have been compared to imidate glucuronyl donors, which are more efficient in most cases. To study the resveratrol metabolites and their effects on cell viability and on the inhibition of HIV infection, a one-pot synthetic approach using a random glycosylation procedure between resveratrol (19) and methyl acetobromoglucuronate 1 was used, leading to resveratrol 3-O-glucuronide 20 and resveratrol 4'-O-glucuronide 21 [15] (Scheme 7). The 3-O- and 4'-O-glycoside derivatives were formed in one pot. After deprotection, the mixture was purified by HPLC and the desired resveratrol 3-O- and 4'-O-glucuronides 20 and 21 were isolated in 13 and 18% yields, respectively, based on the resveratrol used.
In order to improve the yields of these two conjugates, the glycosylation between the trichloroacetimidate donor 22 and the silylated resveratrol acceptors 23 and 24 was performed in the presence of trifluoromethanesulfonate. Resveratrol 3-O- and 4'-O-glucuronides 20 and 21 were obtained in 94 and 89% yields, respectively (Scheme 7). Another example is the chemical synthesis of quercetin 3'-glucuronide 29. As previously described by Wagner et al. [16], glucuronidation of 25 with glucosyluronate bromide (1) in the presence of silver oxide (Ag2O) gave 26 in only 40% yield from 25, accompanied by the formation of the glycal 27 (Scheme 8). The use of imidate donor 28 led exclusively to the quercetin 3'-glucuronide 29 in 11% overall yield. The methyl ester trichloroacetimidate 28 was prepared by a known procedure starting from D-(+)-glucurono-3,6-lactone in 60% yield over four steps [17]. A comparison of the reactivity of bromide and imidate donors was made for the glucuronidation of ABT-751 (32), which was evaluated as a treatment for pediatric neuroblastoma [18]. Compound 32 is metabolized in humans to glucuronide 33, and therefore the synthesis of both compounds was required to support Phase II clinical studies. The initial synthesis of glucuronide 33 used p-nitrophenol 30 and the glucosyluronate bromide (1) to form the corresponding glycoside, followed by five further steps to reach the target molecule 33 (Scheme 9). The Schmidt trichloroacetimidate methodology, promoted by BF3·Et2O, gave, after deprotection, the glucuronide 33 directly in 60% yield, allowing a faster synthesis of the multigram quantities required for clinical use (Scheme 9).
While methods to synthesize simple glucuronides are relatively well developed, the synthesis of structurally complex glucuronides is not straightforward. The efficiency and scalability of such syntheses is often limited by low yields or unselective glycosidic couplings, complex protecting-group strategies, tedious isolations, or enzymatic reactions. In these examples, glucuronate imidate donors were shown to be more reactive, leading to better results.
In general, the use of GlcA-derived glycosyl donors is often inefficient due to the destabilizing effect of the C-5 electron-withdrawing group on the glycosidic bond-forming event. A gram-scale synthesis of the glucuronide metabolite of ABT-724, a potent selective D4 dopamine receptor agonist, could be achieved from imidate donor 28 and compound 34 in the presence of BF3·Et2O in 75% yield [19]. Compound 35 gave, after 6 steps, the metabolite of ABT-724 36 in 33% overall yield from 28 (Scheme 10).
Attempts to synthesize 36 directly from the ABT-724 phenolic compound failed, whatever the conditions (donors, promoters) used. For other phenolic compounds, the glucuronidation needs to be compatible with the stability of the aglycone ring, as in flavones or isoflavones. For example, the first efficient synthesis of flavanone glucuronides as potential human metabolites was optimized for 7,4'-di-O-methyleriodyctiol (persicogenin, 38), because it did not involve a complex protection/deprotection strategy for the aglycone moiety [20]. Thus, the 2,3,4-triacetyl-D-methyl-glucuronate (N-phenyl)-2,2,2-trifluoroacetimidate donor 37 was treated with 38 in the presence of BF3 etherate to give the acetylated glucuronide in 41% yield, which after deacetylation and deprotection of the methyl ester by pig liver esterase gave the final compound 39 in 73% yield (Scheme 11). A high-yielding synthesis of isoflavone 7-glucuronides was accomplished by the reaction between the 7-OH of the isoflavone esters and a novel O-acetyl glucuronyl (N-p-methoxyphenyl)trifluoroacetimidate donor 40 [21]. Treatment of 4-O-hexanoyl-daidzein (41a) and glycitein (41b) with O-acetyl glucuronyl trifluoroacetimidate 40 in CH2Cl2 under the promotion of BF3·Et2O (0.2 equiv) at room temperature led to the desired coupling products 42a and 42b in 81% and 78% yields, respectively, as the β-anomers only (Scheme 12). To improve the yields of some glucuronidations of phenolic compounds, different protecting groups on the glucuronate donor were studied. A simple and direct glucuronidation strategy for urolithin-B 44, the silylated resveratrol 48 and the corresponding hydroxytyrosol derivatives 51 was described (Scheme 13). The critical glycosylation step was optimized using a structurally simple phenol, urolithin-B, by modification of several reaction parameters (solvent, promoter and glucuronide donor).
Glycosylation of urolithin-B acceptor 44 with glucuronosyl donor 43 was first performed using TMSOTf as the promoter, in moderate yield but with very good stereoselectivity: only the β-anomer was obtained. To improve the yield, BF3·OEt2, the most common promoter used in aromatic glycosylation [23], was used for the reaction of 44 with the glucuronosyl donor 43, producing compound 47 in much higher yield (78%). When the glucuronosyl donors 22 and 28 were reacted with urolithin-B 44, products 45 and 46 were obtained in very high yields (95% and 83%, respectively) with no sign of orthoester formation.
The glucuronidation of silylated resveratrol 48 was performed in 71% yield with the acetylated imidate donor 28. Benzoylated imidate 22 treated with 51 gave a higher yield (84%) than the two other donors 28 and 43 (18 and 53% yield, respectively). These results show the importance of optimizing the reaction conditions, as well as the promoter, for each phenolic acceptor/protected donor pair.
Another protected imidate, the isobutyryl imidate 55, has been successfully used in this type of reaction. For example, morphine-3,6-di-β-D-glucuronide was efficiently synthesized from this imidate donor. Previous attempts to couple methyl 2,3,4-tri-O-acetyl-1-O-trichloroacetimidoyl-α-D-glucopyranuronate (28) to 3-acetylmorphine under Lewis acid catalysis afforded mostly 3,6-diacetylmorphine and a small amount of the desired 6-glucuronate. Similarly poor results were obtained with morphine 56 [24]. The methyl groups of the sugar acetates were replaced by larger groups in order to increase steric hindrance. The rate of nucleophilic attack at the carbonyl, and hence transacylation, was thereby reduced, whereas the rate of glycosylation was relatively unaffected. The isobutyrate group was found to be the best compromise, combining minimal transacylation with ease of hydrolysis. Subsequent use of the tri-isobutyrate 55 led to effective preparations of M3,6-diG 57 and related derivatives, with essentially complete stereoselection for the β-anomers due to participation of the neighbouring C-2 acyl group.
By reaction of imidate 55 with dry morphine 56 in dichloromethane in the presence of BF3·Et2O catalyst, the morphine 3,6-β-D-glucuronide derivative 57 was obtained in crystalline form, with exclusive β-stereochemistry at C-1 of both glucuronates, in 60% yield (Scheme 14).
Glucuronates are well known to be poor glycosyl donors, and the reactivity of alcohols 58-61 is rather low. To reduce transacylation, a known side-reaction, the reaction of tri-isobutyryl imidate 55 was studied under both 'normal' conditions (Method A; adding the Lewis acid catalyst to the mixture of alcohol and 55) and 'inverse' conditions (Method B; adding the imidate to a mixture of alcohol and catalyst). The tri-isobutyryl imidate 55 gave satisfactory results in inverse mode with androsterone 58 and epiandrosterone 59, as compounds 63 and 64 were obtained in 41 and 54% yields, respectively, whereas in normal mode the yields were 16 and 34%, respectively. The imidates 55 and 43 and iodide 62 also proved to be efficient donors for the β-glucuronidation of a range of steroidal secondary alcohols. When acetyl protecting groups were used, orthoester side-products were obtained, whereas with the isobutyryl imidate in 'inverse mode' the yield increased [26]. For example, the glucuronidation of estradiol derivative 65 with the imidate 55 gave the glucuronide 66 in 77% yield (Scheme 16). Studies on the rapid metabolism of the trioxane derivative artemisinin required the conjugation of dihydroartemisinin (DHA, 70) to GlcA [27]. Glucuronidation of acceptor 67 may work well if the donor can generate a highly stabilized carbonium ion, although at low temperature the 12α-1'β-glucuronide may predominate. When 67 and 68 were treated with TMS triflate-AgClO4 at −10 °C, the 12α-isomer was obtained in 40% yield as the only product, while the use of BF3·Et2O at 20 °C gave mainly an anhydro-DHA. However, ZnCl2 proved to be an effective catalyst: reaction of 67 with 68 in the presence of ZnCl2 afforded crystalline 69 in satisfactory yield after chromatography (31%) (Scheme 17). The tri-O-isobutyryl imidate 55 showed improved stability and reduced transacylation compared to its acetyl-protected analog.
Thus, reaction of 55 and 70 with BF3·Et2O gave complete reaction of 70 with noticeably smaller amounts of the DHA degradation products. By chromatography, the 12α,1'β-glucuronide ester 71 was isolated in excellent purity in 32% yield, together with 15% of the 12β-isomer 69.
Both new esters 69 and 71 gave microanalytically pure material on recrystallization (Scheme 17). A successful synthesis of the glucuronide metabolite 73 was performed using an N-acetylated Soraprazan 72 and the tri-isobutyrate trichloroacetimidate donor 55 [28] (Scheme 18), avoiding the formation of the orthoesters observed when using the analogous acetyl-protected trichloroacetimidate donor [26]. Activation of the anomeric position of the glucuronate donor can also be achieved with a sulfonyl group. N-glucuronide 78, a major metabolite of the 4-(imidazol-1-yl)butanamide derivative KRP-197/ONO-8025, known for its antimuscarinic activity, was synthesized via glucuronidation of compound 76 using methyl 2,3,4-tri-O-benzoyl-1-methanesulfonyl-α-D-glucopyranuronate (75) [29] (Scheme 19). The latter showed β-selectivity, and the glucuronide 77 was obtained in moderate yield (41%). Although this work involved the synthesis of N-glycosides, the strategy, with formation of an anomeric mesylate, could have potential application in the preparation of O-glycosides from nucleophilic hydroxyl-containing acceptors.
Prodrug Therapy
Since most glucuronides exhibit a weaker biological activity than their corresponding aglycones, glucuronidation is generally considered an important detoxification metabolic process in mammals. However, even if the glucuronide has no activity itself, it can undergo enzymatic hydrolysis catalyzed by β-D-glucuronidase, releasing the corresponding biologically active aglycone. In some cases, glucuronidation can maintain or even increase the therapeutic effect of the drug [30], probably because the active compound is gradually liberated. The synthesis of prodrugs via a glucuronidation reaction has been studied for the development of more selective drugs, especially for the selective delivery of systemically administered chemotherapeutic drugs for solid cancers.
Indeed, glucuronides can be selectively activated at the tumoural site, since the enzyme β-D-glucuronidase is found at highly elevated concentrations in necrotic tumour tissue [31,32]. The design of a suitable glucuronide prodrug must be based upon four criteria: enhanced water solubility, stability in blood, decreased cytotoxicity, and drug release after enzymatic cleavage. Several glucuronide prodrugs have already been synthesized and proved to be selectively activated by β-glucuronidase, either present in high concentration in necrotic tumour areas (PMT) [31] or previously targeted to the tumour sites (ADEPT [33], GDEPT [34]), and consequently demonstrated superior efficacy in vivo compared to standard chemotherapy [35]. For example, two glucuronide prodrugs of the histone deacetylase inhibitor CI-994 (81) were synthesized [36]. The β-O-glucuronyl carbamate 80 was synthesized by coupling the methyl glucuronate 67 with commercially available 2-nitrophenyl isocyanate with very high β-diastereoselectivity in 86% yield (d.e. 97%), using the method developed by Leenders et al. A series of anthracycline prodrugs containing an immolative spacer was synthesized for application in selective chemotherapy. The key step in the synthesis of all the prodrugs is the highly β-diastereoselective addition of the anomeric hydroxyl of glycosyl donor 67 to a spacer isocyanate, giving the respective β-glycosyl carbamate pro-moieties [38] (Scheme 21).
The synthesis and biological evaluation of novel prodrugs based on the cytotoxic antibiotic duocarmycin was realized from imidate donors 28 and 86. The resulting glucuronide compounds were not isolated but directly coupled with the indole carboxylic acid 88 to afford the corresponding β-glucuronides 89 and 90 in 59% and 43% yields, respectively.
Antibacterials Inhibitors
In the development of aryl glucuronides as potential probes for heparanase, the acid-catalysed glycosidation between the trichloroacetimidate-activated GlcA 28 and a variety of phenols was investigated. In preliminary studies, the BF3·Et2O-catalysed coupling of imidate 28 with phenols provided the desired aryl glucuronides 91-94 in high yields (61-81%) (Scheme 23). The attempted BF3·Et2O-catalysed glycosidation of 28 with 4-hydroxycinnamic acid 95 did not give the desired glycoside, but instead gave a complex mixture of products 96-99 [40]. CRM646-A and -B, two fungal glucuronides with a dimeric 2,4-dihydroxy-6-alkylbenzoic acid (orcinol p-depside) aglycone showing significant heparinase and telomerase inhibition activities, were synthesized for the first time [41]. The successful approach involved construction of the phenol glucuronidic linkage, via coupling of the orsellinate derivative 102 with glucosyluronate bromide 1, before assembly of the phenolic ester onto the depside aglycone (Scheme 25). Attempts to perform direct glycosylation of the depside aglycone derivatives were not successful.
Methodologies and Synthesis of Oligosaccharides
To develop the synthesis of biologically active lactone O-glucuronides, two strategies were attempted: glycosylation from 2-O-acyl glucuronate donors, and glycosylation with more reactive benzyl-protected glucuronates. The anchimeric assistance of an acyl substituent at O-2 leads preferentially to the β-configuration, whereas stereochemical control of the glycosylation using ether-protected intermediates remains complicated. The anchimeric participation of the acetyl group in 28 guarantees the formation of the β-linkage, whereas the tribenzylated donor 107 gave the glucuronide 110 as a 2:8 anomeric mixture [43] (Scheme 27). An indirect strategy for the synthesis of GlcA glycosides involved the use of ulosyl bromide 113 (easily obtained in four steps from glucuronolactone). Glycosylation in the presence of an insoluble silver catalyst led to the β-glycoside 114, which could be converted stereoselectively to the gluco isomer 115, whereas selectride reduction afforded the mannuronide derivative [44]. Glucuronyl iodide 62 has been studied as a 'disarmed' glycosyl donor with primary or secondary alcohols as acceptors; promotion with NIS/I2 followed by TMSOTf gave the corresponding β-glucuronides in good yields [45] (Scheme 29). For example, 2-phenylethanol was glucuronylated in 88% yield when CuCl was used. This methodology was applied to the synthesis of disaccharides with the same β-stereoselectivity. Glycosylation of conformationally inverted donors derived from GlcA was studied with several silyl ethers [46]. The 6,1-lactone derivative 117 has been used in the synthesis of 1,2-cis-glycoside 118; the SnCl4-catalyzed coupling of silyl ethers with 117 provides α-O-glucuronides in significantly improved yields without loss of stereoselectivity (Scheme 30). This methodology was extended to 2-deoxylactones, which gave α- or β-glycosides depending on the structure of the donor.
The stereoselectivity observed for both 117 and 119 contrasted with that of the methyl ester 16, which gave only β-glucuronides 120. Similarly, the 2-deoxy-2-iodo donor 121 gave the β-glycosides 122, which is explained by the participation of the iodine, more effective than that of the 2-O-acetyl group (Scheme 30).
Cyclic imidates can be used as glycosyl donors, and it was observed that the 1,2-cis glycosides obtained from the reactions of glycosyl acetates or cyclic imidates resulted from the anomerisation of initially formed 1,2-trans glycosides [47].
For example, reaction of imidate 123 with phenol in the presence of TMSOTf-SnCl4 (2.5:0.5) gave after 1 h a mixture of compounds 124, 125 and 126 in a 1:1:2.2 ratio, and after 24 h only the α-anomer in 63% yield (Scheme 31). The rate of the anomerisation reaction was shown to depend on the structure of the aglycone [48], and for glucopyranuronic acid the anomerisation is faster than for glucopyranuronate esters.
Other heterogeneous systems were used: sulfuric acid loaded on porous silica (H2SO4/SiG 60) and a silica-supported Keggin-type heteropolyacid. The reaction of GlcA with different alcohols in the presence of these catalysts gave glucofuranosidurono-6,3-lactone glycosides 129 in 62-98% yields (Scheme 33). The supported sulfuric acid catalyst was stable under microwave conditions and could be recovered and reused [51]. The formation of alkyl glucofuranosidurono-6,3-lactones from unprotected GlcA in heterogeneous media promoted by Lewis acids has already been described [52].
The synthesis of the selectively protected disaccharide glycosides 132 and 133, which are required for further conversion into glycosyl donors for the block synthesis of more extended oligosaccharides, was studied.
A key point in the strategy for the synthesis of the target disaccharides [53] was the selection of the glucosyluronic donor. Deactivated donors such as GlcA phosphate 141 were found to be highly efficient in reactions with primary or secondary alcohols. Combined with their straightforward synthesis from readily accessible GlcA glycal precursors, the use of 141 as a glycosylating agent provided a direct entry to complex glycan structures [56]. The promoters used were TMS or TBS triflates, and the reactions were performed at low temperatures (−50 °C to −20 °C), affording 143 and 145 in 72% and 84% yield, respectively (Scheme 37). Among others, Rele et al. showed that N-acetylglucosamine derivatives are unreactive acceptors [58]: glycosylation of the n-pentenyl-terminated N-acetyl-D-glucosamine acceptor 151 with either glycosyl donor 1 or 28 was unsuccessful (Scheme 40). The thiophenyl glucuronate disaccharide donor 158 was used by Dinkelaar et al. [59] in an iterative strategy for assembling hyaluronic acid (HA) oligosaccharides (Scheme 42). First, the reducing-end glucosamine 159 was condensed with dimer 158 using the Ph2SO/Tf2O activating system. Although preactivation of the thiodisaccharide proceeded smoothly, the ensuing reaction with acceptor 159 did not go to completion, and trisaccharide 160 was isolated in 46% yield. Changing from Ph2SO/Tf2O to the related BSP/Tf2O reagent system significantly improved the outcome of the glycosylation, allowing the protected hyaluronic acid trisaccharide 160 to be obtained in 75% yield. N-iodosuccinimide (NIS)/TfOH as the activator system was also examined, giving trisaccharide 160 in 75% yield. Interestingly, the NMR spectrum of 160 revealed a rather small homonuclear coupling constant (J H1′-H2′ = 4.4 Hz) for the anomeric proton of the glucuronate moiety (H-1′). Upon deprotection of the oligosaccharides the coupling constant changed to 8.4 Hz, indicative of the β-glucuronic acid linkage formed.
The small coupling constant for the glucuronate anomeric proton suggested that the glucuronate ester adopts a flattened 4C1 chair conformation when positioned between two 4,6-O-di-tert-butylsilylidene glucosamine residues. To elongate trisaccharide 160, the C3′′-O-Lev group was removed, and the resulting alcohol 161 was condensed with dimer 158 (NIS/TfOH activation) to give pentamer 162 in 98% yield. Ensuing delevulinoylation of 162 gave alcohol 163, which was elongated in a subsequent NIS/TfOH-mediated glycosylation with building block 158. Heptamer 165 was easily separated from the smaller products in the reaction mixture by size-exclusion chromatography on Sephadex LH-20, and isolated in 61% yield. In an investigation of the use of a safety-catch linker for the supported synthesis of HA oligosaccharides, de Paz et al. [60] performed the glycosylation of acceptor 166 with donor 165, affording the polymer-bound GlcA derivative 167 (Scheme 43). The reaction was repeated to drive it to completion, as the first cycle resulted in only partial glycosylation. Unfortunately, after delevulinoylation, the corresponding alcohol failed as an acceptor with a glucosamine donor for disaccharide synthesis. The same drawback was encountered when a glucosamine acceptor was fixed on the resin and glycosylation with 165 was attempted. This problem was attributed to the acylsulfonamide linker, and model glycosylations were carried out in solution and on PEG support without the N-acylsulfonamide linker to test this hypothesis. Thus, polymer acceptor 168 was efficiently glycosylated with trichloroacetimidate 165 to afford the bound disaccharide 169. Therefore, the safety-catch linker approach was not suitable for oligosaccharide assembly involving glycosylation of weakly nucleophilic acceptors with electron-poor donors.
It is reasonable to suppose that the chemical nature of the linker, in particular the high acidity of the NH proton, can explain the presence of charged species that hinder coupling reactions mediated by oxocarbenium ions.
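The coupling-constant argument above (J ≈ 4.4 Hz in the distorted protected trisaccharide versus 8.4 Hz after deprotection) follows the usual Karplus-type dependence of 3J(H1,H2) on the H1-C1-C2-H2 dihedral angle. A minimal numeric sketch, using generic textbook Karplus coefficients (A = 7.76, B = −1.10, C = 1.40; these coefficient values are an illustrative assumption, not taken from the cited work):

```python
import math

def karplus_3j(theta_deg, A=7.76, B=-1.10, C=1.40):
    """Generic Karplus relation 3J = A*cos^2(t) + B*cos(t) + C.
    Coefficients are textbook ethane-derived values, used here for
    illustration only."""
    t = math.radians(theta_deg)
    return A * math.cos(t) ** 2 + B * math.cos(t) + C

# Gauche H1/H2 (~60 deg): small J, as seen for a flattened/distorted ring.
j_gauche = karplus_3j(60)    # ~2.8 Hz
# Trans-diaxial H1/H2 (~180 deg): large J, as for a beta-glucuronide in 4C1.
j_anti = karplus_3j(180)     # ~10.3 Hz
```

A gauche arrangement (~60°) gives a small J of roughly 2.8 Hz, whereas the trans-diaxial H1/H2 arrangement of a β-glucuronide in the 4C1 chair (~180°) gives roughly 10 Hz, consistent with the ~8 Hz observed after deprotection.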
Scheme 37. Glucuronidation of alcohols with
The synthesis of glycosaminoglycan oligosaccharides has been the main interest of several research groups. Concerning heparan sulfate oligosaccharides, the synthesis of the key disaccharide building block 172 was smoothly accomplished by TMSOTf-catalyzed reaction (−30 °C) of the 2-O-benzoyl-protected GlcA imidate 170 with the azido acceptor 171, providing the desired disaccharide donor 172 in 89% yield [61]. This block was used for the synthesis of a tetrasaccharide involved in prion diseases. The enzymatic synthesis of GlcA glycosides by the use of snail (Helix pomatia and Helix aspersa), limpet (Patella vulgata), and bovine glucuronidases was investigated by Nagatsuka et al. [62].
Mannuronidation
Stereocontrolled synthesis of homooligomers of mannuronic acid was performed from thiomannuronic derivatives. In this alginate oligomer, the uronic acid monomers are interconnected through 1,4-interglycosidic linkages that have a 1,2-cis configuration. Preactivation of thiomannuronic donor 186 with NIS/TMSOTf followed by addition of the mannuronic acceptor gave the β-disaccharide 188 in 78% yield, which was subjected to the same coupling reaction with donor 186 [63]. The trisaccharide 190 was obtained in 50% yield (Scheme 46). Further studies on conformationally restricted mannuronates 196 and 198 were undertaken to explore the stereoselectivity of 1,2-cis-glycosylation [67,68]. The stereoselectivity is dependent on the nature of the protecting groups on the mannose core.
Homogalacturonans are interesting targets as they are the main components of plant pectin. Thioglycoside galacturonate donor 202 (Scheme 52) was tested with acceptor 217 using N-iodosuccinimide/silver triflate promotion at −20 °C with rigorous exclusion of moisture. The corresponding α-disaccharide 218 was obtained in 42% yield based on 217, showing that the tert-butyl ester was not stable under the glycosylation reaction conditions. Under the glycosylation conditions described above, using p-methoxybenzyl derivative 219 as glycosyl donor and compound 220 as acceptor, the p-methoxybenzyl protective group also proved not to be stable, and the corresponding disaccharide with a free hydroxyl group in the C-4′ position was isolated in only 25% yield. In order to manage the lability of the p-methoxybenzyl function, different promoters were tested. The best result was obtained with freshly prepared iodonium di-sym-collidine perchlorate (64% isolated yield based on 220). Silver triflate/silver carbonate-promoted glycosylation of glycosyl acceptor 220 with methyl (galactopyranosyluronate) bromide 222 provided the disaccharide 223 in 35% yield based on 220 [71]. This work by Vogel et al. allowed the preparation of di- and tri-galacturonan fragments.
On the other hand, the glycosylation of the galacturonate acceptor 220 with trichloroacetimidate donors in a ratio of 1:1, promoted by trimethylsilyl trifluoromethanesulfonate, revealed a peculiar effect of the chosen substitution pattern (Scheme 53) [72]. Thus, the 3,4-di-O-acetyl-α-D-galacturonate trichloroacetimidate 224 provided 67% yield of the α-(1→4)-coupled disaccharide 225, and only 5% of the β-coupled disaccharide was detected. By way of contrast, the more active 2,3-di-O-benzyl glycosyl donor 226 coupled with 220 furnished the corresponding disaccharides 227 in a yield of 59%, but no β-coupled disaccharide was observed. In earlier experiments, the coupling of 220 with 226 in the presence of boron trifluoride diethyl etherate furnished the corresponding disaccharides 227 in a total yield of 53% and a disappointing α/β ratio of 1:1. Finally, the methyl group in the O-4 position of the glycosyl donor 228 gave rise to the lowest stereoselectivity (nearly 1:1 for disaccharide 229) with a total yield of 52%. Subsequent experiments showed that the α- or β-configuration of the trichloroacetimidate group at the anomeric center of the donors exerted no influence on the stereoselectivity outcome of the glycosylations investigated. This approach, using galacturonates as donors in α-glycosylation reactions, can be carried out directly from commercially available D-galacturonic acid, avoiding the crucial oxidation step required in approaches involving D-galactose-derived intermediates.
The synthesis of glycosphingolipids is an important challenge. The Seeberger group studied the glycosylation of galacturonic acids 230 and 231 with ceramide A, to yield conjugates 234-236 (Table 2) [73] (Scheme 54). The problem in these reactions is the relatively poor solubility of ceramide A in many solvent systems at low temperature. During these studies, the well-known benefits of ether and of the remote anchimeric assistance of C4 esters in galacto-configured systems in obtaining good α-selectivities became apparent again, as their omission led to a dramatic increase in β-glycoside formation. A compromise between yield and selectivity was found by employing acetyl-protected thioglycoside 228 and the NIS/TfOH activator system. Thus, the product 234 was isolated in 85% yield and 4.2:1 selectivity with dioxane/toluene (3:1) as solvent. Both anomers were separated by flash column chromatography. Fischer glycosylation of free galacturonic acid was recently studied by Allam et al. Tests were first conducted using sulfuric acid as the catalyst [74]. The best yields (80%) were obtained using 10 equiv of octanol at 80 °C for 48 h with a catalytic amount of sulfuric acid. A lower excess of octanol (5 equiv) clearly decreased the overall yield.
The formation of furanosiduronate compounds has already been reported when galacturonic acid was treated with methanol in the presence of Amberlite IR-120H for 48 h at 35 °C in an orbital shaker affording methyl (methyl D-galactofuranosid)uronate as the only product, in a 2.6:1 β:α ratio [75].
Shorter reaction times also resulted in lower yields. Temperatures higher than 80 °C led to greater formation of degradation products (dark brown solution), while reaction at 50 °C showed a decreased yield. Several strong organic or mineral acid catalysts were used without noticeable changes in the anomer ratio. In each case, the four anomers 237-240 were present whatever the reaction conditions, and the major one was identified as the β-furanose anomer (Scheme 55). The use of p-toluenesulfonic acid (PTSA) as the acid catalyst gave n-octyl (n-octyl D-galactosid)uronates 237-240 in good yield (83%), suggesting that organic sulfonic acids possess the appropriate acidities to promote the ester condensation. Under these experimental conditions, the ratio of the four anomers was approximately 50% of β-furanose 237, 15% of α-furanose 238, 5% of β-pyranose 239 and 30% of α-pyranose 240, as determined by 1H NMR. A significant acceleration of the reaction was observed using microwave activation, since all reactions were complete within 10 min. With H2SO4 as the catalyst, the best results were obtained with a 10-fold excess of octanol (20 equiv) at 100 °C. A lower temperature (80 °C) was detrimental, since the yield of n-octyl (n-octyl D-galactosid)uronates 237-240 dramatically decreased (25%). On the contrary, at a higher temperature (120 °C), the reaction turned brown with partial degradation of the sugar compounds and a lower yield (52%). When compared to thermal activation, yields and product purity were improved, since the reaction medium was only slightly yellow after the reaction. Unfortunately, the ratio of the four anomers was only slightly modified by microwave activation: β-furanose 237 60%, α-furanose 238 15%, β-pyranose 239 5%, and α-pyranose 240 20%. In the presence of a catalytic amount of PTSA, n-octyl (n-octyl D-galactosid)uronates 237-240 were only obtained as the furanose isomers in 63% yield, the pyranose ones representing about 5% of the mixture. 
Several attempts to improve the reaction yield and to reduce the ratio of the α-furanose anomer were unsuccessful. A rapid esterification of the carboxylic group with PTSA, which proceeds faster than glycosylation, could favor the formation of the furanose ring.
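For a side-by-side view, the anomer distributions quoted above can be collected as structured data; the values are taken directly from the text, with a consistency check that each distribution sums to 100%:

```python
# Anomer distributions (%) for n-octyl (n-octyl D-galactosid)uronates 237-240
# (values quoted in the text; H2SO4 catalysis, thermal vs microwave activation).
thermal = {"beta-furanose 237": 50, "alpha-furanose 238": 15,
           "beta-pyranose 239": 5, "alpha-pyranose 240": 30}
microwave = {"beta-furanose 237": 60, "alpha-furanose 238": 15,
             "beta-pyranose 239": 5, "alpha-pyranose 240": 20}

for dist in (thermal, microwave):
    assert sum(dist.values()) == 100  # sanity check: percentages are complete

# Microwave activation shifts the mixture slightly toward the beta-furanose.
furanose_share_mw = microwave["beta-furanose 237"] + microwave["alpha-furanose 238"]  # 75
```

The comparison makes the modest effect of microwave activation explicit: the β-furanose share rises from 50% to 60% at the expense of the α-pyranose.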
Conclusions
Uronic acid derivatives are poor donors, due to the deactivating effect of the electron-withdrawing carboxylate group. Peracetylated bromides are easy to prepare and have been widely used as donors; however, they are less reactive than imidates and can lead to the formation of by-products such as glycals or orthoesters. In addition, trichloroacetimidates allow glycosylation conditions compatible with a large variety of protecting groups. Thioglycosides, iodides and phosphates have also been employed as donors, but they are less developed. Benzoyl-protected donors prevent orthoester formation, whereas isobutyryl derivatives reduce the risk of transacylation to the acceptor, often observed when using acetyl-protected donors. The use of 6,1-lactones is an alternative approach to GlcA glycosides, either under "classical" conditions or by MW-assisted procedures. Glucuronidation of aminosugars is especially difficult. NHAc-containing derivatives fail in these reactions; the most useful acceptors are azido precursors or NHTCA derivatives. Phenolic hydroxyls required the development of specific methods. As observed in general glycosylation reactions, both reactivity and stereoselectivity are highly dependent on the electronic and steric character of the protecting groups. Finally, mannuronic and galacturonic acid glycosidations are less developed, probably due to their relatively lower abundance in natural products; however, analogous methodologies have been applied, leading to comparable results.
"year": 2011,
"sha1": "1b78da206056e9edaab5a66d870d1b9f05e063b9",
"oa_license": "CCBY",
"oa_url": "http://www.mdpi.com/1420-3049/16/5/3933/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1b78da206056e9edaab5a66d870d1b9f05e063b9",
"s2fieldsofstudy": [
"Chemistry",
"Biology"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
Oral biopsies in a Portuguese population: A 20-year clinicopathological study in a university clinic
Background Performing a biopsy is very important in oral medicine and the anatomopathological examination is fundamental to obtain or to confirm the diagnosis in oral and maxillofacial pathology. The purpose of this study is to analyse the frequency and characteristic patterns of biopsied oromaxillofacial lesions in a Portuguese population. Material and Methods A descriptive statistical analysis of the data from the anatomopathological reports of the biopsies performed between 1999 and 2019 at the university clinic of the Faculty of Dental Medicine of the University of Lisbon was performed, regarding the patient's gender and age, type of biopsy, location of lesions, and clinical and histological diagnosis. Association relationships were studied using the chi-square test and the Kruskal-Wallis test to correlate variables. P<0.05 was considered statistically significant. Results From a total sample of 1448 patients, 826 (57.1%) were female, 610 (42.1%) were male, and 12 (0.8%) had no gender information, with a mean age of 50.14 years (standard deviation ± 17.61). The preferred location was the buccal mucosa, vestibule fundus and alveolar mucosa (20.7%). Benign lesions (BL) were the most common, in 82.8% of the cases, followed by oral potentially malignant disorders (OPMD) in 15.5%, and finally, malignant lesions (ML) in 1.7%. Focal fibrous hyperplasia was the most frequent diagnosis in the total sample (25.6%). In the young group, the most common entity was mucocele (34.0%), with a predominance of the lower lip (32.9%). In OPMD, leukoplakia was the most frequently diagnosed (48.7%). The most common ML was squamous cell carcinoma (92.0%), appearing mainly in the tongue (34.8%). A statistically significant relation between ML and older age was found. Conclusions This study included biopsies analysed over a period of 20 years, with BL being the main pathology affecting the oral cavity. 
Although less frequent, OPMD and ML should not be neglected and must be correctly diagnosed and treated. Key words: Oral biopsies, Oral and maxillofacial pathology, Oral medicine, Clinicopathological analysis, Epidemiological study, University clinic.
Introduction
The dentist is the health professional responsible for the study, prevention, diagnosis and treatment of anomalies and diseases of the teeth, mouth, jaws and adjacent structures, integrating the patient's multidisciplinary approach. The dentist also has an increasingly proactive role in the daily lives of the population, often being the first to detect pathologies, such as oral cancer, that require specialized attention and treatment. Oral medicine is concerned with the diagnosis and medical management of specific diseases of the orofacial tissues, as well as with the treatment of oral manifestations of systemic conditions (1,2). The lesions of the oral cavity cover a wide spectrum regarding their nature and characteristics, which can sometimes make their diagnosis difficult (3). Therefore, performing a biopsy is an important tool for the dentist. It is a surgical procedure that aims to obtain tissue from a living individual for histopathological analysis. The pathological examination helps to define the diagnosis, facilitates the determination of the prognosis of malignant lesions, contributes to the institution of treatment or the evaluation of its effectiveness, and constitutes a document with medico-legal value (4). Epidemiological studies provide an overview of the lesions that are most frequently found in the doctor's office, which is why they are important methods of investigation. This epidemiological study aimed to evaluate the oral pathologies most frequently subjected to biopsy in a Portuguese university clinic, whether benign or malignant. It was also intended to analyse the demographic and clinical characteristics of pathological entities with an indication for histological diagnosis.
Material and Methods
The present study consists of an observational epidemiological study of the last 20 years. Data related to the patient's gender and age, type of biopsy performed, anatomical location of lesions, and clinical and histological diagnosis were collected from the anatomopathological reports of the biopsies performed between 1999 and 2019 at the Faculty of Dental Medicine of the University of Lisbon (FMDUL). The information was then transferred into a Microsoft Excel database. Lesions that were not located in the oromaxillofacial region or that lacked a well-defined histological diagnosis were excluded. The pathologies were classified according to the International Classification of Diseases, Dentistry and Stomatology (ICD-11), and by the fourth edition of the World Health Organization (WHO) Classification of Head and Neck Tumours (5). We divided the lesions into three groups for analysis: benign lesions (BL); oral potentially malignant disorders (OPMD), according to the WHO 2020 classification; and malignant lesions (ML) (6).
Ages were distributed into age groups at 10-year intervals. For study, they were also further divided into three age groups: young group (0-17 years); adult group (18-64 years) and elderly group (≥65 years). The type of biopsy was considered excisional, incisional, and unspecified. The anatomical locations of the lesions were grouped into categories for further analysis. Finally, the correlation between the respective clinical and histological diagnoses was also analysed. Statistical analysis was performed using SPSS 27.0 Data Editor (SPSS Inc., Chicago, USA). The results describe the absolute and relative frequencies, as well as study the possible associations between variables. Association relationships were studied using the chi-square test to correlate qualitative variables and the Kruskal-Wallis test to correlate age and some qualitative variables. P<0.05 was considered statistically significant.
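The association tests described above can be reproduced outside SPSS; below is a minimal sketch with SciPy, using synthetic counts and ages purely for illustration (none of these numbers are study data):

```python
import numpy as np
from scipy.stats import chi2_contingency, kruskal

# Hypothetical gender (rows) x lesion-type (columns) contingency table.
table = np.array([[700, 110, 12],
                  [480, 100, 13]])
chi2, p_chi, dof, expected = chi2_contingency(table)

# Kruskal-Wallis test comparing age distributions across the three
# lesion groups (ages drawn at random for illustration only).
rng = np.random.default_rng(0)
ages_bl = rng.normal(50, 17, size=200)
ages_opmd = rng.normal(55, 15, size=60)
ages_ml = rng.normal(65, 10, size=25)
stat, p_kw = kruskal(ages_bl, ages_opmd, ages_ml)

alpha = 0.05  # the study's significance threshold
```

The chi-square test handles the qualitative-by-qualitative associations (e.g. gender by lesion type), while Kruskal-Wallis compares the non-normally distributed age variable across groups, matching the study's choice of a nonparametric test.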
Results
From an initial total of 1623 biopsies, 175 were excluded for not belonging to the target region or for not having a histological diagnostic record, resulting in a sample of 1448 cases. Of this final sample of 1448 cases, 12 (0.8%) had no gender information, 260 (18%) had no age information, 2 (0.1%) had no record of the location of the biopsied lesion and 214 (18.6%) had no record of provisional clinical diagnosis hypotheses. Of the sample, 57.1% were female patients and 42.1% male patients, corresponding to a total of 826 and 610 biopsies, respectively. Patients' ages ranged between 4 and 91 years, with a mean age ± standard deviation of 50.14 ± 17.61 years. The most common age group was from 50 to 59 years old, representing 23.5%. In terms of valid percentage, 4.5% of the sample belonged to the young group, 72.3% to the adult group and 23.2% to the elderly group. Of the 1448 biopsied lesions, 1199 (82.8%) corresponded to benign lesions, 224 (15.5%) to oral potentially malignant disorders and 25 (1.7%) to malignant lesions. Despite the preference for the female gender in all types of pathologies, there was no statistically significant relationship between the nature of the lesion and gender (P > 0.5). The age in ML was significantly higher than in the other two types of lesions (P<0.001). ML manifested preferentially in the tongue, while the other types of lesions occurred mainly in the buccal mucosa, as seen in Table 1. There were 85 different diagnoses, and the most frequent was focal fibrous hyperplasia, also known as traumatic fibroma (25.62%). This pathology revealed an average age of 52.70 years, occurring mainly in the buccal mucosa (28.6%). The radicular cyst was the most diagnosed hard-tissue lesion, mostly located in the anterior region in both genders, 21.5% in males and 28.7% in females.
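The reported lesion-type proportions follow directly from the raw counts; a quick arithmetic check (counts taken from the text):

```python
# Lesion-type counts reported in the Results.
total = 1448
counts = {"benign (BL)": 1199, "OPMD": 224, "malignant (ML)": 25}
assert sum(counts.values()) == total  # the three groups partition the sample

# Percentages rounded to one decimal, as quoted in the paper.
pct = {name: round(100 * n / total, 1) for name, n in counts.items()}
# pct -> {'benign (BL)': 82.8, 'OPMD': 15.5, 'malignant (ML)': 1.7}
```

The three counts sum exactly to the sample size and reproduce the quoted 82.8% / 15.5% / 1.7% split.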
Although the pathologies diagnosed in each gender were very similar, in females we observed a greater number of OPMD (Fig. 1). When we organized patients into age groups, we found that in the young group 34.0% of the lesions corresponded to mucoceles. Focal fibrous hyperplasia was the most frequent lesion both in the adult group, accounting for 25.6% of the biopsied lesions, and in the elderly group, with a percentage of 26.1% (Fig. 2).
The most frequent location was the buccal mucosa, vestibule fundus, and alveolar mucosa, with 300 associated lesions (20.7%), the tongue being the second most frequent location with 210 lesions (14.5%). More information about the frequency according to anatomical site is presented in Table 3. From a total of 1448 biopsies performed, 68.09% were excisional biopsies, 16.44% incisional biopsies, and 15.47% did not have this information specified.
We also found that when an incisional biopsy was performed, the percentage of oral potentially malignant disorders was higher than the percentage of benign lesions, contrary to what happens in excisional biopsies. It should also be noted that, in malignant lesions, the preferred type of biopsy was incisional, unless the lesion was smaller than 1 cm (Fig. 3).
In most of the cases studied, there was a correlation between the clinical diagnosis and the histological diagnosis, with a total of 902 concordant cases, making up 62.3% of the total study sample. On the other hand, there was no correlation in 37.7% (n=546) of the cases. The group of OPMD revealed a higher percentage of clinical diagnosis confirmation. Regarding benign lesions, in 59.5% of the cases there was confirmation of the clinical diagnosis by histological study (n=714), and in 40.5% of the biopsies (n=485) this agreement was not verified (Fig. 4).
Discussion
Lesions in the oral and maxillofacial region cover a wide range of alterations that may be imperceptible to the patient or have serious implications for their quality of life (7,8). It is important for the dentist to be aware of the most common presentations in daily clinical practice, being able to diagnose simple diseases and conditions as well as to detect more complex situations. Therefore, it is essential to have epidemiological studies that describe both the frequency of oromaxillofacial lesions and their predominant characteristics (3,7). This study involved 1448 cases, describing biopsies from 1999 to 2019 performed at the Faculty of Dental Medicine of the University of Lisbon. Although the sample is smaller than that used by Monteiro et al. (3), where 3212 oral biopsies performed over a 16-year period were analysed at the Oporto Hospitalar Center, the present study is the largest carried out in a university environment to date. The analysis showed a slightly higher frequency of oromaxillofacial lesions in females compared to males, which agrees with other studies (3,9,10). This could be explained on the one hand by the larger female population in Portugal, as demonstrated in the results of the 2011 Census, and on the other by the fact that there is greater care and concern with oral health on the part of women (11). This frequency was found for all types of lesion, in contrast to what is found in other studies regarding oral potentially malignant disorders and malignant lesions, in which the male gender was preponderant (3,12). It should also be noted that, although no significant relationship has been demonstrated between gender and the type of lesion, the total number of lesions in the OPMD and ML groups was quite small, so further studies are needed on this possible association.
In the present study, the age ranged between 4 and 91 years, demonstrating that lesions in the oral cavity can appear at any time in life. However, the data showed that the age group with the highest frequency of pathologies was from 50 to 59 years old. Regarding OPMD and ML, there was a greater preponderance at older ages; in the young group only BL were among the most common lesions, while in the elderly group ML were already part of the most frequent pathologies. These data are consistent with the literature, where advanced age is an important non-modifiable risk factor for oncological diseases (3,13-18).
In the young group, from 0 to 17 years of age, the mucocele stood out, being more common in males, with a preferential location in the lower lip, which is consistent with other data (18,22). This frequency may be explained by the fact that younger ages are more associated with possible trauma, which could be the origin of this entity. Benign lesions were the most common type of lesion in the population studied, and focal fibrous hyperplasia represented 25.6% of the total. This change was more commonly found in female patients in their fifth decade of life. As this is a reactive hyperplasia of fibrous connective tissue, its preferential occurrence in non-keratinized areas, namely the buccal mucosa, is consistent with the exposure of these areas to trauma factors, such as the occlusal plane and prosthesis limits, which agrees with data found in other epidemiological studies (3,17,19,20). Regarding OPMD, leukoplakia was more frequent in male patients aged between 50 and 59 years, manifesting itself with special preference in the tongue and in the buccal mucosa, similarly to what is reported in other studies (3,9,10). On the other hand, oral lichen planus was concentrated in female patients aged between 50 and 59 years, manifesting in more than half of the cases in non-keratinized areas, which is in agreement with the literature (13,19,21). Squamous cell carcinoma was the most frequent type of cancer in its population group, appearing at an average age of 63 years and manifesting itself preferentially in the tongue, similarly to what has been reported in several studies (3,9,10,12,15,16). However, in the current study, with regard to gender, this disease occurred equally in females and males, in contrast to what is reported in the literature (3,9,10,15,16,23,24). This divergence may be due to the fact that the sample of cancer cases is very small, making the observation subject to chance.
Regarding the type of biopsy, the data showed that an excisional biopsy was predominantly performed, which agrees with what was demonstrated by Sixto-Requeijo and Diniz-Freitas (9). Considering that the size and nature of the lesion, among other factors, influence the decision on the type of biopsy to be performed (4), and that benign lesions, which were more often of small dimensions, were the most frequent, this predominance of excisional biopsies is justified. In our study, 11 patients (1.1%) with malignant lesions underwent excisional biopsy because the lesions were small or clinically benign. As for the correlation between clinical diagnosis and histological diagnosis, it was verified in 62.3% of the cases studied, noting that in OPMD there was a higher percentage of confirmation of the diagnosis, contrary to what happens in ML. It should be noted that some of the data were lost and others were incomplete, so in these situations a 'Non-Concordance' was applied, which could bias the results in this regard.
It should be noted that the present study is limited by a relatively small sample and, despite being conducted at a public establishment, it refers only to the population attending the University Clinic of FMDUL, mostly residing in the metropolitan area of Lisbon. Additionally, some records lacked essential information for a correct characterization of diseases of the oral cavity, so the extrapolation of the results must be done within these restrictions. Considering the limitations raised above, there is a need for a study with a broader sample at national level. In this way, it would be possible to characterize the most frequent lesions and their respective patterns in the Portuguese population, to compare with international studies.
Conclusions
This study included a sample of 1448 biopsies over 20 years, through which it was found that benign pathologies are the ones that most affect the oral cavity, with focal fibrous hyperplasia being the most frequent diagnosis in the study population. This pathology predominantly manifests itself in areas exposed to trauma. Leukoplakia was the prominent oral potentially malignant disorder, manifesting itself in the tongue and other non-keratinized areas. Squamous cell carcinoma was the most frequent malignant lesion, with a higher frequency in the tongue. Both OPMD and ML are associated with older ages. It was found that, in most cases, there is a concordance between the clinical and histological diagnosis.
The present study reinforces that oral medicine is, in fact, very vast, and that good clinical practice requires acquiring and deepening knowledge from pre-graduate training onwards. It is thus hoped that this work will promote the search for knowledge in this area and help students and dentists in their daily clinical practice, especially in carrying out differential diagnosis, through the characterization of the most frequent pathologies.
"year": 2022,
"sha1": "bc8f6f0ea6c96fe2fd117b35d15dba1f30dcaf57",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.4317/jced.59688",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ebc775ba7c4769c7dd2588044178ea81fd99d9eb",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
Free-living wrist and hip accelerometry forecast cognitive decline among older adults without dementia over 1- or 5-years in two distinct observational cohorts
The prevalence of major neurocognitive disorders is expected to rise over the next 3 decades as the number of adults ≥65 years old increases. Noninvasive screening capable of flagging individuals most at risk of subsequent cognitive decline could trigger closer monitoring and preventive strategies. In this study, we used free-living accelerometry data to forecast cognitive decline within 1 or 5 years in older adults without dementia using two cohorts. The first cohort, recruited in the south side of Chicago, wore hip accelerometers for 7 continuous days. The second cohort, nationally recruited, wore wrist accelerometers continuously for 72 h. Separate classifier models forecasted 1-year cognitive decline with over 85% accuracy using hip data and forecasted 5-year cognitive decline with nearly 70% accuracy using wrist data, significant improvements compared to demographics and comorbidities alone. The proposed models are readily translatable to clinical practices serving ageing populations.
INTRODUCTION
Alzheimer's disease and related major neurocognitive disorders (ADRD) affect over 50 million people worldwide, with an increase of 10 million new cases per year 1 . The ADRD disease burden is expected to increase as the world population ages 2,3 . ADRD disproportionately affects socioeconomically disadvantaged groups and minorities 4 and is associated with lower quality of life, increased mortality, care dependence, and institutionalization. Preservation of cognitive abilities and a positive mindset may maintain quality of life in later years 5 . Few US Food and Drug Administration (FDA) approved treatment options exist at this time; therefore, the mainstay of current management remains on the prevention side 6 . Cognitive trajectories vary widely among older adults, with recent studies showing that different races experience varying rates of decline 7,8 . Finding sensitive forecasters of early decline could trigger more frequent monitoring and aggressive preventative interventions, advance care planning, and even ADRD research study eligibility 9 .
There is an acute need for easily deployed, noninvasive, clinical tools to identify cognitively intact older adults most at risk of subsequent cognitive decline. Certain clinical and environmental factors including age, gender, education, body mass index, neighborhood socioeconomic status, and history of stroke or diabetes are easy to gather clinically during a visit or even during a telephone screen 10 . In a meta-analysis, structural and functional aspects of one's social environment (including network size, social activity, and loneliness) are also predictive of cognitive decline among older adults 11 . Genetic susceptibilities, such as APOE carrier status, can improve forecast models but are more invasive for patients to collect 12 .
Wearable sensors have been gaining attention for their ability to remotely collect free-living activity and sleep patterns and the association of these patterns with other important age-related conditions: frailty [13][14][15] , disability 16 , social disengagement 17 , and death 18 . The relationship between free-living activity and cognitive performance has been less studied. In cross-section, greater activity volume (highest and middle tertiles of active minutes/day) was associated with better processing speed among cognitively intact adults at risk of mobility disability 19 and steps/day were associated with better executive functioning in healthy older adults 20 . Longitudinally, cognitively intact older adults with a higher percentage of moderate to vigorous physical activity (MVPA) per week had a lower risk of cognitive impairment and better maintenance of executive function and memory over an average of 3 years 21 . However, these findings were not consistent across racial/ethnic groups. A higher percent of MVPA predicted maintenance of only memory and not executive function in African American/Black adults, as compared to White adults 21 .
Few prior studies have leveraged the high-resolution nature of accelerometer data in analyses to maximize unique pattern recognition that may differentiate health risk across individuals, a concept familiar to those studying precision medicine. While accelerometry is not currently used in routine clinical care, it has been increasingly used in major research studies to remotely assess older adult health and poses significant advantages in the era of telehealth [22][23][24][25][26][27][28] . Translation of accelerometry in clinical practice has been challenged by the lack of accelerometry tools with clear clinical applications and the inability to apply research findings across device body locations and manufacturers.
The objective of this study was to significantly advance the prior work on forecasting early cognitive decline among older adults without dementia by discovering prognostic, free-living accelerometry patterns using 24-h data. We considered 98 accelerometry measures, the most comprehensive set of movement-related measures in a study of its kind to date. With a screening clinical application in mind, we chose a simple, binary clinical outcome that is most relevant to triggering clinical or research decision making: any cognitive decline versus stable or improving cognition. We further probed into the generalizability of the developed methodology, by applying it to data from two studies that gathered data from two different accelerometers worn at different body locations and with different wear protocols.
Cohort characteristics
The characteristics of the two study cohorts are shown in Table 1. The hip accelerometry cohort was older (mean age 73.2), had a slightly higher baseline Montreal Cognitive Assessment (MoCA) score (mean 25.4), and included a larger proportion of females (80.9%) and those self-identifying as African American (81.7%) than the wrist accelerometry cohort (mean age 70.0, mean MoCA 23.4, proportion female 59.1%, proportion African American/ Black 11.3%).
Demographic and clinical predictors of cognitive decline
As we observe in Table 2, the clinical characteristics had somewhat limited capability to distinguish between those with stable/improving cognition versus those with declining cognition at 1 and 5 years in the local and national cohorts, respectively. We provided a full dictionary of features in the Supplemental Data.
Combining demographic, clinical, and accelerometry predictors of cognitive decline
To investigate the importance of the accelerometry activity measures and harmonic features, beyond that of the demographic and clinical characteristics, in forecasting cognitive decline, we trained CDPred on three different sets of measures: (1) the CDPred basic model using demographic and clinical characteristics; (2) the CDPred-4 model using demographic and clinical characteristics with C4 and V4; (3) the CDPred-4+ model using demographic and clinical characteristics, C4 and V4, plus the harmonic features derived from accelerometry. The number of features in each model is listed in Table 3. To summarize, we compared three models: CDPred, CDPred-4, and CDPred-4+. CDPred includes the baseline demographic and clinical features. The CDPred-4 model uses the baseline demographic and clinical features and two baseline accelerometry metrics (C4 and V4). The CDPred-4+ model uses the full gamut of information: the baseline demographic and clinical features, the two baseline accelerometry metrics, and all 98 extracted accelerometry harmonic features.
Performance of the models
The model performance metrics on the hold-out samples are shown in Table 4. The CDPred-4+ model including all measures predicted cognitive decline 1 year later with an accuracy of over 85% (hip accelerometry cohort) and predicted cognitive decline 5 years later with nearly 70% accuracy (wrist accelerometry cohort). The hip-worn accelerometry confusion matrix and ROC-AUC for the CDPred-4+ model in the hold-out sample is shown in Fig. 1. Figure 2 shows predictors sorted by relative importance, from the highest to lowest, excluding features with zero importance. Similarly, we show the confusion matrix and ROC-AUC for the CDPred-4+ model in the wrist-worn accelerometer data in Fig. 3, and nonzero predictor relative importance sorted in descending order in Fig. 4.
DISCUSSION
Our model significantly expands work previously published in this space. Casanova et al. (2020) similarly used a Random Forest Classifier to distinguish cognitive trajectories 12 . Three classes, low-, medium-, and high-risk trajectories were created using a combination of baseline and repeated cognitive performance scores. This study found that age, gender, education, BMI, stroke, diabetes, neighborhood socioeconomic status, and APOE carrier status were among the top predictors of cognitive trajectories. They did not include accelerometry assessments. We found that the accelerometry pattern features outperformed many demographic and clinical characteristics in predicting cognitive decline in a community-dwelling cohort, suggesting the potential value of noninvasive and remote accelerometry in augmenting the clinical evaluation.
Our analyses have shown that, compared to simpler, clinical models predicting cognitive decline (e.g., using only demographic and clinical characteristics), our accelerometry-based classifier model performs significantly better. This model uniquely identifies preclinical cognitive decline among older adults without a diagnosis of dementia over short (1-year) and longer-term (5-year) follow-up. The model was robust to varying wear protocols (7 days versus 72 h), device location (hip versus wrist), and device manufacturer. In both models, many accelerometry features were rated more 'important' in distinguishing those who experienced any decline in cognition than many demographic and clinical characteristics including age. We are hopeful that the current level of model performance may be useful to flag older adults most vulnerable to subsequent cognitive decline. We note and emphasize that accelerometry currently has no diagnostic capacity for any clinical diseases; its role in the current study is restricted to an assessment of day-to-day movement (accelerations and decelerations) which seems to reflect some level of health, here cognitive, risk.
Limitations
This study has several limitations worth mentioning. It is not technically possible to guarantee (or test) that there was no overlap between the two cohorts used in this study. The NSHAP dataset did collect zip code information on participants, but the FACE Aging dataset did not collect any address information. Since the FACE Aging dataset is composed of study participants residing in the few neighborhoods surrounding the University of Chicago and NSHAP sampled across the nation using a complex sampling design based on census tracts, if overlap occurred, it would have been a very small number of participants.
Another limitation of the current study is that, despite the importance of understanding ADRD in socio-demographically disadvantaged groups, the datasets for this study were not sufficient in size for understanding the relative predictive power of the models across different sociodemographic groups. We did include effect size measures for different race/ethnicities in Table 2. Our forecast model was only 70-80% accurate, leaving room for improvement. It is likely that our forecast model could be enhanced in future work to reach higher and more consistent accuracy. This can be achieved by including additional metrics derived from accelerometry data, possibly using additional physiologic sensors such as heart rate monitoring to capture richer data, and incorporating additional clinical data, such as blood or genetic markers, family history of dementia, and current medications.
The wrist accelerometry model did not perform as well as the hip accelerometry model. The weaker performance of the wrist accelerometry location might be due to the shorter wear protocols, increased motion "noise" related to the position, and longer follow-up cognitive assessments. Future work comparing more similar wear protocols and devices, even if worn at different body locations, would be of value.
Our experiments show that this predictive model can forecast preclinical cognitive decline using data from dissimilar accelerometry device locations, wear protocols, follow-up times, and unique cohorts. Hip- and wrist-worn accelerometers are subject to unique patterns of movements in space, yet data from both accelerometry devices improved the predictive capacity of the respective models. The somewhat inferior performance of the wrist accelerometry, among other factors, may be related to the shorter wear protocol (72 h versus 7 days), "noisy" data at the wrist, and the longer follow-up (5 years). A major challenge to accelerometry research and clinical translation has been the reliance on a particular device location, protocol duration, and/or proprietary data processing software for generating accelerometry measures 29,30 . These limitations have stimulated movement toward using open-source programs or approaches for generating accelerometry metrics, as we have done in this study, and identifying methodologic approaches applicable across multiple devices and varying wear protocols.
Study populations
To evaluate the robustness of our proposed methodology, we used information about two non-overlapping cohorts of community-dwelling older adults, one cohort equipped with hip-based and another with wrist-based accelerometers.
Hip accelerometry cohort: frailty, aging, body composition and energy expenditure in aging (FACE aging) study. Study participants (n = 151) were recruited from the community around the primary geriatrics practice site for the University of Chicago, located on the south side of Chicago. The sample was limited to community-dwelling (not living in residential care) older adults, 65 or older. Exclusion criteria included hospitalization, surgery, or procedure within 2 months of participating in the study; addition or change in dose of a thyroid (e.g., levothyroxine) or a diuretic (e.g., furosemide, hydrochlorothiazide, or spironolactone) medication within 2 months of participating in the study; use of oral steroids; use of beta-blockers (e.g., metoprolol, atenolol, or carvedilol); persistent hyperglycemia greater than 250; life expectancy less than 1 year; and history of moderate or advanced dementia or a Montreal Cognitive Assessment (MoCA) score less than or equal to 18. The hospitalization, surgery, medication, and hyperglycemia exclusion criteria were required to optimize resting metabolic rate testing at baseline (data not used in this analysis). Data collection occurred over multiple evaluations: (1) a baseline survey and physical exam in the clinic, (2) a 7-day free-living hip accelerometry protocol immediately following the exam, (3) fasting resting metabolic rate measurement with indirect calorimetry and a DEXA scan for body composition within 2 weeks of the baseline assessment, and (4) a 1-year follow-up survey and physical exam in the clinic. We restricted the study sample to participants with complete clinical data and one or more valid (≥10 daytime hours) accelerometer-wear days, which left us with 115 participants eligible for our classifier development.
Hip accelerometer protocol: Hip accelerometry data were collected from all participants at baseline. Following the baseline survey and physical exam, an Actigraph wGT3X+ hip accelerometer was placed over the participant's mid, anterior right hip and secured with an elastic belt. Study participants were asked to keep the device on their hip continuously for 7 full days (including during bathing or showering). The accelerometers recorded data at a frequency of 30 Hz. The subsecond-level data were extracted from the devices using the ActiLife software (version 6.0). The low-frequency extension filter was NOT applied.
Wrist accelerometry cohort: the national social life, health, and aging project. We used wrist accelerometry data generated by the National Social Life, Health, and Aging Project (NSHAP) as the sample. NSHAP is a nationally representative, longitudinal survey study that collects extensive information on physical, mental, cognitive, and social health in United States, community-dwelling older adults 31 . The first wave of NSHAP was in 2005-6 and included a nationally, statistically representative sample of community-dwelling adults born between 1920 and 1947 (aged 57-85), over-sampling African Americans, Hispanics, and males; 3377 respondents participated (weighted response rate = 75.5%). Five years later (2010-11), respondents were re-interviewed, as were their cohabiting spouses or partners, for a total n = 3377. Interviews were conducted in the homes of each respondent by professional interviewers from NORC at the University of Chicago. A random subset of the 2010-11 respondents was invited to participate in a wrist accelerometry protocol, the data used in the current analysis.
Wrist accelerometry sub-study protocol: Wrist accelerometry data were collected from a randomly selected subset of 793 respondents in the 2010-2011 data collection wave. The 2010-2011 accelerometry protocol has been previously described in detail 28 . Briefly, randomly selected respondents in the 2010-2011 data collection wave were asked to wear an ActiWatch Spectrum ® on their non-dominant wrist continuously for 72 consecutive hours (including during bathing or swimming activities) 28 . The accelerometers recorded data at a frequency of 32 Hz. Upon receiving returned devices, data were downloaded from the device and then preprocessed using the Actiware ® software 32 . The maximum absolute value was computed for each second; the sum of these absolute values was then computed for every 15-s epoch. The ActiWatch has a galvanic heat sensor that identifies when a device is on the wrist. All non-wear periods were excluded (only 0.17% of epochs across all wake data were classified as nonwear). Days with at least 10 h of daytime recording were considered "valid"; days with less than 10 h of daytime recording were excluded. The 24-h time interval was used to generate the wrist accelerometry features for this analysis. The study sample was restricted to participants with complete clinical data and ≥1 valid accelerometry wear day which left 584 participants eligible for our classifier development.
Cognitive function
Hip accelerometry cohort: The Montreal Cognitive Assessment (MoCA) was used to determine cognitive function at baseline and 1-year follow-up for the hip accelerometry training sample. The MoCA evaluates seven domains of cognitive function. The scale ranges from 0 to 30, with higher scores indicating better function. Because education was included as a covariate and our primary focus was on change in cognition, we did not add an additional point to the MoCA score for education levels below 12 years, as is clinically done 33 . In both cohorts, we calculated cognitive change as the difference in MoCA scores between the baseline and follow-up assessments (1 year for the hip accelerometry cohort and 5 years for the wrist accelerometry cohort). Patients with deteriorating MoCA scores (Δ < 0) were assigned to the cognitively declined group, denoted as Δ − . The remaining patients were assigned to the group with a lack of cognitive decline, denoted as Δ + . The ratio of Δ + /Δ − was 67/48 in the hip-worn and 279/296 in the wrist-worn accelerometer cohorts. The range of 1-year cognitive change (hip) was −8 to 6. The range of 5-year cognitive change (wrist) was −14.9 to 14.9.
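As a concrete illustration, the Δ+/Δ− labeling rule described above can be sketched in Python (the function and variable names are our own, not from the paper):

```python
import numpy as np

def label_cognitive_change(moca_baseline, moca_followup):
    """Assign each participant to the declined (Δ−) or stable/improved (Δ+)
    group based on the change in MoCA score between baseline and follow-up."""
    delta = np.asarray(moca_followup) - np.asarray(moca_baseline)
    # Δ < 0 → cognitive decline ("minus"); Δ >= 0 → stable or improving ("plus")
    return np.where(delta < 0, "minus", "plus")

# Example: the first participant declined by 1 point, the others did not
labels = label_cognitive_change([25, 28, 22], [24, 28, 25])
```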
Covariates. The covariates were measured similarly across cohorts.
Hip accelerometry cohort: In the hip accelerometry cohort, age, race, gender (female vs. male), education (≥ high school vs. < high school graduate), and monthly income category ($0 to <$2000, $2000-3999, $4000-5999, and $6000+) were recorded through self-reported measures. Options for race included Black or African American and Other (White, American Indian or Alaska Native, Asian Indian, Chinese, Filipino, Japanese, Korean, Vietnamese, Other Asian, Native Hawaiian, Guamanian or Chamorro, Samoan, Other Pacific Islander, or Other). No participants reported Hispanic ethnicity. Information on previously diagnosed comorbidities (self-reported and chart review) was recorded and scored using the Charlson Comorbidity Index and included heart attack, asthma, emphysema, chronic bronchitis, chronic obstructive pulmonary disease, peripheral vascular disease, liver disease, diabetes, and cancer (continuous, range 0-30) 36 .
Wrist accelerometry cohort: In the wrist accelerometry cohort, age (centered, continuous) was calculated using the reported date of birth and interview date. Gender (female versus male), race (White/Caucasian, Black/African American, other), and Hispanic ethnicity were self-reported 37 . A modified Charlson Comorbidity Index (range 0-16, continuous) was constructed using self-reported comorbidity data in the 2010-2011 data collection wave. Respondents were asked whether they had ever been told by a doctor that they had any of the following conditions (number of points given in parentheses): congestive heart failure (1), heart attack (1), coronary procedure (1), stroke (1), diabetes (1), rheumatoid arthritis (1), asthma, emphysema, chronic obstructive pulmonary disease, or chronic bronchitis (1), dementia (1), non-metastatic cancer excluding skin cancer (2), or metastatic cancer excluding skin cancer (6) 38 .
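The modified Charlson scoring just described can be sketched as follows. The dictionary keys are our own labels for the listed conditions; the four pulmonary conditions are grouped into one item, following the text:

```python
# Point values from the modified Charlson index described in the text (range 0-16)
CHARLSON_POINTS = {
    "congestive_heart_failure": 1,
    "heart_attack": 1,
    "coronary_procedure": 1,
    "stroke": 1,
    "diabetes": 1,
    "rheumatoid_arthritis": 1,
    "pulmonary_disease": 1,      # asthma/emphysema/COPD/chronic bronchitis (any)
    "dementia": 1,
    "cancer_nonmetastatic": 2,   # excluding skin cancer
    "cancer_metastatic": 6,      # excluding skin cancer
}

def modified_charlson(conditions):
    """Sum the points for each self-reported condition present."""
    return sum(CHARLSON_POINTS[c] for c in conditions)
```

Note that summing every item gives 16, matching the stated range of the index.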
Accelerometer data preparation
Data were restricted to enrollees with at least one valid day. We calculated the Euclidean norm minus one (ENMO), counts per minute (CPM), and vector magnitude count (VMC) for each participant using the hip and wrist data. To calculate these metrics, the accelerometry data needed to be in the form of the vector magnitude/Euclidean norm. The subsecond-level wrist accelerometry data were already converted to the vector magnitude/ Euclidean norm by the manufacturer's software, 1 data point for every 15-s epoch, where N = 24 h per day × 60 min per hour × 4 samples per minute = 5760 samples per day for wrist-worn accelerometer data.
The hip accelerometer data were in the form (x(t), y(t), z(t)), where x(t), y(t), and z(t) are dimensionless data provided by the accelerometry device, approximately proportional to the x-, y-, and z-axis directional acceleration 39 . Time t is discrete and for each day runs from 1 to N, where N = 24 h per day × 60 min per hour × 60 s per minute × 30 samples per second = 2,592,000 samples per day for hip-worn accelerometer data. The vector magnitude/Euclidean norm r(t) was computed from the hip accelerometry data as

r(t) = √(x(t)² + y(t)² + z(t)²).

To normalize the vector magnitude/Euclidean norm r(t) to a consistent length across both the wrist and hip accelerometry cohorts, r(t) was reshaped to a D × T matrix R = (R_dt), where D represents the total number of wear days and T represents collected samples per day. The average, normalized vector magnitude/Euclidean norm r̄(t) is computed as

r̄(t) = (1/D) Σ_{d=1..D} R_dt.

We then used non-overlapping 1-minute sliding windows to extract the Euclidean norm minus one (ENMO), the counts per minute (CPM), and the vector magnitude count (VMC), formally defined below. The ENMO was used to remove noise and gravitation effects from subminute- and subsecond-level data. Letting H denote the number of time measurements in a one-minute sliding window, ENMO for minute t can be written as

ENMO(t) = (1/H) Σ_{h=1..H} max(r̄((t−1)H + h) − 1, 0).

The feature CPM was further derived as the per-minute sum,

CPM(t) = H × ENMO(t).

Note that H = 60 s per sliding window × 30 samples per second = 1800 samples per sliding window for hip accelerometer data and H = 1 min per sliding window × 4 samples per minute = 4 samples per sliding window for wrist accelerometer data.
The VMC was used to evaluate the mean amplitude deviation within each sliding window of size H, defined as

VMC(t) = (1/H) Σ_{h=1..H} | r̄((t−1)H + h) − m_t |, with m_t = (1/H) Σ_{h=1..H} r̄((t−1)H + h),

where t now varies over the minutes of each day, from 1 to N, with N = 24 h per day × 60 min per hour = 1440 min per day.
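A minimal Python sketch of the windowed metrics described above. We assume ENMO truncates negative values before averaging and that CPM is the per-minute sum (H × ENMO); the paper's exact conventions may differ:

```python
import numpy as np

def vector_magnitude(x, y, z):
    """Euclidean norm of the triaxial signal, r(t) = sqrt(x^2 + y^2 + z^2)."""
    return np.sqrt(x**2 + y**2 + z**2)

def minute_features(r, samples_per_minute):
    """ENMO, CPM, and VMC over non-overlapping 1-minute windows of the norm r(t).
    H = samples_per_minute (1800 for the 30 Hz hip device, 4 for the 15-s wrist epochs)."""
    r = np.asarray(r, dtype=float)
    n_minutes = len(r) // samples_per_minute
    windows = r[: n_minutes * samples_per_minute].reshape(n_minutes, samples_per_minute)
    # ENMO: mean of the norm minus 1 g, truncated at zero to remove gravity
    enmo = np.maximum(windows - 1.0, 0.0).mean(axis=1)
    # CPM: per-minute sum, i.e., H * ENMO (an assumption, see lead-in)
    cpm = samples_per_minute * enmo
    # VMC: mean amplitude deviation within each window
    vmc = np.abs(windows - windows.mean(axis=1, keepdims=True)).mean(axis=1)
    return enmo, cpm, vmc
```

For example, a one-minute wrist window of norms [1.5, 0.5, 1.5, 0.5] yields ENMO 0.25, CPM 1.0, and VMC 0.5.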
Accelerometry activity level measures (C4 and V4). Two categorical activity measures that we call C4 and V4 were computed. After extracting CPM and VMC measures from the accelerometer data, we generated the 75th percentile of the CPM and VMC data points for each participant, denoted as CPM 75 and VMC 75 . The sample-based distributions of the CPM 75 and the VMC 75 were then categorized into four levels at each quartile to create C4 and V4, respectively. Figure 5 shows the cohort-specific quartiles for CPM 75 and VMC 75 (wrist cohort, N = 575). The number of features for the wrist- and hip-worn devices differed because the income and ethnicity/race categories in the two datasets were not identical. The specific features for ENMO(t) and VMC(t) are listed in Table 5.
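The C4/V4 construction — per-participant 75th percentiles, then cohort-level quartile binning — can be sketched as follows (the function name is illustrative):

```python
import numpy as np

def activity_level(per_participant_series):
    """Compute C4 (or V4): take each participant's 75th percentile of CPM (or VMC),
    then bin the cohort distribution of those percentiles into quartile levels 1-4."""
    p75 = np.array([np.percentile(s, 75) for s in per_participant_series])
    q1, q2, q3 = np.percentile(p75, [25, 50, 75])  # cohort-based quartile cut points
    return np.digitize(p75, [q1, q2, q3]) + 1      # levels 1..4
```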
We illustrated the meaning of the individual harmonic features in Fig. 6. While we computed a relatively large number of harmonic features, the features belong to just a few categories: differential entropy (flatness of a distribution), fast Fourier transform (revealing periodicity in activity), and statistics describing shapes of a distribution, such as mean, variance, skewness, and kurtosis.
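Illustrative implementations of these feature families — shape statistics, a histogram-based differential-entropy estimate, and FFT harmonic amplitudes — are sketched below for a minute-level daily activity curve. These are generic versions of the named categories; the exact 98 features used in the paper are not reproduced here:

```python
import numpy as np

def harmonic_features(r_minute):
    """Example features from a day-long, minute-level activity curve (1440 values)."""
    r = np.asarray(r_minute, dtype=float)
    mu, sigma = r.mean(), r.std()
    feats = {
        "mean": mu,
        "variance": r.var(),
        "skewness": ((r - mu) ** 3).mean() / sigma ** 3,
        "kurtosis": ((r - mu) ** 4).mean() / sigma ** 4 - 3.0,  # excess kurtosis
    }
    # differential entropy via a histogram estimate of the activity distribution
    p, edges = np.histogram(r, bins=24, density=True)
    nz = p > 0
    feats["entropy"] = -np.sum(p[nz] * np.log(p[nz]) * np.diff(edges)[nz])
    # amplitudes of the leading circadian harmonics (k cycles per day)
    amps = np.abs(np.fft.rfft(r - mu)) / len(r)
    for k in range(1, 6):
        feats[f"fft_amp_{k}"] = 2.0 * amps[k]
    return feats
```

For a pure once-a-day sinusoid, only the first harmonic has appreciable amplitude, the skewness is zero, and the excess kurtosis is that of an arcsine distribution (−1.5).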
Statistical analysis. First, we computed characteristics of the two cohorts: means (±standard error, SE) for continuous measures and proportions (±SE) for categorical variables. Second, to analyze the statistical significance of the covariates for distinguishing between the two classes (Δ − , Δ + ), we evaluated the predictive importance of each of the demographic, comorbidity, and accelerometry measures in both cohorts. Third, we built a binary classifier called CDPred to distinguish Δ + from Δ − in the two cohorts using XGBoost (Extreme Gradient Boosting). To evaluate the performance of the model in each cohort, we randomly chose 10% of the hip accelerometry cohort and 15% of the wrist accelerometry cohort as a hold-out sample. The CDPred hyperparameters were fine-tuned using 5-fold cross-validation to maximize the area under the curve (AUC) score. We then reported the performance of each model in terms of predicted accuracy and AUC on the hold-out sample. The feature importances for distinguishing Δ + and Δ − were then listed in descending order for the best-performing model in each dataset.

Fig. 6 (caption): Includes animations illustrating the entropy, skewness, harmonics, kurtosis, and amplitude accelerometry features used in our analysis (we did not illustrate more standard statistics, such as mean and variance of measurements). a Differential entropy: Differential entropy is highest for a uniform distribution of activity (for example, when a person stays inactive 24 h a day, so there are no bursts of activity). When a person is more active through the day and inactive at night, the entropy of activity drops, because the daytime activity exceeds the night-time activity average. b Fast Fourier transform (FFT): The fast Fourier transform refers to the number of harmonics that can be used to describe a curve. Any curve can be decomposed into a spectrum of harmonics. In this case, the hypothetical activity curve shown in red is the sum of 3 harmonics with nonzero amplitude: one with four cycles a day, one with a single full cycle a day, and one with a two-day cycle. In real accelerometry data, the number of accelerometry harmonics composing a 24-h circadian pattern is typically over 15 harmonics. c Skewness: a statistic characterizing the asymmetry of the distribution of activity; it can be applied to the entire device wear time or to smaller intervals of accelerometry readings. d Excess kurtosis: a statistic indicating deviation of a distribution from a normal distribution. Kurtosis is zero for a normal distribution, positive for distributions with heavier (than normal) tails, such as the t-distribution, and negative for distributions with lighter tails, such as a Beta distribution with parameters (2,2). e Amplitude: The amplitude of each harmonic in an FFT reflects the distance between minimum and maximum activity values. For non-essential (noise-level) harmonics in the FFT, the amplitude is close to zero.
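The train/hold-out/5-fold cross-validation workflow can be sketched as below. We use scikit-learn's GradientBoostingClassifier as a stand-in for XGBoost, and the hyperparameter grid is purely illustrative:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier  # stand-in for XGBoost
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

def train_cdpred(X, y, holdout_frac=0.10, seed=0):
    """Hold out a fraction of participants, tune hyperparameters with 5-fold CV
    maximizing AUC, and report accuracy and AUC on the hold-out sample."""
    X_tr, X_ho, y_tr, y_ho = train_test_split(
        X, y, test_size=holdout_frac, stratify=y, random_state=seed)
    grid = GridSearchCV(
        GradientBoostingClassifier(random_state=seed),
        param_grid={"max_depth": [2, 3], "n_estimators": [50, 100]},  # illustrative
        scoring="roc_auc", cv=5)
    grid.fit(X_tr, y_tr)
    proba = grid.predict_proba(X_ho)[:, 1]
    return {"accuracy": accuracy_score(y_ho, proba > 0.5),
            "auc": roc_auc_score(y_ho, proba)}
```

On synthetic, easily separable data this workflow recovers a high hold-out AUC, mirroring how the hold-out metrics in Table 4 were produced.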
Ethics. The study was approved by the University of Chicago Institutional Review Board (IRB # 13-0443). Study participants provided written informed consent.
DATA AVAILABILITY
The NSHAP data are publicly available and can be obtained from the National Archive of Computerized Data on Aging (http://www.icpsr.umich.edu/icpsrweb/NACDA/studies/34921) after completing a Data Use Agreement. The FACE Aging study data are available from one of the corresponding authors (M.H.S.) upon reasonable request and after completion of a Data Use Agreement and Institutional Review Board assessment.
Adenomyoepithelioma With Myoepithelial Carcinoma of the Breast With Axillary Lymph Node Metastasis: Two Case Reports and Review of the Literature
Objectives: Adenomyoepithelioma (AME) of the breast exhibits characteristic proliferation of the epithelial and myoepithelial cells. Most AMEs are benign, but the 2 inherent cell types can become malignant. The present study reports 2 cases of AME with myoepithelial carcinoma of the breast, one with axillary lymph node metastasis. Methods: A modified radical mastectomy was performed in a 67-year-old woman, because a sentinel lymph node biopsy revealed one metastatic lymph node composed of a myoepithelial carcinoma component. Despite receiving radiotherapy and chemotherapy, the patient died from lung and brain metastases 21 months later. In the second case, breast-conserving surgery with sentinel lymph node biopsy was performed in a 55-year-old woman. Following additional treatment with radiotherapy and chemotherapy, there were no signs of recurrence or metastasis. Results: The tumors of the 2 patients were diagnosed as malignant, based on their high mitotic rate and severe nuclear atypia. Conclusions: Based on previously reported cases with distant metastases, the prognosis of myoepithelial carcinoma is poor. Myoepithelial carcinoma should be followed up with careful screening and treated aggressively.
Adenomyoepithelioma (AME) of the breast is a rare tumor that is characterized by biphasic proliferation of the epithelial and myoepithelial cells. 1 In rare instances, the epithelial, the myoepithelial, or both components of an AME may become malignant. 1,2 Myoepithelial carcinoma (malignant myoepithelioma) is a malignant lesion composed of spindled myoepithelial cells with infiltrating margins and high mitotic activity. 1 According to the fourth edition of the World Health Organization (WHO) classification, myoepithelial carcinoma is classified under metaplastic carcinoma. 2 It is diagnosed through recognition of the overlapping morphologic and immunophenotypical characteristics. 1 Most AMEs and myoepithelial carcinomas present as a painless, palpable mass. A rapidly growing mass is highly suggestive of myoepithelial carcinoma. Although myoepithelial carcinoma of the breast still does not have an established standard treatment, the basic treatment for the primary tumor is surgery. In previous reports, a few cases were treated with chemotherapy, but the response has not been favorable. 1,3 Certain studies have reported myoepithelial carcinoma with local recurrence and metastasis after the initial surgery. 1,3 Local recurrence may occasionally occur, but distant metastasis is extremely rare and is usually hematogenous. 3 We report herein 2 cases of AME with myoepithelial carcinoma of the breast, of which one had axillary lymph node and distant metastasis.
Case 1
Clinical summary
A 67-year-old woman presented to ****** Hospital (******, ****, *****) with a large palpable mass in the upper midregion of the right breast. Physical examination indicated an ~4 × 3 cm firm, tender mass. Mammography showed a 4-cm irregularly shaped hyperdense mass with indistinct margins in the upper midportion of the right breast. On ultrasound examination, a 3.6-cm inhomogeneous, irregularly marginated, and hypoechoic mass was located at the 12 o'clock position of the right breast (Fig. 1A). The patient underwent an ultrasound-guided core-needle biopsy, and the pathologic diagnosis was of a benign pleomorphic adenoma-like, salivary gland-type neoplasm. Elective surgery was planned, but the patient did not return to the hospital for 6 months due to poor economic status. On returning, the patient underwent a lumpectomy, and the surgical specimen showed a gray-white 5 × 4 cm mass on the cut surface. The breast tumor was histologically diagnosed as myoepithelial carcinoma with an AME component, and the inferior and lateral resection margins of the tumor were positive. The patient then underwent a modified radical mastectomy. On sentinel lymph node biopsy, one of the sentinel lymph nodes showed metastasis, and an axillary lymph node dissection was performed. On pathologic examination, 2 residual tumors diagnosed as AME with myoepithelial carcinoma were observed, and 1 of the 14 dissected axillary lymph nodes was metastatic. Following surgery, the patient underwent adjuvant chemotherapy with 4 cycles of a docetaxel and cyclophosphamide regimen (docetaxel 75 mg/m 2 intravenous [IV] infusion plus cyclophosphamide 600 mg/m 2 IV infusion given on day 1 every 3 weeks), without doxorubicin, due to a myocardial infarction. After 6 months, multiple lung metastases were observed on follow-up chest computed tomography (CT) scans (Fig. 1B).
The patient therefore underwent palliative chemotherapy with 4 cycles of a gemcitabine and cisplatin regimen (gemcitabine 700 mg/m 2 IV infusion plus cisplatin 30 mg/m 2 infusion given on days 1 and 8 every 3 weeks). At 18 months after surgery, brain metastasis was observed, and brain radiotherapy (whole-brain radiotherapy of 30 Gy in 10 fractions over 2 weeks) and palliative chemotherapy with vinorelbine (vinorelbine 30 mg/m 2 IV on days 1 and 8 every 3 weeks) were performed. However, the patient died 3 months after the diagnosis of brain metastasis and 21 months after the first surgery.
Microscopically, the tumor showed multilobulated collections of monomorphic, polygonal, epithelial, or spindled cells, with multiple pseudocystic necrotic foci (Fig. 2A). The tumor cells demonstrated high mitotic activity and marked pleomorphism (Fig. 2B). The number of mitoses was 16 per 10 high-power fields (HPFs). A few of them encircled and entrapped the normal glands. The tumor showed bicellular proliferation of the epithelial and myoepithelial cells (Fig. 2C). In 1 lymph node, metastatic myoepithelial carcinoma of the breast was observed on hematoxylin and eosin staining (Fig. 2D). The immunohistochemical staining (Fig. 3) of the AME component was positive for CAM5.2, epithelial membrane antigen (EMA), and cytokeratin 5/6 (CK5/6) in the epithelial cells and for CK5/6, CD10, and p63 in the myoepithelial cells. The myoepithelial carcinoma component was positive for CK5/6, CD10, and p63. The tumor cells were negative for estrogen receptor, progesterone receptor, and human epidermal growth factor receptor 2 (HER2)/neu on immunohistochemical staining.
Clinical summary
A 55-year-old woman presented to **************** Hospital with a large palpable mass in the upper inner quadrant of the right breast. Physical examination indicated an~1.5-cm firm mass. On mammography, ultrasound examination, and breast magnetic resonance imaging, a 2.3-cm, spiculated, irregularly shaped, peripheral enhancing mass was observed in the right upper midportion of the breast ( Fig. 4A and 4B). The patient underwent an ultrasound-guided core-needle biopsy, and the pathologic diagnosis was of an epithelial-myoepithelial tumor with myoepithelial overgrowth. The patient then underwent breast-conserving surgery with sentinel lymph node biopsy. The breast tumor was histologically diagnosed as AME with myoepithelial carcinoma, with clean resection margins and no sentinel lymph node metastasis. The patient underwent adjuvant radiotherapy (external beam radiotherapy with 45 Gy for 6 weeks) and 4 cycles of anthracycline-based chemotherapy (doxorubicin 60 mg/m 2 IV infusion plus cyclophosphamide 600 mg/m 2 infusion given on day 1 every 3 weeks).
After 1 year, this patient currently shows no evidence of recurrence or metastasis.
Microscopically, the tumor was composed of a ductal component and a spindle cell component. The ductal component was small, tubular, or cystically dilated, with epithelial and myoepithelial cell components. The spindle cells were epithelioid and vaguely clustered and fibroblastic or smooth muscle-like. The myoepithelial carcinoma was composed mostly of a dominant spindle cell component with admixed ductal cells (Fig. 5A). The stromal cells showed frequent mitoses (10/10 HPF), necrosis, and 30%-40% Ki-67 labeling. On immunohistochemical staining, the spindle cell components were positive for smooth muscle marker (smooth muscle actin [SMA]), epithelial markers (CK and high-molecular weight CK), and myoepithelial cell marker (p63) (Fig. 5B and 5C). Certain cells were positive for S-100 protein, but there were no glial fibrillary acidic protein-positive cells.
Discussion
AMEs are rare tumors, and myoepithelial carcinoma arising in an AME has been reported only in individual case reports and studies of fewer than 5 cases. 1,3-7 AME is composed of a proliferation of layers of myoepithelial cells around epithelium-lined spaces. 2 Either one or both cell types can undergo malignant transformation. 1 According to the fourth edition of the WHO classification, the malignant transformation of AME can be divided into 3 subtypes: (1) epithelial type; (2) myoepithelial type; and (3) epithelial and myoepithelial type. 2 In the present cases, the myoepithelial components underwent malignant transformation; thus, these are instances of AME with myoepithelial carcinoma.
Using double immunofluorescence labeling, the study by Hungermann et al showed that adenomyoepithelial tumor cells coexpress basal type CK5/6 either alone or in combination with glandular CK8/18 or SMA. 7 The study suggested that CK5/6-positive cells may be an essential component in the histogenesis of AME and that biphasic epithelial and myoepithelial tumors may be a consequence of transformation events in CK5-positive stem cells or adult pluripotent progenitor cells, which have undergone divergent differentiation into fully secretory luminal and myoepithelial cells. In the present cases, the immunohistochemical staining of the AME component was positive for CAM5.2, EMA, and CK5/6 in the epithelial cells and for CK5/6, CD10, and p63 in the myoepithelial cells. The tumor cells demonstrated high mitotic activity and marked pleomorphism, and thus the tumor was diagnosed as a myoepithelial carcinoma. Up to 40% of the myoepithelial carcinomas reported in the literature metastasized, and most of these metastases were hematogenous. 3 To date, 14 cases of myoepithelial carcinoma of the breast have been reported with distant metastases, including case 1 of the present study (Table 1). [4][5][6][8][9][10][11][12][13][14][15][16][17] The median age of the metastatic cases was 58 years (range, 42-86 years), and the mean size of the primary tumor was 6.5 cm (range, 2-17 cm). The mitotic counts ranged from 3 to 37 per 10 HPFs. Metastases occurred mostly in the lungs, but also occurred in the liver, bone, and brain and at other sites. 5,8,9,17 A total of 1 thyroid, 13 1 chest wall, 9 and 1 liver metastasis 15 were recorded. Time to progression varied in the 14 cases. In case 1 of the current study, lung metastasis was observed only 6 months after the initial surgery. Michal et al reported lung metastasis within 5 months of surgery, 5 and Chen et al reported bone metastasis within 3 weeks of surgery.
6 However, for most cases, the time from initial treatment to distant metastasis was >21 months.
By contrast, lymphatic metastasis is rare in malignant AME. There is no indication for an axillary lymph node dissection for these lesions unless clinically detected lymphadenopathy is present, as metastasis to the nodes is unusual. 1 Among the 13 cases, only 2, including case 1 of the current study, exhibited axillary lymph node metastasis. Chen et al previously reported 1 case of myoepithelial carcinoma with axillary lymph node metastasis. 6 In that case, bone metastasis was observed only 3 weeks after initial treatment, and the patient died 7 months after the initial treatment. 6 In case 1 of the present study, there was 1 metastatic axillary lymph node; an axillary lymph node dissection was performed, and the patient had a poor prognosis. The prognostic implication of axillary lymph node metastasis is not well understood, as axillary lymph node metastasis is rare in myoepithelial carcinoma. However, the 2 cases with lymph node metastasis suggest that it may be associated with a poor prognosis.
In case 1 of the present study, following adjuvant chemotherapy, lung metastasis was found on chest CT scan only 6 months later. As the tumor size was >2 cm and the tumor exhibited high-grade malignant transformation, 2 this patient was at high risk of metastasis and a poor prognosis. The role of chemotherapy in the management of AME with carcinoma is not proven. According to previously reported cases with distant metastases, AME with carcinoma responds poorly to chemotherapy and has a poor prognosis. The role of radiotherapy also lacks objective evidence. Generally, in AME, immunohistochemical stains for estrogen and progesterone receptors are negative, as is HER2. 2 Therefore, tamoxifen will not be effective for the treatment of AME or AME with carcinoma. In the present cases, immunohistochemical stains for estrogen, progesterone, and HER2 receptors were all negative; therefore, the patients were not treated with hormone therapy or anti-HER2 therapy.
In conclusion, the present study reported 2 cases of AME with myoepithelial carcinoma, 1 of which exhibited metastatic involvement of an axillary lymph node, as well as lung and brain metastases. It is difficult to diagnose AME with myoepithelial carcinoma due to its unusual morphologic features. Therefore, precise diagnosis is important, with the use of relevant immunohistochemistry. The prognosis of AME with myoepithelial carcinoma is poor and the optimal treatment is not proven; therefore, close follow-up and adequate treatment with surgery and adjuvant chemotherapy or radiotherapy should be considered.
Quantum no-signalling bicorrelations
We introduce classical and quantum no-signalling bicorrelations and characterise the different types thereof in terms of states on operator system tensor products, exhibiting connections with bistochastic operator matrices and with dilations of quantum magic squares. We define concurrent bicorrelations as a quantum input-output generalisation of bisynchronous correlations. We show that concurrent bicorrelations of quantum commuting type correspond to tracial states on the universal C*-algebra of the projective free unitary quantum group, showing that in the quantum input-output setup, quantum permutations of finite sets must be replaced by quantum automorphisms of matrix algebras. We apply our results to study the quantum graph isomorphism game, describing the game C*-algebra in this case, and make precise connections with the algebraic notions of quantum graph isomorphism, existing presently in the literature.
In recent years, many fruitful interactions have emerged between entanglement and non-locality in quantum systems, on one hand, and the theory of operator algebras and operator systems, on the other. At a high level, this connection stems from the laws of quantum mechanics, which dictate that the input-output behaviour of local measurements on (bipartite) quantum systems is encoded by non-commutative operator algebras of observables and their state spaces. This provides powerful means to translate between questions of a physical nature and questions formulated in the language of non-commutative analysis. At the base of these developments lie the work of Junge, Navascues, Palazuelos, Perez-Garcia, Scholz and Werner [29], where the relation between the Tsirelson Problem in quantum physics and the Connes Embedding Problem in operator algebra theory was first noticed (see also [44]), and that of Paulsen, Severini, Stahlke, Winter and the third author [46], where the notion of synchronous no-signalling correlation was first defined and characterised. The fruitfulness of these connections has been borne out by many recent works; see [44,35,37,36,39,38,1,40,9] for an incomplete list. We specifically single out Slofstra's ground-breaking work [50,49], which injected ideas from geometric group theory into the theory of non-local games, showing that the set of bipartite quantum correlations is not closed, and the work of Helton, Meyer, Paulsen and Satriano [26], in which an algebraic approach to non-local games was formulated. All of these ideas recently culminated in the resolution of the weak Tsirelson problem and Connes Embedding problem in the preprint [28] by Ji, Natarajan, Vidick, Wright and Yuen.
In the present work, we are primarily interested in investigating the structure of quantum input-quantum output bipartite correlations which generalise the bisynchronous correlations introduced by Paulsen and Rahaman in [47]. Recall that a no-signalling bipartite correlation over the quadruple (X, X, A, A), where X and A are finite sets, is a family of conditional probability distributions p = {p(a, b|x, y) : (x, y) ∈ X × X, (a, b) ∈ A × A} that has well-defined marginals (see e.g. [35]). Operationally, in the commuting operator model of quantum mechanics, p describes the input-output behaviour of a bipartite quantum system, given by a Hilbert space H in state ξ, interpreted as a unit vector in H, on which local measurements are jointly performed: for each x, y ∈ X, two non-communicating parties Alice and Bob have access to mutually commuting local measurement systems E x = (E x,a ) a∈A ⊆ B(H) (for Alice) and F y = (F y,b ) b∈A ⊆ B(H) (for Bob). Given input x, Alice uses the system E x to measure ξ, and similarly, given y, Bob uses F y to measure ξ; the resulting outcomes of Alice and Bob's measurements are (a, b) ∈ A × A with probability p(a, b|x, y) = E x,a F y,b ξ, ξ .
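As a minimal numerical sketch of this measurement scenario (the state, measurement bases and angles below are illustrative assumptions, not taken from the paper), one can build commuting local measurement systems on a bipartite Hilbert space and verify that the resulting correlation p has well-defined marginals:

```python
import numpy as np

d = 2
I2 = np.eye(d)

def basis_pvm(theta):
    # Rank-one projections onto a rotated orthonormal basis of C^2.
    v0 = np.array([np.cos(theta), np.sin(theta)])
    v1 = np.array([-np.sin(theta), np.cos(theta)])
    return [np.outer(v, v) for v in (v0, v1)]

# Alice's systems act on the first tensor factor, Bob's on the second,
# so E[x][a] and F[y][b] mutually commute.
E = {x: [np.kron(P, I2) for P in basis_pvm(t)] for x, t in enumerate((0.0, 0.3))}
F = {y: [np.kron(I2, Q) for Q in basis_pvm(t)] for y, t in enumerate((0.1, 0.7))}

# Shared state: a maximally entangled unit vector in C^2 (x) C^2.
xi = np.zeros(d * d)
xi[0] = xi[3] = 1 / np.sqrt(2)

# p(a, b | x, y) = <E_{x,a} F_{y,b} xi, xi>.
p = {(a, b, x, y): float(xi @ E[x][a] @ F[y][b] @ xi)
     for x in E for y in F for a in range(d) for b in range(d)}

# No-signalling: Alice's outcome distribution is independent of Bob's input y.
for x in E:
    for a in range(d):
        assert abs(sum(p[(a, b, x, 0)] for b in range(d))
                   - sum(p[(a, b, x, 1)] for b in range(d))) < 1e-12
```

The same check with the roles of the parties exchanged verifies the other marginal condition.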
We say that a correlation p is synchronous if p(a, b|x, x) = 0 for all x ∈ X and all a ≠ b. Heuristically, Alice and Bob's behaviour is synchronised in that they appear to invoke the same "virtual function" X → A to obtain their outputs, depending on the given inputs. A correlation p is called bisynchronous [47] if it is synchronous and has the additional property that p(a, a|x, y) = 0 for all a ∈ A and x ≠ y. In this case, the "virtual function" X → A behaves as though it were in addition injective.
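The two conditions can be sketched for the simplest deterministic strategy, in which both parties answer f(input) for a fixed bijection f (the particular f below is an arbitrary illustrative choice):

```python
# Deterministic strategy given by a bijection f : X -> X.
X = range(3)
f = {0: 2, 1: 0, 2: 1}

p = {(a, b, x, y): float(a == f[x] and b == f[y])
     for x in X for y in X for a in X for b in X}

# Synchronous: equal inputs never produce different outputs.
assert all(p[(a, b, x, x)] == 0 for x in X for a in X for b in X if a != b)
# Bisynchronous: different inputs never produce equal outputs (f is injective).
assert all(p[(a, a, x, y)] == 0 for a in X for x in X for y in X if x != y)
```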
Using the language of operator algebras and non-commutative geometry, one can make the intuition, highlighted in the previous paragraph, precise. Let A X,A = ⋆ |X| ℓ ∞ (A) be the unital free product of |X| copies of the |A|-dimensional abelian C*-algebra ℓ ∞ (A). The C*-algebra A X,A is a C*-cover of the universal operator system S X,A with generators e x,a , where x ∈ X and a ∈ A, subject to the relations that each e x,a is a self-adjoint idempotent (e x,a = e x,a ² = e x,a *) and ∑ a∈A e x,a = 1, x ∈ X. Within the framework of non-commutative geometry, A X,A can be regarded as a quantisation of the finite-dimensional C*-algebra C(F(X, A)) of complex-valued functions on the set F(X, A) of functions f : X → A. It was shown in [46] that a no-signalling correlation p of quantum commuting type is synchronous if and only if there is a tracial state τ on A X,A such that p(a, b|x, y) = τ (e x,a e y,b ), x, y ∈ X, a, b ∈ A. (1) If the correlation p is bisynchronous (and |X| = |A|), then [47] p arises via (1) from a tracial state τ on the C*-algebra C(S X + ) of the quantum permutation group [56]. Similarly to A X,A , the C*-algebra C(S X + ) is the universal unital C*-algebra with generators e x,a , x, a ∈ X, further satisfying the additional relations ∑ x∈X e x,a = 1, a ∈ A. Note that C(S X + ) is a free analogue of the algebra C(S X ) of complex functions on the permutation group S X of X, and is itself a C*-algebraic quantum group [56].
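The commutative extreme of (1) can be checked in finite dimensions. Representing e_{x,a} as the diagonal projection onto {σ ∈ S_X : σ(x) = a} inside C(S_X), with τ the normalised trace, yields a bisynchronous correlation; this finite model is an illustration only and does not capture the universal C*-algebra C(S_X^+):

```python
import itertools
import numpy as np

X = 3
perms = list(itertools.permutations(range(X)))
N = len(perms)

def e(x, a):
    # Diagonal projection onto the permutations sending x to a.
    return np.diag([1.0 if s[x] == a else 0.0 for s in perms])

tau = lambda M: np.trace(M) / N     # the normalised trace on C(S_X)
p = {(a, b, x, y): tau(e(x, a) @ e(y, b))
     for x in range(X) for y in range(X) for a in range(X) for b in range(X)}

# (e(x, a))_{x,a} is a magic unitary: each row and each column sums to 1.
for x in range(X):
    assert np.allclose(sum(e(x, a) for a in range(X)), np.eye(N))
    assert np.allclose(sum(e(y, x) for y in range(X)), np.eye(N))

# The resulting correlation is bisynchronous.
assert all(p[(a, a, x, y)] == 0
           for a in range(X) for x in range(X) for y in range(X) if x != y)
```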
Bisynchronous correlations arise in the analysis of certain classes of nonlocal games, most notably the graph isomorphism game [1,36,38,9] and the related metric isometry game [22]. Here, deep and unexpected connections emerged between quantum permutation groups, no-signalling correlations and graph theory. At the same time, connections were established between graph isomorphism games and quantum graphs [40,41,9]. In particular, in the aforementioned works, a natural (operator) algebraic notion of a quantum isomorphism between quantum graphs was introduced.
One of the main motivations behind the present work is the desire to provide an operational characterisation of quantum isomorphisms between quantum graphs in terms of bipartite correlations. As the term suggests, the description of a quantum graph (in any of its many guises [51,40,9,11]) requires a suitable quantum version of the notion of a vertex or edge, using the language of bipartite quantum systems. Hence one is naturally led to consider bipartite no-signalling correlations which allow quantum states as inputs and outputs.
Quantum input-quantum output no-signalling (QNS) correlations were introduced by Duan and Winter [20], and subsequently systematically studied in [52,7,11]. Given finite sets X and A, and denoting by M X (resp. M A ) the full matrix algebra over the |X|-dimensional Hilbert space, a QNS correlation over the quadruple (X, X, A, A) is a quantum channel satisfying a pair of additional constraints, equivalent to the existence of marginal channels (see equations (5) and (6), and the article [20] for further details). Since any classical no-signalling correlation p over (X, X, A, A) can be regarded as a QNS correlation Γ p that preserves the corresponding diagonal subalgebras, QNS correlations constitute a genuine generalisation of their classical counterparts (see also equation (8)).
The main purpose of the present work is to develop a notion, and find (operational and operator algebraic) characterisations, of bisynchronicity in the quantum input-output setting. In parallel with the classical setting, here we focus our attention on the case where the input and output systems are of the same size, that is, |A| = |X|. In this case, it is natural to consider "bistochastic" correlations Γ : M X ⊗ M X → M A ⊗ M A , that is, unital QNS correlations with the additional property that the dual channels Γ * are also QNS correlations; these channels are referred to as QNS bicorrelations (see Definition 5.1). A quantisation of bisynchronicity must involve a suitable quantum counterpart of the property of sending identical inputs to identical outputs. In bipartite quantum systems, this is naturally captured by how Γ (and Γ *) acts on the canonical maximally entangled state. More precisely, if (ǫ x,y ) x,y∈X is the canonical matrix unit system of M X , and J X = (1/|X|) ∑ x,y∈X ǫ x,y ⊗ ǫ x,y is the maximally entangled state, then it is natural to impose the condition Γ(J X ) = J A . (2)
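A small numerical check of this condition (the channel below, conjugation by U ⊗ Ū for a random unitary U, is an illustrative example rather than a construction from the paper): any such channel fixes the maximally entangled state.

```python
import numpy as np

d = 3
rng = np.random.default_rng(0)
# A random unitary U via QR decomposition of a complex Gaussian matrix.
U, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))

# J_X = (1/|X|) sum_{x,y} eps_{x,y} (x) eps_{x,y}, i.e. the projection onto
# the maximally entangled unit vector omega.
omega = np.zeros(d * d, dtype=complex)
for x in range(d):
    omega[x * d + x] = 1 / np.sqrt(d)
J = np.outer(omega, omega.conj())

# The channel rho -> W rho W* with W = U (x) conj(U) satisfies Gamma(J) = J.
W = np.kron(U, U.conj())
assert np.allclose(W @ J @ W.conj().T, J)
```

The identity (U ⊗ Ū)ω = ω, valid for every unitary U, is what makes the assertion pass.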
Condition (2) on a QNS correlation Γ was introduced and studied in detail in our previous work [11], where it was called concurrency. For a QNS bicorrelation Γ, its concurrency is equivalent to concurrency for Γ * (see Remark 6.2). From an operational viewpoint, concurrent bicorrelations Γ are characterised by the property that Γ and Γ * preserve the perfect correlation of local measurements in both directions: the input state J X is characterised by the property that local measurements performed on J X in any fixed basis are always perfectly correlated with uniformly random outcomes. Concurrent bicorrelations thus respect this perfect correlative structure, and hence rightfully can be interpreted as fully quantum versions of bisynchronous correlations.
We study the various types of QNS bicorrelations (quantum commuting, quantum approximate, quantum and local) in detail, providing operator system/algebra characterisations thereof. After providing necessary preliminaries in Section 2, in Section 3 we exhibit operator bistochastic matrices, which can be viewed as quantum and operator-valued generalisations of classical bistochastic matrices. Operator bistochastic matrices turn out to be the suitable mathematical objects encoding each of the parties of a QNS bicorrelation. We characterise concretely the universal operator system T X of an operator bistochastic matrix as the subspace spanned by natural order two products associated with the entries of a universal block operator bi-isometry V : C |X| ⊗ H → C |X| ⊗ K (that is, an isometry V for which the transpose V t is also an isometry). We further identify the dual operator system of T X and establish several properties of T X and its universal C*-algebra C X . At the heart of our arguments is a factorisation result for bistochastic operator matrices (Theorem 3.2). Our results should be compared to those of [52], where a similar development was undertaken for the universal operator system T X,A of a block operator isometry, and the corresponding C*-algebra C X,A .
The diagonal expectations (intuitively, the classical components) of bistochastic operator matrices coincide with quantum magic squares, introduced by De Las Cuevas, Drescher and Netzer in [16]; conversely, bistochastic operator matrices can be viewed as quantum versions of quantum magic squares. In Section 4, we build on this connection and rephrase some of the results of [16] in the language of operator systems. Indeed, one of the main results in [16] is the fact that not every quantum magic square admits a dilation to a quantum permutation. In Theorem 4.5, we characterise the dilatability of a quantum magic square in terms of the complete positivity of natural maps, associated with the given quantum magic square, and defined on the operator system P X ⊆ C(S + X ) spanned by the coefficients of a quantum permutation matrix. We demonstrate that the non-dilatability of quantum magic squares is due to the distinction between different operator system structures.
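A minimal example of a quantum magic square (a standard toy construction, not taken from [16]): a matrix whose entries are positive operators and whose rows and columns each sum to the identity. A quantum magic square with scalar (1 × 1) entries is exactly a doubly stochastic matrix.

```python
import numpy as np

I = np.eye(2)
# A non-diagonal projection: P = P* = P^2.
P = np.array([[0.5, 0.5], [0.5, 0.5]])

# A 2x2 quantum magic square with entries in M_2.
M = [[P, I - P],
     [I - P, P]]

for i in range(2):
    assert np.allclose(M[i][0] + M[i][1], I)   # row sums to the identity
    assert np.allclose(M[0][i] + M[1][i], I)   # column sums to the identity
```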
In Section 5, we introduce the types of quantum no-signalling bicorrelations, corresponding to different physical models (local, quantum, approximately quantum, quantum commuting and general no-signalling), and characterise them in terms of states on the various operator system structures with which the algebraic tensor product T X ⊗ T X can be endowed. Here we rely on the tensor product theory developed in [32]. We pay separate attention to classical no-signalling bicorrelations, showing that their encoding operator system S X is the universal operator system spanned by the entries of an X × X quantum magic square studied in Section 4, and obtaining similar characterisations in terms of states on operator system tensor products on the algebraic tensor product S X ⊗ S X .
In Section 6, we focus our attention on concurrent bicorrelations, establishing in Theorem 6.7 a characterisation of concurrent quantum commuting bicorrelations in terms of tracial states. We show that the C*-algebra, whose tracial states are of interest here, is the C*-algebra C(PU X + ) of functions on the projective free unitary quantum group. In Section 7, we consider quantum graphs, viewed as symmetric skew subspaces U ⊆ C X ⊗ C X [8,51,19,52,11]. We define quantum isomorphisms between quantum graphs in terms of perfect QNS strategies for a suitable quantum graph isomorphism game, building on the approach to quantum graph homomorphisms followed in [52]. In Theorem 7.4, we characterise quantum commuting isomorphisms between quantum graphs U , V ⊆ C X ⊗ C X in terms of the existence of a bi-unitary matrix U = (u x,a ) x,a ∈ M X (B(H)) satisfying a natural intertwining relation (3), such that C(PU X + ) admits a tracial state τ , where S̃ U and S̃ V are the traceless, symmetric subspaces, canonically associated with U and V, respectively. Note that condition (3) is a quantum counterpart of the characterisation [1] of quantum isomorphisms of classical graphs in terms of quantum permutation matrices that intertwine the relevant adjacency matrices, through the replacement of quantum permutations by bi-unitaries (see Remark 7.6). We further formalise the relations (3) in Theorem 7.10, where we introduce a natural game algebra A P,Q whose tracial states encode the perfect quantum commuting strategies for the (U, V)-isomorphism game. We note, in particular, that when U = V, the algebra A P,Q admits the structure of a compact quantum group, which seems to generalise the quantum automorphism group of a classical graph. We leave the study of these quantum groups for future work.
Finally, in Section 8, we compare the operational notion of quantum graph isomorphism of Section 7 to the operator algebraic notions that have appeared previously in the literature, and which have been based mainly on adjacency matrices [40,41,9,15]. We show, in Theorem 8.9, that the algebraic quantum isomorphisms considered in the aforementioned works fit into our framework as special cases. The arguments and ideas for the proof of this theorem rely on the recent work of Daws on quantum graphs [15]. In Theorem 8.14, we establish a partial converse, exhibiting the precise conditions, under which the algebraic and the operational notions of quantum graph isomorphism coincide.
Acknowledgements. M.B. was partially supported by an NSERC discovery grant. S.H. was partially supported by an NSERC Postdoctoral Fellowship. I.T. was partially supported by NSF grant DMS-2154459 and a Simons Foundation grant (award number 708084).
Preliminaries
In this section, we collect basic preliminaries on quantum no-signalling correlations, set notation and introduce terminology. Let H be a Hilbert space. As usual, we denote by B(H) the space of all bounded linear operators on H and sometimes write L(H) if H is finite dimensional. We denote by I H the identity operator on H and, if ξ, η ∈ H, we let ξη * be the rank one operator given by (ξη *)(ζ) = ⟨ζ, η⟩ ξ. In addition to inner products, ⟨·, ·⟩ will denote the duality between a vector space and its dual. We let B(H) + be the cone of positive operators in B(H), and further denote by T (H) its ideal of trace class operators and by Tr the trace functional on T (H).
An operator system is a self-adjoint subspace S ⊆ B(H), for some Hilbert space H, containing I H . If S is an operator system, the universal C*-cover of S [34] is a pair (C * u (S), ι), where C * u (S) is a unital C*-algebra and ι : S → C * u (S) is a unital complete order embedding, such that ι(S) generates C * u (S) as a C*-algebra and, whenever K is a Hilbert space and φ : S → B(K) is a unital completely positive map, there exists a *-representation π φ : C * u (S) → B(K) such that π φ • ι = φ. If S is a finite dimensional operator system then its Banach space dual S d can be viewed as an operator system [14,Corollary 4.5]. We refer the reader to [45] for information and background on operator systems and completely positive maps.
We denote by |X| the cardinality of a finite set X, let H X = ⊕ x∈X H and write M X for the space of all complex matrices of size |X| × |X|; we identify M X with L(C X ) and set I X = I C X . For n ∈ N, we let [n] = {1, . . . , n} and M n = M [n] . We write (e x ) x∈X for the canonical orthonormal basis of C X , (ǫ x,x ′ ) x,x ′ ∈X for the canonical matrix unit system in M X , and denote by D X the subalgebra of M X of all diagonal matrices with respect to the basis (e x ) x∈X . If V is a vector space, we write M X (V) for the space of all X × X matrices with entries in V; we note that there is a canonical linear identification between M X (V) and M X ⊗ V. Here, and in the sequel, we use the symbol ⊗ to denote the algebraic tensor product of vector spaces.
For an element ω ∈ M X , we denote by ω t the transpose of ω in the canonical basis, and write ω̄ for the complex conjugate of ω; thus, ω̄ = (ω t ) *. The canonical complete order isomorphism from M X onto its dual operator system M d X maps an element ω ∈ M X to the linear functional f ω : M X → C given by f ω (T ) = Tr(T ω t ); see e.g. [48, Theorem 6.2]. We will thus consider M X as self-dual with the pairing (ρ, ω) → ⟨ρ, ω⟩ := Tr(ρω t ). (4)
On the other hand, note that the Banach space predual B(H) * can be canonically identified with T (H); every normal functional φ : B(H) → C thus corresponds to a (unique) operator S φ ∈ T (H) such that φ(T ) = Tr(T S φ ), T ∈ B(H). In the case where X is a fixed finite set (which will sometimes come in the form of a Cartesian product), we will use a mixture of the two dualities just discussed: if ω, ρ ∈ M X , S ∈ T (H) and T ∈ B(H), it will be convenient to continue writing ⟨ρ ⊗ T, ω ⊗ S⟩ = Tr(ρω t ) Tr(T S).
If X and Y are finite sets, we identify M X ⊗ M Y with M X×Y and write M XY in its place. Similarly, we set D XY = D X ⊗ D Y . For an element ω X ∈ M X and a Hilbert space H, we let L ω X : M X ⊗ B(H) → B(H) be the linear map given by L ω X (S ⊗ T ) = ⟨S, ω X ⟩T . If H = C Y and ω Y ∈ M Y , we thus have a linear map L ω X : M XY → M Y , and a similar formula holds for L ω Y . We let Tr X : M XY → M Y (resp. Tr Y : M XY → M X ) be the partial trace; thus, Tr X = L I X (resp. Tr Y = L I Y ). Let X, Y , A and B be finite sets. A quantum channel from M X into M A is a completely positive trace preserving map Φ : M X → M A . A quantum correlation over (X, Y, A, B) (or simply a quantum correlation if the sets are understood from the context) is a quantum channel Γ : M XY → M AB . Such a Γ is called a quantum no-signalling (QNS) correlation [20] if Tr A Γ(ρ X ⊗ ρ Y ) = 0 whenever Tr(ρ X ) = 0, (5) and Tr B Γ(ρ X ⊗ ρ Y ) = 0 whenever Tr(ρ Y ) = 0. (6) We denote by Q ns the set of all QNS correlations. A stochastic operator matrix over (X, A), acting on a Hilbert space H, is a positive block operator matrix Ẽ = (E x,x ′ ,a,a ′ ) x,x ′ ,a,a ′ ∈ M XA (B(H)) such that Tr A Ẽ = I.
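The stochasticity condition Tr_A Ẽ = I can be sketched numerically. The example below uses a block-diagonal matrix built from POVMs, one simple instance of a stochastic operator matrix (the POVM entries are illustrative assumptions):

```python
import numpy as np

dH, X, A = 2, 2, 2
# A POVM element: 0 <= P <= I on H = C^2.
P = np.array([[0.7, 0.1], [0.1, 0.3]])
povms = {0: [P, np.eye(dH) - P],
         1: [np.eye(dH) - P, P]}

def unit(i, n):
    e = np.zeros((n, n))
    e[i, i] = 1.0
    return e

# E = sum_{x,a} eps_{x,x} (x) eps_{a,a} (x) E_{x,a} in M_X (x) M_A (x) B(H).
E = sum(np.kron(np.kron(unit(x, X), unit(a, A)), povms[x][a])
        for x in range(X) for a in range(A))

# Partial trace over the A factor; the result should be I on C^X (x) H.
E6 = E.reshape(X, A, dH, X, A, dH)
TrA = np.einsum('xahyaz->xhyz', E6).reshape(X * dH, X * dH)

assert np.all(np.linalg.eigvalsh(E) >= -1e-12)   # E is positive
assert np.allclose(TrA, np.eye(X * dH))          # Tr_A E = I
```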
A QNS correlation Γ : M XY → M AB is quantum commuting if there exist a Hilbert space H, a unit vector ξ ∈ H and stochastic operator matrices E = (E x,x ′ ,a,a ′ ) over (X, A) and F = (F y,y ′ ,b,b ′ ) over (Y, B), acting on H and with mutually commuting entries, such that Γ(ǫ x,x ′ ⊗ ǫ y,y ′ ) = ∑ a,a ′ ,b,b ′ ⟨E x,x ′ ,a,a ′ F y,y ′ ,b,b ′ ξ, ξ⟩ ǫ a,a ′ ⊗ ǫ b,b ′ (7) for all x, x ′ ∈ X and all y, y ′ ∈ Y . Quantum QNS correlations are defined as in (7), but requiring that H be finite dimensional and that the matrices E and F act on (finite dimensional) tensor factors of H; approximately quantum QNS correlations are limits of quantum ones. We write Q qc (resp. Q qa , Q q , Q loc ) for the (convex) set of all quantum commuting (resp. approximately quantum, quantum, local) QNS correlations, and note the inclusions Q loc ⊆ Q q ⊆ Q qa ⊆ Q qc ⊆ Q ns . Recall that a (classical) no-signalling (NS) correlation is a family p = {(p(a, b|x, y)) a,b : (x, y) ∈ X × Y } of probability distributions over A × B, such that ∑ b∈B p(a, b|x, y) = ∑ b∈B p(a, b|x, y ′ ), x ∈ X, y, y ′ ∈ Y, a ∈ A, and ∑ a∈A p(a, b|x, y) = ∑ a∈A p(a, b|x ′ , y), x, x ′ ∈ X, y ∈ Y, b ∈ B (see e.g. [35,46]). We denote the (convex) set of all NS correlations by C ns . With a correlation p ∈ C ns , we associate the classical information channel Γ p : D XY → D AB , given by Γ p (ǫ x,x ⊗ ǫ y,y ) = ∑ a,b p(a, b|x, y) ǫ a,a ⊗ ǫ b,b . (8) The subclasses C t of C ns , for t ∈ {loc, q, qa, qc}, are defined as in the previous paragraph, but using classical stochastic operator matrices, that is, stochastic operator matrices of the form E = ∑ x∈X ∑ a∈A ǫ x,x ⊗ ǫ a,a ⊗ E x,a . Note that the condition for E being stochastic is equivalent to the requirement that (E x,a ) a∈A is a positive operator-valued measure (POVM) for all x ∈ X. We note the inclusions C loc ⊆ C q ⊆ C qa ⊆ C qc ⊆ C ns , all of which are strict: C loc ≠ C q is the Bell Theorem [4], C q ≠ C qa is a negative answer to the weak Tsirelson Problem [49] (see also [21,50]), and C qa ≠ C qc follows, in view of [25,29,44], from the announced negative solution of the Connes Embedding Problem [28].
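For the simplest examples, the no-signalling requirement can be observed directly (the unitary-conjugation channels below are illustrative assumptions): for a product channel Γ = Φ ⊗ Ψ, the marginal Tr_A Γ(ρ_X ⊗ ρ_Y) does not depend on ρ_X.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 2

def rand_unitary():
    Q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return Q

def rand_state():
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = G @ G.conj().T
    return rho / np.trace(rho)

U, V = rand_unitary(), rand_unitary()
W = np.kron(U, V)
Gamma = lambda rho: W @ rho @ W.conj().T     # Gamma = Phi (x) Psi

def tr_first(T):
    # Partial trace over the first tensor factor.
    return np.einsum('abac->bc', T.reshape(d, d, d, d))

# Tr_A Gamma(rho_X (x) rho_Y) = Psi(rho_Y), independently of rho_X.
rhoY = rand_state()
margs = [tr_first(Gamma(np.kron(rand_state(), rhoY))) for _ in range(3)]
assert np.allclose(margs[0], margs[1]) and np.allclose(margs[0], margs[2])
```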
Bistochastic operator matrices
In this section we define and examine bistochastic operator matrices, which constitute a specialisation of stochastic operator matrices [52, Section 3] to the new context to be considered herein. Let X be a finite set, and set A = X. The distinct symbols X and A will continue to be used to indicate the variable with respect to which a partial trace is taken; the symbol X usually refers to the domain of a quantum channel, while A to its codomain. A bistochastic operator matrix over X, acting on a Hilbert space H, is a stochastic operator matrix E = (E x,x ′ ,a,a ′ ) x,x ′ ,a,a ′ ∈ M XA (B(H)) which satisfies, in addition, Tr X E = I. Theorem 3.2 states that, for a positive E ∈ M XA (B(H)), the following are equivalent: (i) E is a bistochastic operator matrix; (ii) there exist a Hilbert space K and operators V a,x ∈ B(H, K), x, a ∈ X, such that (V a,x ) a,x∈X is a bi-isometry and E x,x ′ ,a,a ′ = V * a,x V a ′ ,x ′ , x, x ′ , a, a ′ ∈ X.
and hence Tr
be the linear map, given by Φ(ǫ a,a ′ ) = E a,a ′ , a, a ′ ∈ A. By Choi's Theorem, Φ is a unital completely positive map and, by Stinespring's Theorem, there exist a Hilbert spaceK, an isometry V : C X ⊗ H →K and a unital *-homomorphism π : x ∈ X, for the entries of V , when V is considered as a block operator matrix. As in [52, Theorem 3.1], we conclude that On the other hand, writing ρ = (ρ a,a ′ ) a,a ′ ∈X , we have The latter equality holds for every ω ∈ T (H); thus, that is, V t is an isometry.
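The scalar case of this result (H = K = C, an assumption made here purely for illustration) can be checked numerically: a scalar bi-isometry over X is precisely a unitary matrix V, and, assuming the factorisation E_{x,x',a,a'} = V_{a,x}* V_{a',x'} from Theorem 3.2, the resulting matrix is positive with both partial traces equal to the identity.

```python
import numpy as np

rng = np.random.default_rng(2)
X = 3
V, _ = np.linalg.qr(rng.normal(size=(X, X)) + 1j * rng.normal(size=(X, X)))

# V and its transpose are both isometries, i.e. V is a scalar bi-isometry.
Vt = V.T
assert np.allclose(V.conj().T @ V, np.eye(X))
assert np.allclose(Vt.conj().T @ Vt, np.eye(X))

# E_{(x,a),(x',a')} = conj(V_{a,x}) V_{a',x'}, a rank-one positive matrix.
w = np.array([V[a, x] for x in range(X) for a in range(X)])
E = np.outer(w.conj(), w)

E4 = E.reshape(X, X, X, X)                                 # axes (x, a, x', a')
assert np.all(np.linalg.eigvalsh(E) >= -1e-12)             # E >= 0
assert np.allclose(np.einsum('xaya->xy', E4), np.eye(X))   # Tr_A E = I
assert np.allclose(np.einsum('xaxb->ab', E4), np.eye(X))   # Tr_X E = I
```

The two partial-trace identities are exactly the row and column unitarity relations of V.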
3.2. The universal operator system. Recall [27,58] that a ternary ring is a complex vector space V, equipped with a ternary operation [·, ·, ·] : V × V × V → V. A ternary representation of V is a linear map θ : V → B(H, K), for some Hilbert spaces H and K, such that θ([u, v, w]) = θ(u)θ(v) * θ(w), u, v, w ∈ V. We call θ non-degenerate if span{θ(u) * η : u ∈ V, η ∈ K} is dense in H. A (concrete) ternary ring of operators (TRO) [58] is a subspace U ⊆ B(H, K) for some Hilbert spaces H and K such that S, T, R ∈ U implies ST * R ∈ U . We refer the reader to [6, Section 4.4] for details about TRO's and their abstract versions that will be used in the sequel. Let V 0 X be the ternary ring, generated by elements v a,x , a, x ∈ X, satisfying the relations (11) for all x, x ′ , x ′′ , a, a ′ , a ′′ ∈ X. Note that relations (11) are equivalent to (12) for all x, x ′ , a, a ′ ∈ X and all u ∈ V 0 X . Conditions (12) imply that the non-degenerate ternary representations θ : V 0 X → B(H, K) correspond to bi-isometries V = (V a,x ) a,x via the assignment V a,x = θ(v a,x ); in this case, we write θ = θ V . Following [52, Section 5], we let θ̃ = ⊕ V θ V , where in the direct sum we have chosen one representative from each unitary equivalence class of bi-isometries and the cardinality of the underlying Hilbert spaces is bounded by that of V. The assignment ∥u∥ := ∥θ̃(u)∥ defines a semi-norm on V 0 X ; we set V X := V 0 X / ker θ̃, observe that V X is a TRO, and continue to write v a,x for the images of the canonical generators of V 0 X under the quotient map q : V 0 X → V X . The maps θ̃ and θ V (for a bi-isometry V ) give rise to corresponding ternary representations of V X , which we denote in the same way. Let C X be the right C*-algebra of the TRO V X (so that, up to a *-isomorphism, C X is generated by the products u * w, u, w ∈ V X ). Letting e x,x ′ ,a,a ′ = v * a,x v a ′ ,x ′ , x, x ′ , a, a ′ ∈ X, we set (13) T X = span{e x,x ′ ,a,a ′ : x, x ′ , a, a ′ ∈ X}, viewed as an operator subsystem of C X . It is immediate that T X is a self-adjoint subspace of C X containing the unit, and that the relations (14) ∑ b∈A e x,x ′ ,b,b = δ x,x ′ 1 and ∑ y∈X e y,y,a,a ′ = δ a,a ′ 1, x, x ′ , a, a ′ ∈ X, hold true.
For a bi-isometry V , acting on the Hilbert space H, we write π V : C X → B(H) for the *-representation of C X determined by V , so that (15) π V (e x,x ′ ,a,a ′ ) = V * a,x V a ′ ,x ′ , x, x ′ , a, a ′ ∈ X. Lemma 3.3. The following hold true: (i) Every non-degenerate ternary representation of V X has the form θ V , for some bi-isometry V . (ii) The map θ̃ is a faithful ternary representation of V X . (iii) Every unital *-representation π of C X has the form π V , for some bi-isometry V .
Proof. The arguments are similar to the ones in [52, Lemma 5.1] where a version of our current setup is considered for isometries (that are not necessarily bi-isometries). We address (iii) for the convenience of the reader.
Let V X,A be the universal TRO of an isometry (ṽ a,x ) a,x∈X , defined similarly to the TRO V X [52, Section 5]. Thus, the TRO V X,A arises from a ternary ring, whose canonical generators ṽ a,x , x, a ∈ X, are required to satisfy only the first of the relations (11). We let C X,A be the right C*-algebra of the TRO V X,A . Letting ẽ x,x ′ ,a,a ′ = ṽ * a,x ṽ a ′ ,x ′ , x, x ′ , a, a ′ ∈ X, we write (16) T X,A = span{ẽ x,x ′ ,a,a ′ : x, x ′ , a, a ′ ∈ X}, viewed as an operator subsystem of C X,A [52]. It was shown in [52, Theorem 5.2] that, for a Hilbert space H, the unital completely positive maps φ : T X,A → B(H) correspond to stochastic operator matrices (E x,x ′ ,a,a ′ ) x,x ′ ,a,a ′ via the assignment φ(ẽ x,x ′ ,a,a ′ ) = E x,x ′ ,a,a ′ . We next provide a bistochastic version of this fact, to be used subsequently.
(ii)⇒(iii) By Theorem 3.2, there exist a Hilbert space K and a bi-isometry V = (V_{a,x})_{a,x}, with V_{a,x} ∈ B(H, K), such that φ(e_{x,x′,a,a′}) = V*_{a,x} V_{a′,x′} for all x, x′, a, a′ ∈ X, and hence the *-representation π_V of C_X is an extension of φ.
(iii)⇒(i) is trivial. (i')⇒(ii') is a direct consequence of (13) and the fact that T X is an operator subsystem of C X .
(ii')⇒(i') Let T = φ(1) and note that, for any x, a ∈ X, we have 0 ≤ φ(e_{x,x,a,a}) ≤ T. Assume first that T is invertible. Following the proof of [52, Proposition 5.4], let ψ : T_X → B(H) be the map given by (18) ψ(u) = T^{−1/2} φ(u) T^{−1/2}, u ∈ T_X. Setting F = (ψ(e_{x,x′,a,a′}))_{x,x′,a,a′}, we have that F ≥ 0, and (17) shows that F is a bistochastic operator matrix. By the implication (ii)⇒(i), ψ is completely positive, and hence so is φ, as φ(·) = T^{1/2} ψ(·) T^{1/2}. Now relax the assumption that T be invertible. Using the implication (ii)⇒(i), let f : T_X → C be the state given by f(e_{x,x′,a,a′}) = (1/|X|) δ_{x,x′} δ_{a,a′} and, for ǫ > 0, let φ_ǫ : T_X → B(H) be the map given by φ_ǫ(u) = φ(u) + ǫ f(u) I, u ∈ T_X; then φ_ǫ(1) = T + ǫI is invertible. By the previous paragraph, φ_ǫ is completely positive and, since φ_ǫ → φ in the point-norm topology as ǫ → 0, we conclude that φ is completely positive. Finally, suppose that E = (E_{x,x′,a,a′})_{x,x′,a,a′} is a bistochastic operator matrix acting on H. Letting V be the bi-isometry, associated with E via Theorem 3.2, we have that the completely positive map φ := π_V|_{T_X} satisfies the equalities φ(e_{x,x′,a,a′}) = E_{x,x′,a,a′} for all x, x′, a, a′.
We note that, if S is an operator system, its Banach space dual S^d can be equipped with a natural matricial order structure. To this end, we recall [14, Section 4] that any matrix φ = (φ_{i,j})^n_{i,j=1} ∈ M_n(S^d) gives rise to a linear map F_φ : S → M_n, defined by letting F_φ(s) = (φ_{i,j}(s))^n_{i,j=1}, s ∈ S, and set M_n(S^d)^+ = {φ ∈ M_n(S^d) : F_φ is completely positive}. It was shown in [14, Corollary 4.5] that, if S is a finite dimensional operator system then the (matrix ordered) dual S^d is an operator system, when equipped with a suitable faithful state as an Archimedean order unit. It is straightforward to verify that, in this case, S^{dd} ≅_{c.o.i.} S. We identify an element T ∈ M_{XA} with its matrix (λ_{x,x′,a,a′})_{x,x′,a,a′}, where λ_{x,x′,a,a′} = ⟨T(e_{x′} ⊗ e_{a′}), e_x ⊗ e_a⟩, x, x′ ∈ X, a, a′ ∈ A. Let L_{X,A} = {(λ_{x,x′,a,a′}) ∈ M_{XA} : there exists c ∈ C with ∑_{b∈A} λ_{x,x′,b,b} = δ_{x,x′} c for all x, x′ ∈ X}, and consider L_{X,A} as an operator subsystem of M_{XA}. It was shown in [52, Proposition 5.5] that the linear map Λ̃ : T^d_{X,A} → L_{X,A}, given by Λ̃(φ) = (φ(ẽ_{x,x′,a,a′}))_{x,x′,a,a′}, is a unital complete order isomorphism between T^d_{X,A} and L_{X,A}. Let L_X = {(λ_{x,x′,a,a′}) ∈ M_{XX} : there exist c₁, c₂ ∈ C with ∑_{b∈X} λ_{x,x′,b,b} = δ_{x,x′} c₁ and ∑_{y∈X} λ_{y,y,a,a′} = δ_{a,a′} c₂ for all x, x′, a, a′ ∈ X}. Remark 3.5. If C = (λ_{x,x′,a,a′})_{x,x′,a,a′} ∈ M_{XA} is a matrix and c₁, c₂ are scalars such that ∑_{b∈X} λ_{x,x′,b,b} = δ_{x,x′} c₁ for all x, x′ ∈ X and ∑_{y∈X} λ_{y,y,a,a′} = δ_{a,a′} c₂ for all a, a′ ∈ A, then |X| c₁ = |A| c₂; in particular, c₁ = c₂ when A = X. Proposition 3.6. The linear map Λ : T^d_X → L_X, given by (20) Λ(φ) = (φ(e_{x,x′,a,a′}))_{x,x′,a,a′}, is a well-defined complete order isomorphism.
Proof. The arguments follow the proof of [52, Proposition 5.5], and we only highlight the required modifications. Using Theorem 3.4, we see that the map Λ + : T d X + → L + X , given by is well-defined; by additivity and homogeneity, Λ + extends to a (C-)linear map Λ : T d X → L X . A further application of Theorem 3.4, combined with Theorem 3.2, shows that Λ is completely positive and bijective. Let Thus, Λ −1 is completely positive, and the proof is complete.
Proposition 3.7. The linear map f : T_X → T_X, given by f(e_{x,x′,a,a′}) = e_{x′,x,a′,a}, is a complete order automorphism.
Proof. The map Φ : M_{XX} → M_{XX}, given by Φ(ǫ_{x,a} ⊗ ǫ_{x′,a′}) = ǫ_{x′,a′} ⊗ ǫ_{x,a}, is a (unitarily implemented) complete order automorphism. Further, Φ(L_X) = L_X, and hence Φ induces a complete order automorphism Φ₀ : L_X → L_X. Using Proposition 3.6, we have that its dual Φ*₀ is a complete order automorphism of T_X. For x, x′, a, a′ ∈ X and T = (λ_{x,x′,a,a′}) ∈ L_X, we have ⟨Φ*₀(e_{x,x′,a,a′}), T⟩ = ⟨e_{x,x′,a,a′}, Φ₀(T)⟩ = ⟨e_{x′,x,a′,a}, T⟩, and the proof is complete.
Let (21) J_X = span{ ∑_{y∈X} ẽ_{y,y,a,a′} − δ_{a,a′} 1 : a, a′ ∈ X }; thus, J_X is a linear subspace of the operator system T_{X,A} defined in (16). Let J̃_X be the closed ideal of C_{X,A}, generated by J_X. Write q_X for the quotient map from T_{X,A} onto T_{X,A}/J_X. Recall that, if S is an operator system, a subspace J ⊆ S is called a kernel [33, Definition 3.2] if there exist an operator system R and a unital completely positive map (equivalently, a completely positive map) φ : S → R such that J = ker(φ).
Proposition 3.8. The space J X is a kernel in T X,A and the linear map ι, given by is a well-defined complete order isomorphism from T X,A /J X onto T X . In addition, C X,A /J X ∼ = C X , up to a canonical *-isomorphism.
Proof. Let α : L X → L X,A be the inclusion map. Since L X and L X,A are operator subsystems of M XX , we have that α is a complete order embedding. By [24,Proposition 1.15], [52,Proposition 5.5] and Proposition 3.6, its dual α * : T X,A → T X is a complete quotient map. Note that, if T ∈ L X and a, a ′ ∈ X then α * y∈Xẽ y,y,a,a ′ − δ a,a ′ 1 , T = y∈Xẽ y,y,a,a ′ − δ a,a ′ 1, α(T ) = 0, that is, J X ⊆ ker(α * ). Consider the canonical linear mappings of which the first two are surjective linear maps whose composition is completely positive, while the third is a complete order isomorphism (note that the quotient T X,A /J X is linear algebraic). Dualising and using Proposition 3.6, we obtain the chain of maps By the definition of J X (see (21)), the elements of (T X,A /J X ) d correspond, via the last of the three maps in (23), to elements of the subspace L X of L X,A .
It now follows that the middle map in (23) is a linear isomorphism, and hence ker(α * ) = J X . In particular, J X is a kernel in T X,A and (T X,A /J X ) d ∼ = L X complete order isomorphically. Dualising, we see that T X,A /J X ∼ = T X complete order isomorphically via the map ι defined in (22). By the universal property of C X , there exists a unital *-epimorphism π : The block operator matrix ẽ x,x ′ ,a,a ′ +J X x,x ′ ,a,a ′ is bistochastic, and hence it gives rise, via Theorem 3.4, to a canonical unital surjective *homomorphism π ′ : C X → C X,A /J X . We thus have a chain of unital *homomorphisms whose composition is the identity. It follows that J =J X , and the proof is complete.
In the sequel, writeq X : C X,A → C X for the quotient map arising from Proposition 3.8, and continue to write q X for the quotient map from T X,A onto T X . Before formulating the next corollary, we recall that an operator system S is said to possess the local lifting property [33,Section 8] if for every finite dimensional operator subsystem S 0 ⊆ S, C*-algebra A, and closed ideal J ⊆ A, every unital completely positive map φ 0 : S 0 → A/J admits a lifting to a completely positive map φ : S 0 → A (that is, if q : A → A/J denotes the quotient map, the identity q • φ = φ 0 holds). Corollary 3.9. The operator system T X has the local lifting property.
Proof. By [52, Corollary 5.6], T X,A is an operator system quotient of M XX while, by Proposition 3.8, T X is an operator system quotient of T X,A . It follows that T X is an operator system quotient of M XX . The statement is now a consequence of [31, Theorem 6.8].
Realising the commuting tensor product of operator systems as an operator subsystem of maximal tensor products has been of importance from the beginning of the tensor product theory in the operator system category [32]. By Theorem 3.4 and [32, Theorem 6.4], for an arbitrary operator system R, ; the next proposition establishes a stronger inclusion.
Proposition 3.10. Let R be an operator system. Then T_X ⊗_c R ⊆ C_X ⊗_max R completely order isomorphically. Proof. Let ι : T_X → C_X be the inclusion map. By the functoriality of the commuting tensor product and the fact that the commuting and the maximal tensor products coincide provided one of the terms is a C*-algebra [32, Theorem 6.7], ι ⊗ id : T_X ⊗_c R → C_X ⊗_max R is completely positive. Suppose that w ∈ M_n(T_X ⊗ R) is such that (ι ⊗ id)^{(n)}(w) ∈ M_n(C_X ⊗_max R)^+; let H be a Hilbert space, and φ : T_X → B(H) and ψ : R → B(H) be unital completely positive maps with commuting ranges. By Theorem 3.4, φ extends to a *-homomorphism π : C_X → B(H). Since C_X is generated by T_X as a C*-algebra, π(u) ∈ ψ(R)′ for every u ∈ C_X; thus, π and ψ have commuting ranges, and hence w ∈ M_n(T_X ⊗_c R)^+. It follows that ι ⊗ id is a complete order embedding.
Quantum magic squares
In [16], the concept of a quantum magic square was defined and studied, exhibiting examples which show that not every quantum magic square dilates to a magic unitary. The aim of this section is to present an operator system viewpoint on this result, linking the dilation properties of a quantum magic square to complete positivity of canonical maps, associated with it. The universal operator system of a quantum magic square and its properties will further be used in Section 5.
Recall [16] that a block operator matrix E = (E_{x,a})_{x,a∈X}, with entries in B(H) for some Hilbert space H, is called a quantum magic square if E_{x,a} ≥ 0 for all x, a ∈ X and ∑_{b∈X} E_{x,b} = ∑_{y∈X} E_{y,a} = I for all x, a ∈ X. The quantum magic square E is called a magic unitary (or a quantum permutation) if E_{x,a} is a projection for all x, a ∈ X (see e.g. [36, Definition Two subclasses of quantum magic squares were singled out in [16] (see [16, Definition 5 and Example 8]). We will call a quantum magic square (E_{x,a})_{x,a}, acting on a Hilbert space H, dilatable if there exists a Hilbert space K, an isometry V : H → K, and a quantum permutation (P_{x,a})_{x,a} acting on K, such that (24) E_{x,a} = V* P_{x,a} V, x, a ∈ X. The quantum magic square (E_{x,a})_{x,a} will be called locally dilatable if (24) holds for a commuting family {P_{x,a}}_{x,a} that forms a quantum permutation. It is clear that, up to unitary identifications, condition (24) can be replaced by the conditions E_{x,a} = Q P_{x,a} Q, where we have assumed that H ⊆ K, and Q : K → H is the orthogonal projection. For x, a ∈ X, we set e_{x,a} := e_{x,x,a,a} and S_X = span{e_{x,a} : x, a ∈ X}, viewed as an operator subsystem of T_X. (ii) (φ(e_{x,a}))_{x,a} is a quantum magic square,
Moreover, if (E_{x,a})_{x,a} is a quantum magic square acting on a Hilbert space H then there exists a (unique) unital completely positive map φ : S_X → B(H) such that φ(e_{x,a}) = E_{x,a}, x, a ∈ X.
Proof. Let φ : S_X → B(H) be a unital completely positive map, for some Hilbert space H. By Arveson's Extension Theorem, φ has a completely positive extension φ̃ : T_X → B(H); by Theorem 3.4, (φ̃(e_{x,x′,a,a′}))_{x,x′,a,a′} is a bistochastic operator matrix. In particular, (φ̃(e_{x,x,a,a}))_{x,a}, that is, (φ(e_{x,a}))_{x,a}, is a quantum magic square.
a,a ′ is a bistochastic operator matrix and, by Theorem 3.4, there exists a (unital) completely positive mapφ : x,a is a quantum magic square; by the implication (ii)⇒(i), the linear map ψ : S X → B(H), given by ψ(e x,a ) = T −1/2 E x,a T −1/2 , is completely positive. Since φ(u) = T 1/2 ψ(u)T 1/2 , u ∈ S X , the map φ is completely positive. If T is not invertible, we fix a state f : S X → C and, for ǫ > 0, consider the map φ ǫ : S X → B(H), given by φ ǫ (u) = φ(u)+ǫf (u)I. The proof now proceeds similarly to the proof of the implication (ii')⇒(i') of Theorem 3.4.
The last statement in the theorem follows from the proof of the implication (ii)⇒(i). Let considered as an operator subsystem of D XX . Since every operator system is spanned by its positive elements, M X is the operator system spanned by the scalar bistochastic matrices in D XX .
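The conditions defining a quantum magic square are straightforward to test numerically. The following is a minimal sketch of our own (the function name and examples are assumptions, not from the text): it checks positivity of the entries and the row/column sum conditions for block matrices over M₂.

```python
import numpy as np

def is_quantum_magic_square(E, tol=1e-10):
    """Check that E, an n x n list-of-lists of d x d matrices, has positive
    entries whose rows and columns each sum to the identity."""
    n = len(E)
    d = E[0][0].shape[0]
    I = np.eye(d)
    for x in range(n):
        for a in range(n):
            B = E[x][a]
            # every entry must be positive semidefinite
            if np.min(np.linalg.eigvalsh((B + B.conj().T) / 2)) < -tol:
                return False
    row_ok = all(np.allclose(sum(E[x][a] for a in range(n)), I) for x in range(n))
    col_ok = all(np.allclose(sum(E[x][a] for x in range(n)), I) for a in range(n))
    return row_ok and col_ok

P = np.array([[1.0, 0.0], [0.0, 0.0]])   # a projection
I2 = np.eye(2)

# A magic unitary: all entries are projections.
magic_unitary = [[P, I2 - P], [I2 - P, P]]
# A quantum magic square that is NOT a magic unitary: I/2 is not a projection.
flat = [[I2 / 2, I2 / 2], [I2 / 2, I2 / 2]]

print(is_quantum_magic_square(magic_unitary))  # True
print(is_quantum_magic_square(flat))           # True
```

Both examples satisfy the magic square conditions, but only the first consists of projections, separating the two notions above.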
Using Theorem 4.1, we see that there exists a completely positive map φ : S X → M n such that φ(e x,y ) = E x,y , x, y ∈ X. On the other hand, the element γ (n) (E) of M n (S d X ) gives rise, via (19), to a linear map F γ (n) (E) : S X → M n . We have that In particular, F γ (n) (E) is completely positive, and it follows that the map γ is completely positive.
It follows from Theorem 4.1 that the (linear) map γ is surjective; thus, it is injective. We show that γ⁻¹ is completely positive. Assume that W ∈ M_n(S^d_X)^+; this means that the linear map F_W : S_X → M_n, canonically associated with W, is completely positive. Set E_{x,y} := F_W(e_{x,y}), x, y ∈ X; by Theorem 4.1, E := (E_{x,y})_{x,y} ∈ (M_n ⊗ M_X)^+. This, in turn, means that (γ⁻¹)^{(n)}(W) ∈ (M_n ⊗ M_X)^+. Since relations (25) are satisfied for the matrices E_{x,y}, we have that, in fact, (γ⁻¹)^{(n)}(W) ∈ (M_n ⊗ M_X)^+, and the proof is complete.
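Since, by Birkhoff's theorem, the bistochastic matrices are the convex hull of the permutation matrices, M_X is the linear span of the permutation matrices; its dimension is (|X| − 1)² + 1. A quick numerical sanity check of this count (our own sketch, not from the text):

```python
import numpy as np
from itertools import permutations

def perm_matrix(p):
    # permutation tuple -> 0/1 permutation matrix
    n = len(p)
    M = np.zeros((n, n))
    for i, j in enumerate(p):
        M[i, j] = 1.0
    return M

n = 3
# Vectorise all n! permutation matrices and compute the dimension of their span.
V = np.array([perm_matrix(p).ravel() for p in permutations(range(n))])
dim = np.linalg.matrix_rank(V)
print(dim)  # (n - 1)**2 + 1 = 5 for n = 3
```

The span consists precisely of the matrices all of whose row sums and column sums coincide, which accounts for the dimension count.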
Let
Let J_X^= = span{ e_{x,x′,a,a′} : x ≠ x′ or a ≠ a′ }; note that J_X^= is a linear subspace of the operator system T_X.
Proposition 4.3. The space J_X^= is a kernel in T_X and, up to a unital complete order isomorphism, S_X ≅ T_X/J_X^=. Proof. By Theorem 3.4, there exists a unital completely positive map β : T_X → S_X such that β(e_{x,x′,a,a′}) = δ_{x,x′} δ_{a,a′} e_{x,a}, x, x′, a, a′ ∈ X. On the other hand, by Proposition 3.6 and Corollary 4.2, we have a chain of four canonical linear maps of which the first, the second and the fourth are completely positive. In addition, the image of M_X in L_X under the composition of these maps coincides with itself; thus, ker(β) ⊆ J_X^= and hence J_X^= is a kernel in T_X. Dualising the second map in (26), we further obtain a chain of completely positive maps, whose composition is the identity map on T_X/J_X^=. On the other hand, we have a chain of canonical completely positive maps S_X → T_X → T_X/J_X^= → S_X, whose composition is the identity map on S_X. It follows that S_X ≅ T_X/J_X^=, up to a canonical complete order isomorphism.
In Theorem 4.5 below, we characterise the dilatable and locally dilatable quantum magic squares in operator system terms. Let C(S + X ) be the universal C*-algebra generated by projections p x,a , x, a ∈ X, with the properties b∈X p x,b = y∈X p y,a = 1, x, a ∈ X (thus, C(S + X ) is the universal C * -algebra of functions on the quantum permutation group on X; see e.g. [9]). Write viewed as an operator subsystem of C(S + X ). Recall [48, Section 3] that the minimal operator system based on P X has matricial cones M n (OMIN(P X )) + , given by for all λ i ∈ C, i ∈ [n]}, and that the corresponding maximal operator system based on P X has matricial cones M n (OMAX(P X )) + generated, as cones with an Archimedean order unit, by the elementary tensors of the form T ⊗ u, where T ∈ M + n and u ∈ P + X .
Proposition 4.4. There exist canonical unital completely positive maps (27) OMAX(P_X) → S_X → OMIN(P_X).
Proof. By Theorem 4.1, the linear map q : S X → P X , given by q(e x,a ) = p x,a , x, a ∈ X, is (unital and) completely positive. Suppose that φ ∈ (S d X ) + ; by Proposition 4.3, φ can be canonically identified with a matrix (λ x,a ) x,a in M + X . By Birkhoff's Theorem and the argument in the proof of Corollary 4.2, we can further assume that there exists a permutation f : X → X such that λ x,a = δ f (x),a , x, a ∈ X. By the universal property of C(S + X ), the permutation f gives rise to a canonical *-representation π : C(S + X ) → C. It follows that π| P X : P X → C is (completely) positive. We thus obtain a canonical positive map r : S d X → P d X which, by the universal property of the minimal operator system structure, gives rise to a canonical completely positive map S d X → OMIN(P d X ); dualising, we have a canonical completely positive map OMAX(P X ) → S X .
Note that the composition of the maps in (27) is the identity map on P X ; hence q is invertible. Since q −1 = r, we have that q −1 is positive, completing the proof. Proof. (i) Let P = (P x,a ) x,a be a magic unitary on a Hilbert space K containing H such that, if Q is the projection from K onto H, then E x,a = QP x,a Q, x, a ∈ X. By the universal property of C(S + X ), there exists a unital *homomorphism π : C(S + X ) → B(K) such that π(p x,a ) = P x,a , x, a ∈ X. Let φ : P X → B(H) be the linear map, defined by φ(u) = Qπ(u)Q, u ∈ P X . As a compression of a completely positive map, φ is completely positive; by For the converse direction, letφ : C(S + X ) → B(H) be a unital completely positive extension of φ, whose existence is guaranteed by Arveson's Extension Theorem. Using Stinespring's Theorem, let K be a Hilbert space, π : C(S + X ) → B(K) be a unital *-representation, and V : H → K be an isometry, such thatφ(u) = V * π(u)V , u ∈ C(S + X ). Letting P x,a = π(p x,a ), we have that (P x,a ) x,a is a magic unitary that dilates E.
(ii) We first consider the case where n := dim(H) is finite. Identifying B(H) with M_n, suppose that φ : OMIN(P_X) → M_n is a unital completely positive map. Let f_φ be the canonical functional, associated with φ as in [45, Chapter 6]. By [45, Theorem 6.1], f_φ is positive. By [48, Theorem 4.8], f_φ can be canonically identified with an element of M_n(OMAX(P^d_X))^+ (see [48, Section 3]). By Proposition 4.4, Corollary 4.2 and the definition of the maximal operator system structure, (28) f_φ = ∑_{l=1}^{r_φ} α_l ⊗ β_l, where α_l ∈ M^+_X and β_l ∈ M^+_n, l ∈ [r_φ]. Assume first that the representation (28) has the form f_φ ≡ α ⊗ β, where α ∈ M_X and β ∈ M_n. In this case, φ(p_{x,a}) = α_{x,a} β, x, a ∈ X; in particular, if P_θ is the permutation unitary corresponding to a permutation θ on X and α = P_θ, then φ(p_{x,a}) = δ_{θ(x),a} β, x, a ∈ X. Returning to the representation (28), use Birkhoff's Theorem to write α_l = ∑_θ λ^{(l)}_θ P_θ, where the summation is over the permutation group of X and the coefficients λ^{(l)}_θ are non-negative. Thus, f_φ = ∑_θ P_θ ⊗ γ_θ, where γ_θ ∈ M^+_n and the summation is over the permutation group of X. By the previous paragraph, φ(p_{x,a}) = ∑_θ δ_{θ(x),a} γ_θ, x, a ∈ X. Now [16, Theorem 12 and Remark 7] implies that (φ(p_{x,a}))_{x,a} is locally dilatable, after noticing that the matrix convex hull of the set denoted CP(|X|) therein coincides with the locally dilatable quantum magic squares over M_n. The converse direction follows by reversing the given arguments. We now relax the assumption on the finite dimensionality of H. For simplicity, we consider only the case where H is separable. Fix a sequence (Q_n)_{n∈N} of projections of finite rank such that Q_n → I in the strong operator topology as n → ∞. Assuming that E is locally dilatable, so is (I_X ⊗ Q_n) E (I_X ⊗ Q_n) for every n ∈ N and hence, by the finite dimensional case, the map φ_n : OMIN(P_X) → B(Q_n H), given by φ_n(p_{x,a}) = Q_n E_{x,a} Q_n, x, a ∈ X, is completely positive. Since φ(u) = lim_{n→∞} φ_n(u) in the weak operator topology, u ∈ P_X, we have that φ is completely positive.
Conversely, assuming that φ : OMIN(P X ) → B(H) is completely positive, let φ n : OMIN(P X ) → B(Q n H) be the (completely positive) map, given by x,y = Q n E x,y Q n . Since E x,y ≤ 1 for every x, y ∈ X, we therefore have that γ (n) θ ≤ 1 for every n ∈ N. We can now choose successively weak* cluster points of the sequences γ , and assume that [16,Theorem 12] implies, after replacing the identity operator denoted I s therein with I H , that E is locally dilatable.
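The proof above repeatedly decomposes a bistochastic matrix α as a non-negative combination of permutation matrices via Birkhoff's theorem. A greedy Birkhoff–von Neumann decomposition can be sketched as follows (brute force over all permutations, so only sensible for small |X|; all names are ours):

```python
import numpy as np
from itertools import permutations

def birkhoff_decompose(D, tol=1e-12):
    """Greedy Birkhoff-von Neumann: write a bistochastic D as sum_k c_k P_k."""
    n = D.shape[0]
    D = D.copy()
    terms = []
    while D.sum() > tol:
        # pick the permutation whose minimal matched entry of D is largest
        best = max(permutations(range(n)),
                   key=lambda p: min(D[i, p[i]] for i in range(n)))
        c = min(D[i, best[i]] for i in range(n))
        P = np.zeros((n, n))
        for i, j in enumerate(best):
            P[i, j] = 1.0
        terms.append((c, P))
        D = D - c * P   # zeroes at least one entry each pass
    return terms

D = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.3, 0.2, 0.5]])
terms = birkhoff_decompose(D)
R = sum(c * P for c, P in terms)
print(np.allclose(R, D))                       # True
print(abs(sum(c for c, _ in terms) - 1.0) < 1e-9)  # True
```

For this D the greedy loop extracts the identity, then the two 3-cycles, with weights 0.5, 0.3 and 0.2.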
Representations of bicorrelations
In this section, we define the notion of a bicorrelation and obtain representations of the different bicorrelation types in terms of operator system tensor products. We will use the main operator system tensor products, introduced in [32]: the minimal (min), the commuting (c), and the maximal (max). If τ ∈ {min, c, max} and φ_i : S_i → T_i are completely positive maps between operator systems, i = 1, 2, we write φ₁ ⊗_τ φ₂ for the corresponding tensor product map from S₁ ⊗_τ S₂ into T₁ ⊗_τ T₂ (note that this map is well-defined by [32, Theorems 4.6, 5.5 and 6.3]).
We fix throughout this section finite sets X and Y , and let A = X and B = Y . The symbols A and B will continue to be used for clarity, as needed.
is also a (unital) quantum channel.
We let Q bi ns be the set of all QNS bicorrelations. We next define different types of QNS bicorrelations, motivated by the analogous definitions of QNS correlation types. A QNS bicorrelation Γ : M XY → M XY is quantum commuting if there exist a Hilbert space H, a unit vector ξ ∈ H and bistochastic operator matricesẼ = (E x,x ′ ,a,a ′ ) x,x ′ ,a,a ′ andF = (F y,y ′ ,b,b ′ ) y,y ′ ,b,b ′ on H with mutually commuting entries, such that the Choi matrix of Γ coincides with For t ∈ {loc, q, qa, qc}, we let Q bi t be the set of all QNS bicorrelations of type t.
Remark 5.2. If t ∈ {loc, q, qa, qc, ns} and Γ ∈ Q bi t then Γ * ∈ Q bi t . The claim is part of the definition in the case where t = ns and straightforward in the case where t = loc. For the case t = qc, suppose that are bistochastic operator matrices with mutually commuting entries, such that the Choi matrix of Γ coincides with (30). LetẼ a,a ′ ,x, a,a ′ and henceẼ is a unitary conjugation of E, implying thatẼ ≥ 0; similarly,F ≥ 0. The claim now follows from the fact that the Choi matrix of Γ * is equal to Ẽ a,a ′ ,x, x,x ′ . The case t = q is analogous, while t = qa is a consequence of the continuity of taking the dual channel.
we conclude that the representation (31) can be chosen with the property that Φ i and Ψ i are unital quantum channels, i = 1, . . . , k, that is, Γ is automatically a local bicorrelation.
We write f_{y,y′,b,b′} (resp. f̃_{y,y′,b,b′}), y, y′, b, b′ ∈ Y, for the canonical generators of the operator system T_Y (resp. T_{Y,B}). If s is a linear functional on T_X ⊗ T_Y or on C_X ⊗ C_Y, we write Γ_s : M_{XY} → M_{XY} for the linear map, given by (32) Γ_s(ǫ_{x,x′} ⊗ ǫ_{y,y′}) = ∑_{a,a′,b,b′} s(e_{x,x′,a,a′} ⊗ f_{y,y′,b,b′}) ǫ_{a,a′} ⊗ ǫ_{b,b′}. We note that Γ*_s is given by the analogous identities, with the roles of the indices (x, x′, y, y′) and (a, a′, b, b′) exchanged. Clearly, the correspondence s → Γ_s is a linear map from the vector space (T_X ⊗ T_Y)^d into the space of linear maps on M_{XY}. Let s̃ = s ∘ (q_X ⊗ q_Y), where q_X ⊗ q_Y : T_{X,A} ⊗_max T_{Y,B} → T_X ⊗_max T_Y is the quotient map (see the paragraph of equation (21)); we have that s̃ is a state of T_{X,A} ⊗_max T_{Y,B}. Since Γ = Γ_s̃, by [52, Theorem 6.2], Γ ∈ Q_ns. In addition, we verify that Γ* is no-signalling: for any ω_X = (λ_{a,a′})_{a,a′} ∈ M_X and any ω_Y ∈ M_Y of trace zero, Tr_Y Γ*(ω_X ⊗ ω_Y) = 0. Similarly, if ω_X ∈ M_X has trace zero and ω_Y ∈ M_Y is arbitrary then Tr_X Γ*(ω_X ⊗ ω_Y) = 0 and hence Γ* is no-signalling.
The relations, together with Remark 3.5, now imply that L ω (C) ∈ L X and L ω ′ (C) ∈ L Y for all ω ∈ M Y Y and all ω ′ ∈ M XX (recall that L σ denotes the slice map along a functional σ).
Theorem 5.5. Let X and Y be finite sets and Γ : M XY → M XY be a linear map. The following are equivalent: Proof. By Theorem 3.4 and [32, Theorem 6.4], T X ⊗ c T Y ⊆ C X ⊗ max C Y completely order isomorphically and hence, by Krein's Extension Theorem, (ii) and (iii) are equivalent.
(i)⇒(iii) follows from the universal property of C X detailed in Theorem 3.4 and arguments, similar to the ones in [52, Theorem 6.3].
(iii)⇒(i) The GNS representation of s and the universal property of the maximal C*-algebraic tensor product yield *-representations π X : C X → B(H) and π Y : C Y → B(H) with commuting ranges, and a unit vector , and appealing to Theorem 3.4.
Theorem 5.6. Let X and Y be finite sets and Γ : M XY → M XY be a linear map. The following are equivalent: Proof. (ii)⇔(iii) follows from the injectivity of the minimal tensor product.
(34) Q^bi_qc ⊆ Q_qc ∩ Q^bi_ns.
We do not know if equality holds in (34). The problem reduces to a question about the equality of canonical operator system structures. Indeed, it is not difficult to verify that the subspace J XY := T X,A ⊗ J Y + J X ⊗ T Y,B of the operator system T X,A ⊗ c T Y,B is a kernel, and that the states on (T X,A ⊗ c T Y,B )/J XY correspond precisely to the elements of Q qc ∩ Q bi ns . However, while there is a canonical bijective unital completely positive map (T X,A ⊗ c T Y,B )/J XY → T X ⊗ c T Y , it is unclear whether its inverse is completely positive. If this is the case then Theorem 5.5 will imply the reverse inclusion in (34).
Classical bicorrelations.
In this subsection, we consider a class of correlations that constitute a natural classical counterpart of the quantum bicorrelations defined in Subsection 5.1. We fix finite sets X and Y , and set A = X and B = Y .
We let ∆ : M XY → D XY be the canonical diagonal expectation. Given an NS correlation p over (X, Y, X, Y ), we let E p : D XY → D XY be the (classical) information channel, given by Further, for a given classical information channel E : D XY → D XY , let Γ E : M XY → M XY be the quantum channel, given by and set Γ p = Γ Ep for brevity. In the reverse direction, given a quantum channel Γ : M XY → M XY , let E Γ : D XY → D XY be the classical information channel, defined by letting E Γ (ω) = (∆ • Γ)(ω), ω ∈ D XY . We note the relation E Γ E = E. Proposition 5.9. Let p be an NS bicorrelation over (X, Y, X, Y ). Then Γ p * = Γ * p . Thus, if p ∈ C bi ns then Γ p ∈ Q bi ns .
Proof. For x, a ∈ X and y, b ∈ Y , we have completing the proof.
For t ∈ {loc, q, qa, qc}, let C^bi_t = {p : Γ_p ∈ Q^bi_t}. It is straightforward to verify that an NS bicorrelation p over (X, Y, X, Y) belongs to C^bi_qc precisely when there exist a Hilbert space H, a unit vector ξ ∈ H and quantum magic squares (E_{x,a})_{x,a∈X} and (F_{y,b})_{y,b∈Y} with commuting entries, such that (35) p(a, b|x, y) = ⟨E_{x,a} F_{y,b} ξ, ξ⟩, x, a ∈ X, y, b ∈ Y. Similarly, p ∈ C^bi_q precisely when the representation (35) can be achieved with a finite dimensional H, and p ∈ C^bi_loc precisely when p is a convex combination of correlations of the form p⁽¹⁾(a|x) p⁽²⁾(b|y), where (p⁽¹⁾(a|x))_{x,a} and (p⁽²⁾(b|y))_{y,b} are (scalar) bistochastic matrices.
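A local bicorrelation of product form is easy to generate and test numerically. The sketch below is our own illustration (the Sinkhorn routine and all names are assumptions, not from the text): it builds p(a, b|x, y) = p⁽¹⁾(a|x) p⁽²⁾(b|y) from bistochastic matrices and checks normalisation, no-signalling, and the extra bistochasticity inherited from the inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_bistochastic(n, iters=200):
    # Sinkhorn iteration: alternately normalise rows and columns.
    M = rng.random((n, n))
    for _ in range(iters):
        M /= M.sum(axis=1, keepdims=True)
        M /= M.sum(axis=0, keepdims=True)
    return M

n = 3
p1 = random_bistochastic(n)   # p1[x, a] = p1(a|x), rows AND columns sum to 1
p2 = random_bistochastic(n)   # p2[y, b] = p2(b|y)

# Product correlation p(a, b | x, y) = p1(a|x) p2(b|y); axes (x, y, a, b).
p = np.einsum('xa,yb->xyab', p1, p2)

# Each conditional distribution sums to 1.
ok_dist = np.allclose(p.sum(axis=(2, 3)), 1.0)
# No-signalling: the marginal over a does not depend on x.
marg_b = p.sum(axis=2)                      # shape (x, y, b)
ok_ns = np.allclose(marg_b, marg_b[0:1, :, :])
# Bicorrelation flavour: unit column sums of p1 make sums over inputs behave
# like a bistochastic family as well.
ok_bi = np.allclose(p.sum(axis=0).sum(axis=2), 1.0)
print(ok_dist, ok_ns, ok_bi)
```

Replacing the product by a convex combination of such products stays inside the local class, mirroring the description of C^bi_loc above.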
For a linear functional s : S X ⊗ S Y → C, let p s : X × Y × X × Y → C be the function given by p s (a, b|x, y) = s(e x,a ⊗ e y,b ), x, a ∈ X, y, b ∈ Y.
Theorem 5.10. Let X and Y be finite sets and p be an NS correlation over (X, Y, X, Y ). Consider the statements (i) p is an NS bicorrelation; (ii) there exists a state s :
Then (i)⇔(ii), (i')⇔(ii') and (i")⇔(ii").
Proof. (i')⇒(ii') Write ι_X : S_X → T_X and ι_Y : S_Y → T_Y for the inclusion maps and let p ∈ C^bi_qc. By Theorem 5.5, there exists a state s : T_X ⊗_c T_Y → C such that Γ_p = Γ_s. Set s̃ = s ∘ (ι_X ⊗ ι_Y); then s̃ is a state on S_X ⊗_c S_Y for which p = p_s̃.
(ii')⇒(i') Let s : S_X ⊗_c S_Y → C be such that p = p_s, and let β_X : T_X → S_X (resp. β_Y : T_Y → S_Y) be the quotient map, as defined in the proof of Proposition 4.3. Set s̃ = s ∘ (β_X ⊗ β_Y), a state on T_X ⊗_c T_Y. By Theorem 5.5, the map Γ_s̃, defined via (32), is a quantum commuting QNS bicorrelation. Since Γ_s̃ = Γ_p, we have that p ∈ C^bi_qc. (i")⇔(ii") follows in a similar way as the equivalence (i')⇔(ii'), using Theorem 5.6 in the place of Theorem 5.5.
Concurrent bicorrelations
Throughout the section, let X be a finite set and Y = A = B = X. Let J X = 1 |X| x,y∈X ǫ x,y ⊗ ǫ x,y be the canonical maximally entangled state in M XX . We specialise the definition of a concurrent QNS correlation from [11]: For t ∈ {loc, q, qa, qc, ns}, we let Q bic t be the set of all concurrent bicorrelations that belong to Q bi t . Remark 6.2. Note that if Γ ∈ Q bic ns , then Γ * ∈ Q bic ns as well. Indeed, since Γ is unital, its dual map Γ * : M AA → M XX is trace-preserving; thus, Tr(Γ * (J A )) = 1. Therefore The equality clause in the Cauchy-Schwarz inequality now implies that The universal C*-algebra generated by the entries of a unitary matrix (ũ a,x ) a,x∈X (known as the Brown algebra) was first studied by L. G. Brown [12]. We will introduce a subquotient of the Brown algebra, whose traces will be shown to represent concurrent bicorrelations of different types. First, setũ x,x ′ ,a,a ′ =ũ * a,xũa ′ ,x ′ , x, x ′ , a, a ′ ∈ X, and let U X,A be the C * -subalgebra of the Brown algebra, generated by the set {ũ x,x ′ ,a,a ′ : x, x ′ , a, a ′ ∈ X}. Lemma 6.3. If π : U X,A → B(H) is a unital *-representation then there exists a block operator unitary U = (U a,x ) a,x such that π(ũ x,x ′ ,a,a ′ ) = U * a,x U a ′ ,x ′ , x, x ′ , a, a ′ ∈ X.
Proof. Let V X,A be the universal TRO of an isometry (v a,x ) a,x , as defined in [52,Section 5]. In the sequel, we will consider products v ε 1 a 1 ,x 1 v ε 2 a 2 ,x 2 · · · v ε k a k ,x k , where ε i is either the empty symbol or * , and ε i = ε i+1 for all i, as elements of either V X,A , V * X,A , C X,A or the left C*-algebra corresponding to the TRO V X,A . Let J be the closed ideal of C X,A , generated by the elements x∈Xẽ y,x,b,aẽx,y,a,b −ẽ y,y,b,b , y, a, b ∈ X.
By [11,Lemma 4.2], the map ρ :ẽ x,x ′ ,a,a ′ →ũ x,x ′ ,a,a ′ , x, x ′ , a, a ′ ∈ X extends to a surjective *-homomorphism ρ : C X,A → U X,A with ker ρ = J . Let π : U X,A → B(H) be a *-representation. Then π • ρ : C X,A → B(H) is a *representation that annihilates J . By [52, Lemma 5.1], there exists a block operator isometry U = (U a,x ) a,x∈X , where U a,x ∈ B(H, K) for some Hilbert space K, x, a ∈ X, such that (π • ρ)(ẽ x,x ′ ,a,a ′ ) = U * a,x U a ′ ,x ′ , x, x ′ , a, a ′ ∈ X. By the definition of V X,A , the operator matrix U gives rise to a canonical ternary representation θ U : V X,A → B(H, K). Without loss of generality, we can assume that K = span(θ U (V X,A )H). The fact that (π • ρ)(J ) = {0} now implies that Since U U * ≤ I, we have that I − x∈X U a,x U * a,x ≥ 0, and hence (36) reads showing further that By polarisation, we have x∈X U a,x U * a,x = I, a ∈ X. As I−U U * is a positive block-diagonal operator with the zero diagonal, I − U U * = 0; thus, U is unitary. Since U * a,x U b,y = π(ũ x,y,a,b ), x, y, a, b ∈ X, the proof is complete. Recall thatẽ x,x ′ ,a,a ′ are the canonical generators of the C*-algebra C X,A (so that the matrix ẽ x,x ′ ,a,a ′ x,x ′ ,a,a ′ is a universal stochastic operator matrix). Letg x,x ′ y,z,b,c = δ x,x ′ẽ y,z,b,c − a∈Xẽ y,x,b,aẽx ′ ,z,a,c andh a,a ′ y,z,b,c = δ a,a ′ẽ y,z,b,c − x∈Xẽ y,x,b,aẽx,z,a ′ ,c , andJ 1 (resp.J 2 ) be the closed ideal of C X,A , generated byg x,x ′ y,z,b,c (resp. h a,a ′ y,z,b,c ), y, z, b, c, x, x ′ ∈ X (resp. y, z, b, c, a, a ′ ∈ X). Lemma 6.4. Up to a canonical *-isomorphism, C X,A /J 2 ≃ U X,A .
Proof. Denote byJ 0 2 the closed ideal of C X,A , generated by the elements h a,a y,y,b,b , where a, b, y ∈ X. It was shown in [11,Lemma 4.2] that C X,A /J 0 2 ≃ U X,A . Let ρ : C X,A → B(K) be a unital *-representation that annihilatesJ 0 2 , with the property that the corresponding induced representation of C X,A /J 0 2 is faithful. By Lemma 6.3, there exists a unitaryŨ = (Ũ a,x ) a,x∈X such that, But then, sinceŨ is unitary, = δ a,a ′Ũ y,z,b,c − δ a,a ′Ũ * b,yŨ c,z = 0. Thus, ρ automatically annihilatesJ 2 . The proof is complete.
We say that a block operator matrix U = (u_{a,x})_{a,x} ∈ M_X(B(H)) is a bi-unitary if both U and Uᵗ are unitary. Let C(U⁺_X) be the universal C*-algebra, generated by the entries of a bi-unitary (u_{a,x})_{a,x∈X}, and C(PU⁺_X) be the subalgebra of C(U⁺_X) generated by the length two words of the form u_{x,x′,a,a′} := u*_{a,x} u_{a′,x′}, x, x′, a, a′ ∈ X. Further, recall that e_{x,x′,a,a′}, x, x′, a, a′ ∈ X, denote the canonical generators of the C*-algebra C_X (so that (e_{x,x′,a,a′})_{x,x′,a,a′} is a universal bistochastic operator matrix), set g^{x,x′}_{y,z,b,c} = δ_{x,x′} e_{y,z,b,c} − ∑_{a∈X} e_{y,x,b,a} e_{x′,z,a,c} and h^{a,a′}_{y,z,b,c} = δ_{a,a′} e_{y,z,b,c} − ∑_{x∈X} e_{y,x,b,a} e_{x,z,a′,c}, and let J₁ (resp. J₂) be the closed ideal of C_X, generated by the elements g^{x,x′}_{y,z,b,c} (resp. h^{a,a′}_{y,z,b,c}), where y, z, b, c, x, x′ ∈ X (resp. y, z, b, c, a, a′ ∈ X). We note that the universal C*-algebra C(U⁺_X) and its subalgebra C(PU⁺_X) have been well-studied in the compact quantum group literature. The C*-algebra C(U⁺_X) was introduced by Wang in [55], where it was shown to have the structure of a C*-algebraic compact quantum group. In particular, C(U⁺_X) comes equipped with a co-associative comultiplication making it into a non-commutative analogue of the C*-algebra of continuous functions on the unitary group U_X. The structure of the quantum group C(U⁺_X) was later studied in detail by Banica in [2]. On the other hand, the subalgebra C(PU⁺_X) ⊆ C(U⁺_X) can be naturally interpreted as a non-commutative version of the space of continuous functions on the projective unitary group PU_X = U_X/𝕋. In the classical setting, the conjugation action of U_X on M_X induces a group isomorphism PU_X ≅ Aut(M_X), where Aut(M_X) is the group of *-automorphisms of M_X.
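Bi-unitarity is a genuine restriction once the entries fail to commute: the block transpose of a generic unitary on C² ⊗ C² (its partial transpose) is typically not unitary, whereas blocks of the form δ_{a,σ(x)} W, for a permutation σ and a unitary W, always give a bi-unitary. A numerical sketch (all names are ours):

```python
import numpy as np

rng = np.random.default_rng(3)

def blocks(U, n, d):
    # view a unitary on C^n (x) C^d as an n x n matrix of d x d blocks
    return [[U[a*d:(a+1)*d, x*d:(x+1)*d] for x in range(n)] for a in range(n)]

def block_transpose(B):
    n = len(B)
    return [[B[x][a] for x in range(n)] for a in range(n)]

def is_unitary(M, tol=1e-9):
    return np.allclose(M.conj().T @ M, np.eye(M.shape[0]), atol=tol)

n = d = 2
# A generic unitary via QR of a random complex matrix:
Z = rng.normal(size=(n*d, n*d)) + 1j * rng.normal(size=(n*d, n*d))
U, _ = np.linalg.qr(Z)
Ut = np.block(block_transpose(blocks(U, n, d)))
print(is_unitary(U), is_unitary(Ut))   # True, typically False: U is not bi-unitary

# A bi-unitary: V[a][x] = delta_{a, sigma(x)} W for a fixed unitary W.
W, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
sigma = [1, 0]
V = [[W if a == sigma[x] else np.zeros((d, d)) for x in range(n)] for a in range(n)]
Vm = np.block(V)
Vt = np.block(block_transpose(V))
print(is_unitary(Vm), is_unitary(Vt))  # True True
```

For scalar matrices (d = 1) the transpose of a unitary is always unitary, so the notion only becomes restrictive for operator entries.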
In the quantum setting, it is natural to expect that a similar identification between PU + X and quantum automorphisms of M X should hold, and indeed this is the case: In [56], the quantum automorphism group Aut + (M X ) was introduced by Wang (via an abstract universal C * -algebra C(Aut + (M X )) with generators and relations), and later Banica showed in [3] that the natural quantum group C * -algebra morphism C(Aut + (M X )) → C(PU + X ) is actually an isomorphism. In Lemma 6.5 below, we extend Banica's result by showing that in fact any "concrete" quantum automorphism of M X (that is, a * -homomorphism π : C(Aut + (M X )) ∼ = C(PU + X ) → B(H)) is implemented by a "concrete" conjugation of M X by a bi-unitary (that is, π is the restriction of a representation C(U + X ) → B(H)). Lemma 6.5. ( Proof. (i) Set J = J 1 + J 2 , recall thatJ X is the closed ideal of C X,A generated by the elements y∈Xẽ y,y,a,a ′ − δ a,a ′ 1, a, a ′ ∈ X (see the paragraph containing equation (21)) and, recalling the idealsJ 1 andJ 2 of C X,A defined before Lemma 6.4, let (39)J =J X +J 1 +J 2 .
According to Proposition 3.8, C X,A /J X ≃ C X ; thus, C X,A /J ≃ C X /J . Recall that U X,A is the universal C * -algebra with generatorsũ x,x ′ ,a,a ′ := u * a,xũa ′ ,x ′ , x, x ′ , a, a ′ ∈ X, where the matrix (ũ a,x ) a,x is unitary. By Lemma 6.4, we have the canonical *-isomorphism C X,A /J 2 ≃ U X,A .
Let ρ : U_{X,A} ≃ C_{X,A}/J̃₂ → B(K) be a unital *-representation that annihilates J̃/J̃₂. By Lemma 6.3, there exists a unitary Ũ = (Ũ_{a,x})_{a,x} such that ∑_{y∈X} Ũ*_{a,y} Ũ_{a′,y} = δ_{a,a′} I, a, a′ ∈ X. By (40), multiplying (43) by Ũ*_{b,y} ⊗ I on the right and adding up along the variable y, we obtain Ũᵗ Ũᵗ* = I; thus, Ũᵗ is unitary. Therefore, Ũ gives rise to a unital *-representation of C(U⁺_X) and, after restriction, to a unital *-representation of C(PU⁺_X). We have thus shown that every unital *-representation ρ : C_{X,A}/J̃₂ → B(K) that annihilates J̃/J̃₂ induces a unital *-homomorphism from C(PU⁺_X) to B(K). By [52, Theorem 5.2], there exists a *-homomorphism ϕ : C_{X,A} → C(PU⁺_X), such that ϕ(ẽ_{x,x′,a,a′}) = u_{x,x′,a,a′}, x, x′, a, a′ ∈ X. A straightforward verification shows that ϕ annihilates J̃₂ and hence gives rise to a *-homomorphism ϕ̄ : C_{X,A}/J̃₂ → C(PU⁺_X), ẽ_{x,x′,a,a′} + J̃₂ ↦ u_{x,x′,a,a′}. It is easy to see that J̃/J̃₂ ⊆ ker ϕ̄. The previous paragraph shows that if T ∈ C_{X,A}/J̃₂ then ‖T + J̃/J̃₂‖ ≤ ‖ϕ̄(T)‖, giving the inclusion ker(ϕ̄) ⊆ J̃/J̃₂ and hence the equality ker(ϕ̄) = J̃/J̃₂. As ϕ̄ is surjective, we obtain the statement.
We recall that the opposite C*-algebra A op of a C*-algebra A has the same set, linear structure and involution as A, and multiplication given by u op v op = (vu) op , where u op denotes the element u ∈ A when viewed as an element of A op . Given a Hilbert space H, let H d denote its dual Banach space and, for an operator T ∈ B(H), let T d : H d → H d be its dual. We note the identities (44) (ST ) d = T d S d and (T d ) * = (T * ) d , S, T ∈ B(H). If π : A → B(H) is a faithful *-representation, then the map π op : A op → B(H d ), given by π op (u op ) = π(u) d , is a faithful *-representation.
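The mechanism behind π op can be checked concretely: in a fixed basis of H, the dual operator T d acts as the transpose matrix of T, so taking duals reverses products. The following numerical sketch (an illustration added here; the matrices and helper names are our own) verifies the product-reversing identity on 2×2 complex matrices.

```python
# In a fixed basis of H, the dual operator T^d on H^d acts as the transpose
# matrix of T.  We check the identity (S T)^d = T^d S^d, which is exactly what
# makes pi^op(u^op) := pi(u)^d multiplicative on the opposite algebra A^op.

def matmul(a, b):
    n = len(a)
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(n))
                       for j in range(n)) for i in range(n))

def dual(t):  # matrix of T^d: the transpose (no conjugation)
    n = len(t)
    return tuple(tuple(t[j][i] for j in range(n)) for i in range(n))

S = ((1 + 2j, 0), (3j, 1))
T = ((0, 1), (1j, 2))

assert dual(matmul(S, T)) == matmul(dual(T), dual(S))   # (ST)^d = T^d S^d

# Hence pi^op(u^op v^op) = pi^op((vu)^op) = (pi(v)pi(u))^d
#                        = pi(u)^d pi(v)^d = pi^op(u^op) pi^op(v^op).
```

Since the entries are Gaussian integers, the comparison is exact; the same computation with floating point would require a tolerance.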
The following result can be proved using the existence of the antipode for compact quantum groups, together with the fact that, for PU + X , the antipode is known to be a * -anti-automorphism of C(PU + X ) (see e.g., [42, Proposition 1.7.9]). For the sake of those unacquainted with quantum group technicalities, we supply a self-contained proof. Lemma 6.6. Let X be a finite set. The map ∂(u x,x ′ ,a,a ′ ) = u op x ′ ,x,a ′ ,a , x, x ′ , a, a ′ ∈ X, extends to a *-isomorphism ∂ : C(PU + X ) → C(PU + X ) op . Proof. Let π : C(PU + X ) → B(H) be a faithful *-representation and U = (U a,x ) a,x ∈ M X (B(H)) be a bi-unitary such that π(u x,x ′ ,a,a ′ ) = U * a,x U a ′ ,x ′ , x, x ′ , a, a ′ ∈ X. Set V a,x = U * d a,x , x, a ∈ X. We observe that V := (V a,x ) a,x is a bi-unitary. Indeed, using (44), we have that V * V = I and V t * V t = I; the relations V V * = I and V t V t * = I follow analogously. It follows that there exists a *-representation ρ of C(PU + X ) with ρ(u x,x ′ ,a,a ′ ) = u op x ′ ,x,a ′ ,a , x, x ′ , a, a ′ ∈ X; note that ρ is a (well-defined) *-homomorphism from C(PU + X ) into C(PU + X ) op . By symmetry considerations, ρ is a *-isomorphism.
Before formulating the next theorem, we introduce some notation and terminology. If Φ : M X → M X is a quantum channel, we write Φ ♯ : M X → M X for the quantum channel given by Φ ♯ (ω) = Φ(ω t ) t , ω ∈ M X . We call a channel Φ : M X → M X a unitary channel if there exists a unitary U = (λ a,x ) a,x∈X ∈ M X , such that Φ(ω) = U * ωU , ω ∈ M X . Finally, a trace τ : B → C of a C*-algebra B is called abelian if there exist an abelian C*-algebra A, a *-homomorphism π : B → A and a state φ : A → C such that τ = φ • π.
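The relation between a unitary channel and its companion Φ ♯ can be checked numerically. The sketch below (added for illustration; it assumes Φ ♯ (ω) = Φ(ω t ) t , which is consistent with Subsection 7.2 where Φ ⊗ Φ ♯ acts as conjugation by U ⊗ Ū) verifies that Φ ♯ is conjugation by the entrywise conjugate of U, and that Φ is unital and trace preserving.

```python
# A unitary channel Phi(w) = U* w U on M_2 and its companion Phi^#,
# assuming Phi^#(w) = Phi(w^t)^t.

def matmul(a, b):
    n = len(a)
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(n))
                       for j in range(n)) for i in range(n))

def transpose(a):
    n = len(a)
    return tuple(tuple(a[j][i] for j in range(n)) for i in range(n))

def conj(a):
    return tuple(tuple(z.conjugate() for z in row) for row in a)

def adj(a):  # conjugate transpose
    return transpose(conj(a))

U = ((0, 1j), (1, 0))                 # a unitary (phase-permutation) matrix
phi = lambda w: matmul(matmul(adj(U), w), U)
phi_sharp = lambda w: transpose(phi(transpose(w)))

w = ((1, 2 + 1j), (2 - 1j, 3))        # a self-adjoint test matrix
Ubar = conj(U)

# Phi^# is conjugation by the entrywise conjugate of U:
assert phi_sharp(w) == matmul(matmul(adj(Ubar), w), Ubar)
# Phi is unital and trace preserving:
I2 = ((1, 0), (0, 1))
assert phi(I2) == I2
assert sum(phi(w)[i][i] for i in range(2)) == sum(w[i][i] for i in range(2))
```

All entries are Gaussian integers, so the equalities hold exactly.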
Theorem 6.7. Let X be a finite set and Γ : M XX → M XX be a QNS bicorrelation. Then (i) Γ ∈ Q bic qc if and only if there exists a trace τ : C(PU + X ) → C such that (45) Γ(ǫ x,x ′ ⊗ ǫ y,y ′ ) = (τ (u x,x ′ ,a,a ′ u y ′ ,y,b ′ ,b )) a,a ′ ,b,b ′ , x, x ′ , y, y ′ ∈ X; (ii) Γ ∈ Q bic q if and only if (45) holds for a trace of C(PU + X ) that factors through a finite dimensional C*-algebra; (iii) Γ ∈ Q bic loc if and only if (45) holds for an abelian trace of C(PU + X ), if and only if there exist unitary channels Φ i , i = 1, . . . , k, such that Γ = k i=1 λ i Φ i ⊗ Φ ♯ i as a convex combination. Proof. (i) Let U := (u a,x ) a,x be the universal bi-unitary and Γ : M XX → M XX be given via (45). There exists a state ν : C(PU + X ) ⊗ max C(PU + X ) op → C, given by (46) ν implying that Γ = Γ s . By Lemma 6.5 (i) and Theorem 5.5, Γ ∈ Q bi qc . Since U is unitary, by the proof of [11,Theorem 4.3], Γ is concurrent.
Conversely, let Γ ∈ Q bic qc . By Theorem 5.5, there exists a state s : C X ⊗ max C X → C such that Γ = Γ s . Let V = (v a,x ) a,x be a universal bi-isometry (see Subsection 3.2) and denote by f y,y ′ ,b,b ′ the canonical generators of the second copy of C X in the tensor product. The concurrency of Γ implies the validity of the condition (47) x ′ ,y ′ ∈X s e x ′ ,y ′ ,a,b ⊗ f x ′ ,y ′ ,a,b = 1, a, b ∈ X.
Indeed, set Z y,z,b,c := ( v b,y ⊗ I X 0 ; 0 v c,z ⊗ I X ). After applying the canonical shuffle M 2 (M X (C X )) ≃ M X (M 2 (C X )), we obtain
G̃ y,z,b,c := (
δ x,x ′ e y,y,b,b − Σ a e y,x,b,a e x ′ ,y,a,b    δ x,x ′ e y,z,b,c − Σ a e y,x,b,a e x ′ ,z,a,c
δ x,x ′ e z,y,c,b − Σ a e z,x,c,a e x ′ ,y,a,b    δ x,x ′ e z,z,c,c − Σ a e z,x,c,a e x ′ ,z,a,c
) x,x ′ ≥ 0,
implying (49), along with the relations Σ a∈X v a,x v * a,x ≤ 1, x ∈ X. Identity (49) now shows that G y,y,b,b ∈ M X (C X ) + , and hence (50) τ (X) (G y,y,b,b ) ∈ M + X . We have that (51) holds, and hence (52) Σ x,y∈X Σ a,b∈X e y,x,b,a e x,y,a,b ≤ Σ x,y∈X Σ b∈X e y,y,b,b = |X| 2 1; similarly, Σ x,y∈X Σ a,b∈X f x,y,a,b f y,x,b,a ≤ |X| 2 1.
Using (51), we now have that τ ( e y,y,b,b − Σ a∈X e y,x,b,a e x,y,a,b ) = 0 for all x, y, b ∈ X.
Thus the diagonal entries of τ (X) (G y,y,b,b ) are zero; the positivity condition (50) implies that the off-diagonal entries of τ (X) (G y,y,b,b ) are also zero. Now the positivity condition (49), together with the Cauchy-Schwarz inequality, implies τ (2X) ( Q G̃ 1/2 y,z,b,c ) = 0, for all Q ∈ M 2 (M X (C X )), and hence τ (2X) annihilates the closed ideal of M 2 (M X (C X )) generated by G̃ 1/2 y,z,b,c . In particular, τ (2X) annihilates the closed ideal of M 2 (M X (C X )) generated by G̃ y,z,b,c ; since C X is unital, this implies that τ annihilates the closed ideal of C X generated by the elements g x,x ′ y,z,b,c , x, x ′ , y, z, b, c ∈ X, that is, J 1 .
Similarly, observe that Σ x∈X e y,x,b,a e x,y,a,b may be handled by means of Remark 6.2 and (48). It follows that τ annihilates the ideal J 2 , generated by h a,a ′ y,z,b,c , where a, a ′ , y, z, b, c ∈ X, and hence it annihilates J 1 + J 2 . Hence τ induces a tracial state (denoted in the same fashion) on the quotient C X /J . An application of Lemma 6.5 (i) completes the proof.
(ii) Suppose that Γ : M XX → M XX is a quantum concurrent QNS bicorrelation. By [11, Theorem 4.3], there exists a finite dimensional C*-algebra A, a trace t on A, and a *-homomorphism α : U X,A → A, such that Γ = Γ t•α . After taking a quotient, we may assume that t is faithful. Let ρ : C X,A → U X,A be the canonical quotient map, whose existence is guaranteed by [11, Lemma 4.2]. Let τ̃ : C X,A → C be the functional, given by τ̃ (u) = (t • α • ρ)(u), u ∈ C X,A ; clearly, τ̃ is a trace on C X,A . Note, further, that Γ = Γ τ̃ (for brevity here, and in the sequel, Γ τ̃ is used to denote Γ s τ̃ , where s τ̃ is the state, canonically associated with the trace τ̃ ). By the proof of (i), τ̃ annihilates the ideal J̃ defined in (39); thus, as t is faithful, (α • ρ)(J̃ ) = 0 and hence we get a * -homomorphism ρ̃ : C(PU + X ) → A and the trace τ = t • ρ̃ on C(PU + X ) which factors through A. Conversely, suppose that B is a finite dimensional C*-algebra. Let π : C(PU + X ) → B be a unital *-homomorphism and τ̃ : B → C be a trace such that, if τ = τ̃ • π, then Γ = Γ τ . By Lemma 6.5 (ii), there exists a finite dimensional Hilbert space K and a bi-unitary matrix U = (U a,x ) a,x ∈ M X (B(K)), such that π(u x,x ′ ,a,a ′ ) = U * a,x U a ′ ,x ′ , x, x ′ , a, a ′ ∈ X. Now a straightforward verification shows that Γ ∈ Q bic q . (iii) Suppose that Γ ∈ Q bic loc . By [11, Theorem 4.3 (iii)], there exists an abelian C*-algebra A, a *-homomorphism π̃ : U X,A → A and a state φ : A → C such that, if τ̃ = φ • π̃ then (τ̃ is a trace on U X,A such that) Γ = Γ τ̃ . Realise A = C(Ω) for some compact Hausdorff space Ω and let µ be a regular Borel measure on Ω such that φ(h) = ∫ Ω h dµ. Writing Γ in the corresponding integral form, as µ can be approximated by convex combinations of point mass evaluations, Γ can be approximated by convex combinations of correlations of the corresponding form. Since the matrices M i give rise to (one-dimensional) *-representations of U X,A , by Lemma 6.3, they admit factorisations of the form µ x,x ′ ,a,a ′ .
By the Carathéodory Theorem and compactness, we have that Γ is itself a convex combination of this form. We further have that . . , k, and in particular Φ i is a unitary channel, i = 1, . . . , k.
Suppose that Φ : M X → M X is a unitary channel. Let U = (λ a,x ) a,x ∈ M X be a unitary (and hence a bi-unitary) such that Φ(ω) = U * ωU , ω ∈ M X . We have that Thus, Φ ⊗ Φ ♯ is a concurrent correlation and, since Φ is unital, it is a concurrent bicorrelation. Since Q bic loc is convex, we have that all convex combinations of elementary tensors of the form Φ ⊗ Φ ♯ belong to Q bic loc .
Since U i has scalar entries, it is automatically a bi-unitary, and hence gives rise to a canonical (one-dimensional) unital *-representation of C(PU + X ). A standard argument now shows that Γ = Γ τ for a trace on the (finite dimensional) abelian C*-algebra D k .
Finally, if Γ = Γ τ , where τ factors through an abelian C*-algebra then the argument in the first paragraph of (iii) shows that Γ ∈ Q bic loc .
Remark 6.8. Assume that τ is an amenable trace of C(PU + X ). By [13, Theorem 6.2.7], the functional µ : C(PU + X ) ⊗ min C(PU + X ) op → C, given by µ(u ⊗ v op ) = τ (uv), is a well-defined state. Letting s = µ • (id ⊗ ∂) (a state on C(PU + X ) ⊗ min C(PU + X ) op ), one can proceed similarly to the first paragraph of the proof of Theorem 6.7 to conclude that Γ ∈ Q bic qa . We do not know if, conversely, every Γ ∈ Q bic qa arises from an amenable trace on C(PU + X ). Recall [47] that an NS correlation p over (X, X, X, X) is called bisynchronous if p(a, b|x, x) = 0 whenever a ≠ b, and p(a, a|x, y) = 0 whenever x ≠ y.
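The bisynchronicity conditions are easy to inspect in the simplest case. The following sketch (our illustration; the permutation is arbitrary) verifies that the deterministic correlation arising from a permutation of X is bisynchronous, and that it is a genuine NS correlation.

```python
# A deterministic strategy arising from a permutation sigma of X is
# bisynchronous: p(a,b|x,y) = 1 iff a = sigma(x) and b = sigma(y).

X = range(4)
sigma = [2, 0, 3, 1]                       # a permutation of {0,1,2,3}

def p(a, b, x, y):
    return 1.0 if (a == sigma[x] and b == sigma[y]) else 0.0

# p(a,b|x,x) = 0 whenever a != b:
assert all(p(a, b, x, x) == 0 for x in X for a in X for b in X if a != b)
# p(a,a|x,y) = 0 whenever x != y:
assert all(p(a, a, x, y) == 0 for a in X for x in X for y in X if x != y)
# p is a (classical, hence local) NS correlation: outputs sum to 1.
assert all(sum(p(a, b, x, y) for a in X for b in X) == 1 for x in X for y in X)
```

The second assertion uses that sigma is injective: a = sigma(x) = sigma(y) forces x = y.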
It was shown in [47, Remark 2.1] that bisynchronous correlations of type t = ns are (classical) bicorrelations. The next statement describes the relation between bisynchronicity and concurrency. Proposition 6.9. Let t ∈ {loc, q, qc}. If p ∈ C t is a bisynchronous NS correlation over the quadruple (X, X, X, X) then there exists Γ ∈ Q bic t such that Proof. We consider first the case t = qc. Let p ∈ C qc be a bisynchronous correlation. By [47,Theorem 2.2], there exists a tracial state τ : C(S + X ) → C such that (55) p(a, b|x, y) = τ (p a,x p b,y ), x, y, a, b ∈ X.
Let p x,x ′ ,a,a ′ := p * a,x p a ′ ,x ′ = p a,x p a ′ ,x ′ , x, x ′ , a, a ′ ∈ X, and let C(PS + X ) be the subalgebra of C(S + X ), generated by the elements of the form p x,x ′ ,a,a ′ , x, x ′ , a, a ′ ∈ X. Since every quantum permutation is a bi-unitary, there exists a unital *-homomorphism π : C(PU + X ) → C(PS + X ) with π(u x,x ′ ,a,a ′ ) = p x,x ′ ,a,a ′ , x, x ′ , a, a ′ ∈ X.
Let τ̃ = τ • π; thus, τ̃ is a tracial state on C(PU + X ) and hence, by Theorem 6.7, Γ τ̃ is a quantum commuting concurrent QNS bicorrelation. Moreover, if x, y ∈ X then Γ τ̃ (ǫ x,x ⊗ ǫ y,y ) = ( τ̃ (e x,x,a,a e y,y,b,b ) ) a,b = ( τ (p x,x,a,a p y,y,b,b ) ) a,b , and (54) follows. The cases t = q and t = loc are similar.
The quantum graph isomorphism game
In this section, we view the concurrent bicorrelations studied in Section 6 as strategies for the non-commutative graph isomorphism game. This allows us to define quantum information versions of quantum isomorphisms of non-commutative graphs of different types, which we characterise in terms of relations arising from the underlying graphs. 7.1. Quantum commuting isomorphisms. Several related concepts of quantum graphs have been studied in the literature (see [9,15,20]). Here we work with the notion that is used in [52], [51] and [11]. Let X be a finite set, H = C X , and recall that H d stands for the dual (Banach) space of H. Note that, as an additive group, H d can be identified with H; we write ζ̄ for the element of H d corresponding to the vector ζ in H (so that ζ̄ : H → C is given by ζ̄(ξ) = ⟨ξ, ζ⟩). Let θ : H ⊗ H → L(H d , H) be the linear map given by θ(ξ ⊗ η)(ζ̄ ) = ⟨ξ, ζ⟩η, ζ ∈ H. We record the elementary properties of θ in (56). For a subspace U ⊆ C X ⊗ C X , set S U := θ(U ). We let ∂ X : (C X ) d → C X be the linear mapping given by ∂ X (ē x ) = e x , x ∈ X, and we set S̃ U := S U ∂ −1 X ; thus, S̃ U ⊆ L(C X ). We denote by m : C X ⊗ C X → C the map, given by m(e x ⊗ e y ) = δ x,y , x, y ∈ X. Let also f : C X ⊗ C X → C X ⊗ C X be the flip operator, given by f(ξ ⊗ η) = η ⊗ ξ.
In the sequel, for a subspace U ⊆ C X ⊗ C X , we denote by P U the orthogonal projection from C X ⊗ C X onto U ; thus, P U ∈ M XX . For a classical (simple, undirected) graph G with vertex set X, we use ∼ (or ∼ G when a clarification is needed) to denote the adjacency relation of G. The graph G gives rise to the quantum graph U G = span{e x ⊗ e y : x ∼ y}, and we write P G = P U G ; note that P G ∈ D XX , and that S U G = span{ǫ x,y : x ∼ y} is a traceless self-adjoint subspace of M X . More generally, S̃ U ⊆ M X is always a traceless transpose-invariant subspace for any quantum graph U ; this is the suitable version arising in our setting of Stahlke's quantum graphs [51], where tracelessness and self-adjointness are assumed as part of the definition.
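For a classical graph, the properties of S U G just listed can be verified directly; the sketch below (an added illustration, using the triangle K 3 as an arbitrary example) checks that the spanning matrix units are traceless and that the spanning set is closed under transposition.

```python
# For a classical graph G, S_{U_G} = span{eps_{x,y} : x ~ y} is spanned by
# matrix units supported off the diagonal (no loops), so it is traceless,
# and, since G is undirected, invariant under the transpose.

n = 3
edges = {(0, 1), (1, 0), (1, 2), (2, 1), (0, 2), (2, 0)}   # triangle K_3

def eps(x, y):                      # matrix unit eps_{x,y} in M_3
    return tuple(tuple(1 if (i, j) == (x, y) else 0 for j in range(n))
                 for i in range(n))

basis = {eps(x, y) for (x, y) in edges}

def transpose(a):
    return tuple(tuple(a[j][i] for j in range(n)) for i in range(n))

# every spanning matrix unit has zero trace (x != y for an edge):
assert all(sum(m[i][i] for i in range(n)) == 0 for m in basis)
# the spanning set is closed under transposition (x ~ y iff y ~ x):
assert {transpose(m) for m in basis} == basis
```

Self-adjointness of the span follows from the transpose invariance together with the fact that the matrix units have real entries.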
To motivate Definition 7.2 below, we first recall the graph isomorphism game [1] for graphs G and H, both with vertex set X. For elements x, y ∈ X, we denote by rel G (x, y) the element of the set {=, ∼, ≁}, which describes the adjacency relation in the pair (x, y), in the graph G. A correlation p ∈ C t is said to be a perfect t-strategy for the (G, H)-isomorphism game, provided p is bisynchronous and (57) p(a, b|x, y) ≠ 0 =⇒ rel G (x, y) = rel H (a, b), x, y, a, b ∈ X. We note that, for a given correlation type t, two graphs G and H with vertex set X are t-isomorphic [1] if and only if there exists a bisynchronous bicorrelation p of type t over the quadruple (X, X, X, X), such that (58) ω ∈ D + XX and ω = P G ωP G =⇒ Γ(ω) = P H Γ(ω)P H and (59) σ ∈ D + XX and σ = P H σP H =⇒ Γ * (σ) = P G Γ * (σ)P G . Indeed, condition (58) is equivalent to requiring that p(a, b|x, y) = 0 if x ∼ G y but a ≁ H b, while (59) is equivalent to requiring that p(a, b|x, y) = 0 if a ∼ H b but x ≁ G y; in conjunction, these two conditions are equivalent to (57).
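The matching-relations requirement for perfect strategies can be illustrated with the deterministic strategy coming from an actual graph isomorphism. The sketch below (our illustration; the graphs and the bijection are arbitrary) checks that p(a, b|x, y) is nonzero only when (x, y) and (a, b) are in the same relation.

```python
# For graphs G and H isomorphic via a bijection phi, the deterministic
# strategy p(a,b|x,y) = [a = phi(x)][b = phi(y)] is nonzero only when the
# pair (x,y) in G and the pair (a,b) in H are in the same relation
# (equal / adjacent / non-adjacent).

X = range(4)
G = {(0, 1), (1, 0), (1, 2), (2, 1), (2, 3), (3, 2)}        # path 0-1-2-3
phi = [3, 2, 1, 0]                                           # an isomorphism
H = {(phi[x], phi[y]) for (x, y) in G}                       # relabelled graph

def rel(E, x, y):
    return '=' if x == y else ('~' if (x, y) in E else '!~')

def p(a, b, x, y):
    return 1 if (a == phi[x] and b == phi[y]) else 0

# phi preserves all three relations:
assert all(rel(G, x, y) == rel(H, phi[x], phi[y]) for x in X for y in X)
# p(a,b|x,y) != 0  ==>  rel_G(x,y) = rel_H(a,b):
assert all(p(a, b, x, y) == 0 or rel(G, x, y) == rel(H, a, b)
           for x in X for y in X for a in X for b in X)
```

Such deterministic strategies are exactly the extreme points of the local perfect strategies for the isomorphism game.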
Recall [52,11] that, if U ⊆ C X ⊗ C X and V ⊆ C X ⊗ C X are quantum graphs, and P = P U and Q = P V , then the perfect strategies for the quantum homomorphism game U → V are the QNS correlations Γ : M XX → M XX such that ω ∈ M + XX and ω = P ωP =⇒ Γ(ω) = QΓ(ω)Q. Definition 7.2. Let t ∈ {loc, q, qa, qc, ns}. We say that U and V are t-isomorphic, and write U ∼ = t V, if there exists Γ ∈ Q bic t such that (i) Γ is a perfect strategy for U → V, and (ii) Γ * is a perfect strategy for V → U . Remark 7.3. Although our main interest in this section lies in quantum graphs, it is important to note, for the development in Section 8, that Definition 7.2 can be stated in a greater generality, involving subspaces U and V of C X ⊗ C X that are not necessarily quantum graphs.
In the next theorem, we give an operator algebraic characterisation of the relation U ∼ = qc V. We recall the leg numbering notation: for U = (U a,x ) a,x ∈ M X ⊗ B(H), we write U 2,3 = I X ⊗ U , and U 1,3 = F(I X ⊗ U ). Note that U 2,3 , U 1,3 ∈ M XX ⊗ B(H). For the formulation of the next theorem, we set Ā = A t * , and call a von Neumann algebra tracial if it admits a tracial state. If H is a Hilbert space and N ⊆ B(H) is a von Neumann algebra, an operator matrix U = (U a,x ) a,x∈X will be called N -aligned if U * a,x U b,y ∈ N for all x, y, a, b ∈ X.
Theorem 7.4. Let U and V be quantum graphs in C X ⊗C X , and set P = P U and Q = P V . The following are equivalent:
exists a tracial von Neumann algebra N ⊆ B(H) and an N -aligned bi-unitary
Proof. (i)⇒(ii) For a vector ξ = Σ x,y∈X α x,y e x ⊗ e y ∈ C X ⊗ C X , let ξ̄ = Σ x,y∈X ᾱ x,y e x ⊗ e y and set Y ξ := (α x,y ) x,y ; note that Y ξ ∈ M X (and that the use of the notation ξ̄ agrees, up to a canonical identification, with the definition in the beginning of Subsection 7.1). Let Γ : M XX → M XX be a concurrent quantum commuting bicorrelation satisfying conditions (i) and (ii) in Definition 7.2. By Theorem 6.7, there exists a tracial state τ : C(PU + X ) → C such that Γ = Γ τ . Let π τ be the *-representation, associated with τ via the GNS construction, and let ζ be the corresponding cyclic vector. Then N = π τ (C(PU + X )) ′′ is a finite von Neumann algebra, on which the vector state corresponding to ζ is faithful and tracial.
Let E = (π τ (u x,x ′ ,a,a ′ )) x,x ′ ,a,a ′ . As in the proof of [11, Theorem 5.5], we have that implying, by the faithfulness of τ , that By Lemma 6.5 (ii), there exists a bi-unitary U = (U a,x ) a,x , such that E = (U * a,x U a ′ ,x ′ ) x,x ′ ,a,a ′ . Writing ξ = Σ x,y∈X α x,y e x ⊗ e y and η = Σ a,b∈X β a,b e a ⊗ e b , we calculate.
Let t : M X → M X be the map, given by t(T ) = T t . Since the operators P ⊥ and Q are self-adjoint, (t ⊗ t)(Q) = Q̄ and (t ⊗ t)(P ⊥ ) = P̄ ⊥ . Thus, applying the map t ⊗ t ⊗ id to the relation (62), we obtain (P ⊥ ⊗ I)F (Q ⊗ I) = 0. (ii)⇒(i) Assume that (P ⊗ I)U t 1,3 U * 2,3 (Q ⊥ ⊗ I) = 0 and (P ⊥ ⊗ I)U t 1,3 U * 2,3 (Q ⊗ I) = 0. By Theorem 6.7 (i), the linear map Γ, given by (45), is a concurrent quantum commuting bicorrelation. Reversing the arguments from the previous paragraphs and using the proof of [11, Theorem 5.5], we obtain that, if E = (U * a,x U a ′ ,x ′ ) x,x ′ ,a,a ′ then for all ξ ∈ U and all η ∈ V ⊥ . Similarly, Consider U 2,3 Ū 1,3 as a linear operator on C XX ⊗ B(H) by letting Fix ξ ∈ C XX . We have To see that U (S̃ U ⊗ 1)U * ⊆ S̃ V ⊗ B(H), let ξ ∈ U , and fix orthonormal bases (η i ) i∈I and (ζ j ) j∈J of V and V ⊥ , respectively. Then From the previous arguments we obtain Let ω g,h be the vector functional on B(H), given by ω g,h (T ) = ⟨T g, h⟩ and, for η ∈ C XX , let ℓ η be the linear functional on C XX , given by ℓ η (ξ) = ⟨ξ, η⟩.
Remarks. (i)
The arguments in the proof of Theorem 7.4 can be used to conclude that U → qc V if and only if there exists a tracial von Neumann algebra N ⊆ B(H) and an N - This complements the characterisation obtained in [11,Theorem 5.7].
(ii) Similar results to those of Theorem 7.4 hold for U ≃ q V, in which case the space H is finite-dimensional. A treatment of the case U ≃ loc V is presented in Subsection 7.2 below. (iii) there exists a tracial von Neumann algebra N ⊆ B(H) and an N -aligned bi-unitary U = (U a,x ) a,x ∈ M X (B(H)) such that
exists a tracial von Neumann algebra N ⊆ B(H) and a bi-unitary
Proof. We have U G = span{e x ⊗ e y : x ∼ G y} and U H = span{e a ⊗ e b : a ∼ H b}. As P̄ G = P G and P̄ H = P H , the conditions above are also equivalent to U a,x U * b,y = 0 if either x ∼ G y and a ≁ H b, or x ≁ G y and a ∼ H b. The statement now follows from Theorem 7.4.
Remark 7.6. The conditions on the bi-unitary U contained in Corollary 7.5 are equivalent to the conditions A H c * U (A G ⊗ I)U * = 0 and A G c * U t (A H ⊗ I)Ū = 0, where G c is the complement of G and * denotes the Schur product. We can formulate a similar characterisation for types loc and q. In the case when the bi-unitary U is actually a quantum permutation (that is, the entries u i,j of U are all orthogonal projections), these conditions are equivalent to the condition that U (A G ⊗ I)U * = A H ⊗ I. Indeed, if U is a quantum permutation satisfying A H c * U (A G ⊗ I)U * = 0, then whenever i ≠ j and i ≁ H j, we have Σ k∼ G ℓ u i,k u j,ℓ = 0. Multiplying on the left by u i,k for any fixed k satisfying k ∼ G ℓ, we obtain u i,k u j,ℓ = 0 whenever i ≁ H j, i ≠ j and k ∼ G ℓ. Similarly, if i = j and k ≠ ℓ, then u i,k u j,ℓ = 0.
Next, if we interchange the roles of G and H in the above argument and replace U with the magic unitary U t , the identity A G c * U t (A H ⊗ I)Ū = 0 yields u k,i u ℓ,j = 0 whenever i ≁ G j, i ≠ j and k ∼ H ℓ, or whenever i = j and k ≠ ℓ.
It follows that, if i ∼ H j, then (assuming that n = |X|) we have the desired identity for the (i, j) entry. Similarly, if i ≁ H j, then either i = j or i ∼ H c j, and in either case we obtain the corresponding identity. It follows that U (A G ⊗ I)U * = A H ⊗ I. The converse is immediate.
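In the commutative (scalar) case the computation of Remark 7.6 collapses to elementary linear algebra; the sketch below (our illustration, with an arbitrary path graph and vertex bijection) verifies that a permutation matrix U intertwines the adjacency matrices.

```python
# A quantum permutation with scalar 0/1 entries is a permutation matrix U,
# and U(A_G (x) I)U* = A_H (x) I reduces to U A_G U^t = A_H for an
# isomorphism sigma from G onto H.

n = 4
sigma = [3, 2, 1, 0]                       # vertex bijection, U e_j = e_sigma(j)
U = tuple(tuple(1 if i == sigma[j] else 0 for j in range(n)) for i in range(n))
Ut = tuple(tuple(U[j][i] for j in range(n)) for i in range(n))

A_G = ((0, 1, 0, 0), (1, 0, 1, 0), (0, 1, 0, 1), (0, 0, 1, 0))   # path 0-1-2-3
inv = [sigma.index(a) for a in range(n)]
A_H = tuple(tuple(A_G[inv[a]][inv[b]] for b in range(n)) for a in range(n))

def matmul(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(n))
                       for j in range(n)) for i in range(n))

I4 = tuple(tuple(1 if i == j else 0 for j in range(n)) for i in range(n))
assert matmul(U, Ut) == I4                 # U is unitary (real orthogonal)
assert matmul(matmul(U, A_G), Ut) == A_H   # U A_G U^t = A_H
```

For genuinely quantum permutations the entries are projections on a Hilbert space and the Schur-product argument of Remark 7.6 replaces this direct check.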
7.2. Local isomorphisms. In this subsection, we restrict our attention to quantum graph isomorphisms of local type.
Proposition 7.7. Let X be a finite set, and U and V be quantum graphs in C X ⊗ C X . The following are equivalent: Proof. (i)⇒(ii) Let Γ ∈ Q bic loc be a correlation satisfying the conditions of Definition 7.2 for the quantum graphs U and V. By Theorem 6.7 (iii), The monotonicity of the trace functional now implies that Φ i ⊗ Φ ♯ i satisfies the conditions in Definition 7.2 for every i = 1, . . . , k. We may thus assume that Γ = Φ ⊗ Φ ♯ , where Φ : M X → M X is a unitary quantum channel. Let U ∈ M X be a unitary such that Φ(ω) = U * ωU , ω ∈ M X . A direct verification shows that Γ(ω) = (U ⊗ Ū ) * ω(U ⊗ Ū ), ω ∈ M XX . The first condition in (66) now implies that, for every ξ ∈ U , we have (U ⊗ Ū )(ξ) ∈ V. On the other hand, arguing by symmetry implies that (U ⊗ Ū )(V) ⊆ U ; thus, (ii) follows.
Remark.
Proposition 7.7 can equivalently be seen as a consequence of Theorem 7.4. Indeed, note that, by Theorem 6.7 (iii) and its proof, Γ ∈ Q bic loc if and only if Γ = Σ k i=1 λ i Γ i as a convex combination, where Γ i (ǫ x,x ′ ⊗ ǫ y,y ′ ) = (π i (u x,x ′ ,a,a ′ u y ′ ,y,b ′ ,b )) a,a ′ ,b,b ′ for some * -representation π i : C(PU + X ) → C. Using the fact that all Γ i are positive, it can be easily seen that one can assume that k = 1. Let U = (u a,x ) a,x ∈ M X be the unitary that corresponds to π 1 as in the proof of the implication (i)⇒(ii); we have that U satisfies the corresponding conditions (ii) and (iii). In particular, this gives that (U ⊗ Ū )(U ) = V. Proof. A graph isomorphism ϕ : X → X between G and H gives rise to a permutation unitary operator U ϕ : C X → C X ; letting Φ : M X → M X be the conjugation by U ϕ , we have that the correlation Φ ⊗ Φ ♯ implements an isomorphism U G ∼ = loc U H .
Conversely, suppose that U G ∼ = loc U H . By Proposition 7.7, there exists a unitary U ∈ M X such that (U ⊗ Ū )(U G ) = U H . Letting ϕ : X → X be the induced vertex bijection, we obtain a graph isomorphism between G and H. Corollary 7.9. There exist quantum graphs U and V such that U ∼ = q V but U ≇ loc V.
Proof. By [1, Theorem 6.4], there exist graphs G and H such that G ∼ = q H but G ≇ loc H. By Proposition 7.8, U G ≇ loc U H ; to complete the proof, we show that U G ∼ = q U H . By [36, Theorem 2.1], there exists a quantum permutation matrix (P x,a ) x,a , acting on a finite dimensional Hilbert space H, such that P (A G ⊗ I)P * = A H ⊗ I. By Remark 7.6, U G ∼ = q U H . 7.3. The quantum isomorphism algebra. Let X be a finite set, and U ⊆ C XX and V ⊆ C XX be quantum graphs. We will introduce a C*-algebra whose tracial properties reflect the properties of the isomorphism game U ∼ = V. Let P (resp. Q) be the projection from C XX onto U (resp. from C XX onto V). For matrices S, T ∈ M XX , define a linear map γ S,T , set W = (u x,x ′ ,a,a ′ ) x,x ′ ,a,a ′ ∈ M XX ⊗ C(PU + X ), and let I P,Q be the closed ideal in C(PU + X ), generated by the elements γ P,Q ⊥ (W ⊗ W op ) and γ P ⊥ ,Q (W ⊗ W op ). Set A P,Q = C(PU + X )/I P,Q . We write u̇ for the image of an element u ∈ C(PU + X ) in A P,Q under the quotient map.
Theorem 7.10. Let X be a finite set, U ⊆ C XX (resp. V ⊆ C XX ) be a quantum graph and P ∈ M XX (resp. Q ∈ M XX ) be the projection onto U (resp. V). The following are equivalent for a QNS bicorrelation Γ : M XX → M XX : (i) Γ is a perfect quantum commuting (resp. quantum/local) strategy for the isomorphism game U ∼ = V; (ii) there exists a trace τ (resp. a trace τ that factors through a finite dimensional/abelian *-representation) of A P,Q such that Proof. (i)⇒(ii) We consider first the quantum commuting case. By Theorem 6.7, there exists a tracial state τ : C(PU + X ) → C such that Γ = Γ τ . By linearity, it suffices to verify (67) on matrix units. Since Γ is a perfect strategy for the game U ∼ = V, equation (68) implies that τ vanishes on the elements g and h defined next. Set g = γ P,Q ⊥ (W ⊗ W op ); we claim that g ∈ C(PU + X ) + . To see this, let m : M XX (C(PU + X )) ⊗ max M XX (C(PU + X )) op → M XX (C(PU + X )) be the multiplication map, and note that, if u ∈ M XX (C(PU + X )) + and v op ∈ M XX (C(PU + X )) op+ then m(u ⊗ v op ) ∈ M XX (C(PU + X )) + (this can be seen by realising M XX (C(PU + X )) and M XX (C(PU + X )) op as mutually commuting C*-algebras acting on the same Hilbert space). We have that W ∈ M XX (C(PU + X )) + and, by Lemma 6.6, that W op ∈ M XX (C(PU + X )) op+ . It follows that W ⊗ W op ∈ M XXXX (C(PU + X )) + . Taking partial trace against the positive matrix P ⊗ Q ⊥ yields a positive operator; the claim is now proved after noticing that the latter operator coincides with g.
Similarly, h := γ P ⊥ ,Q (W ⊗ W op ) ∈ C(PU + X ) + . We have that τ (g) = τ (h) = 0; by a straightforward application of the Cauchy-Schwarz inequality, τ annihilates I P,Q and hence induces a trace (denoted in the same way) τ : A P,Q → C. The validity of equation (67) persists on A P,Q . Now consider the case where Γ is a quantum correlation. By Theorem 6.7, there exists a trace τ : C(PU + X ) → C that factors through a finite dimensional C*-algebra, such that Γ = Γ τ . By the previous paragraphs, τ annihilates I P,Q . Thus τ induces a trace (denoted in the same way) τ : A P,Q → C that factors through a finite dimensional C*-algebra and, as before, Γ = Γ τ . The case where Γ is of local type is similar.
Remark 7.11. It follows from identity (68) and the proof of Theorem 7.4 that A P,Q is the universal C * -algebra generated by elements u * a,x u a ′ ,x ′ , where U = (u x,a ) a,x is a bi-unitary matrix, subject to the relations (69). Remark 7.12. Let us consider the special case P = Q; this is the case of quantum automorphisms U → U . We would like to interpret A P,P as a quantum group of automorphisms of the quantum graph U ⊆ C X ⊗ C X . This intuition can be made precise by equipping A P,P with a natural coassociative comultiplication ∆ P : A P,P → A P,P ⊗ A P,P , which turns it into a C * -algebraic compact quantum group.
To construct such a comultiplication ∆ P on A P,P , we first consider C(U + X ), the universal C * -algebra generated by the entries of a bi-unitary U = (u x,a ) ∈ M X (C(U + X )). The C*-algebra C(U + X ) is well known to be a compact matrix quantum group when equipped with the comultiplication ∆ : C(U + X ) → C(U + X ) ⊗ C(U + X ), given by ∆(u x,a ) = Σ c∈X u x,c ⊗ u c,a [55]. Define a new C * -algebra B obtained from C(U + X ) by quotienting by the relations given in (69). Denote the canonical matrix of generators of B by V = (v x,a ) ∈ M X (B). (Note that, by definition, V is the universal X × X bi-unitary satisfying the relations (69).) We claim that the assignment ∆ B (v x,a ) := Σ c v x,c ⊗ v c,a , (x, a ∈ X), determines a co-associative co-multiplication ∆ B : B → B ⊗ B, turning (B, ∆ B ) into a compact matrix quantum group. To see this, it suffices to check that the matrix Ṽ ∈ M X ⊗ B ⊗ B, given by Ṽ x,a = Σ c v x,c ⊗ v c,a , satisfies the defining relations for V (that is, Ṽ is bi-unitary and satisfies the equations (69) in M X ⊗ M X ⊗ B ⊗ B). Indeed, if the above is verified, then the co-multiplication ∆ on C(U + X ) will have been shown to factor through the quotient C(U + X ) → B, proving that ∆ B is well defined and induces a quantum group structure on B.
First note that the fact that Ṽ is bi-unitary follows immediately from the formula for Ṽ and the bi-unitarity of V . To check (69), we first note that the computation may be performed entrywise in M X ⊗ M X ⊗ B ⊗ B. Finally, we note that A P,P is, by construction, the C * -subalgebra of B generated by the degree-two elements of B of the form v * x,a v x ′ ,a ′ , x, x ′ , a, a ′ ∈ X. The natural co-multiplication ∆ P on A P,P is then the restriction of ∆ B to A P,P (note that ∆ B (A P,P ) ⊆ A P,P ⊗ A P,P ).
Remark 7.13. Note that, by Proposition 7.7, any character on A P,P corresponds to a unitary U ∈ U X such that (U ⊗ Ū )(U ) = U . In other words, the abelianisation of A P,P corresponds via Gelfand duality to the classical compact group of unitary matrices {U ∈ U X : (U ⊗ Ū )(U ) = U }. The pair (A P,P , ∆ P ) is therefore the quantisation of this very natural matrix group of automorphisms of U .
Connection with algebraic quantum isomorphisms
The purpose of this section is to clarify the connection between the notion of a quantum graph isomorphism defined and characterised in Section 7 and the notion, defined and studied in [9]. Our main reference for the latter concept will be [15], and we follow its notation as closely as possible.
8.1. Algebraic isomorphism as a tighter equivalence. We fix throughout the section a finite set X and let n = |X|. We denote by tr the normalised trace on M X ; thus, tr = (1/|X|) Tr. In order to simplify the notation, we will write 1 in the place of I X .
Denote by L 2 (M X ) the Hilbert space with underlying linear space M X and inner product arising from the GNS construction applied to the pair (M X , tr). More specifically, if Λ : M X → L 2 (M X ) is the GNS map, we set ⟨Λ(a), Λ(b)⟩ = tr(a * b) (note that the inner product is linear in the second variable). In what follows, we view M X as a subalgebra of B(L 2 (M X )), where an element a ∈ M X gives rise to the operator (denoted in the same way and given by) aΛ(b) = Λ(ab), a, b ∈ M X . Note that Λ(a) = aΛ(1), a ∈ M X .
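The normalisation of the trace fixes the lengths of the matrix units in L 2 (M X ); the following numerical sketch (added for illustration) verifies that the rescaled matrix units √n Λ(ǫ i,j ) form an orthonormal family, a fact used repeatedly below.

```python
# Orthonormality in L^2(M_n) under the normalised trace:
# <Lambda(a), Lambda(b)> = tr(a* b) gives
# <Lambda(eps_{i,j}), Lambda(eps_{k,l})> = delta_{ik} delta_{jl} / n,
# so {sqrt(n) Lambda(eps_{i,j})} is an orthonormal basis.

n = 3

def eps(i, j):
    return tuple(tuple(1 if (p, q) == (i, j) else 0 for q in range(n))
                 for p in range(n))

def ip(a, b):  # <Lambda(a), Lambda(b)> = Tr(a* b)/n
    return sum(a[p][q].conjugate() * b[p][q]
               for p in range(n) for q in range(n)) / n

for i in range(n):
    for j in range(n):
        for k in range(n):
            for l in range(n):
                expected = 1 / n if (i, j) == (k, l) else 0
                assert ip(eps(i, j), eps(k, l)) == expected

# hence the vectors sqrt(n) * eps_{i,j} have unit norm:
assert all(n * ip(eps(i, j), eps(i, j)) == 1 for i in range(n) for j in range(n))
```

The formula `ip` uses that Tr(a* b) equals the entrywise sum of conj(a) times b.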
Let m : L 2 (M X ) ⊗ L 2 (M X ) → L 2 (M X ) be the multiplication map, that is, the map, defined by letting m(Λ(a)⊗Λ(b)) = Λ(ab), and m * : L 2 (M X ) → L 2 (M X ) ⊗ L 2 (M X ) be its Hilbert space adjoint. For notational simplicity, we will often suppress the use of Λ, and consider m (resp. m * ) as a map from M X ⊗ M X to M X (resp. from M X to M X ⊗ M X ). We note that Indeed, for p, q, s, t = 1, . . . , n, we have tr(ǫ k,i ǫ p,q ) tr(ǫ j,k ǫ s,t ) (72) = n tr(ǫ s,i ǫ p,q ) tr(ǫ j,t ).
The right hand sides of (71) and (72) are thus equal, establishing (70), which, further, implies the corresponding operator identity. Let η : C → L 2 (M X ) be the map, given by η(λ) = λΛ(1). Recall [15, Definition 2.4] that a self-adjoint linear map A : L 2 (M X ) → L 2 (M X ) is called a quantum adjacency matrix if it has the following properties: (3) m(A ⊗ 1)m * = 0. We stress that condition (3) reflects the fact that we work with a quantum version of graphs without loops (graphs with loops are quantised in this context by requiring the condition m(A ⊗ 1)m * = 1 instead of (3) [15, p. 6]). A triple G = (M X , tr, A), where A is a quantum adjacency matrix, is called in [9,15] a quantum graph. In order to distinguish this notion from the one used in the present paper, we will hereafter refer to it as an algebraic quantum graph.
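The maps m and m * entering the quantum adjacency conditions can be made fully explicit for M n with the normalised trace. The sketch below (our reconstruction from the inner-product computations above; the formula for m * and the constant n² should be treated as our derivation, not a quotation) verifies the adjoint property of m * and the resulting identity m m * = n² id.

```python
# The adjoint of the multiplication map m(Lambda(a) (x) Lambda(b)) = Lambda(ab)
# with respect to the tr-inner product satisfies
#   m*(Lambda(x)) = n * sum_{i,j} Lambda(x eps_{i,j}) (x) Lambda(eps_{j,i}),
# from which m m* = n^2 id on L^2(M_n).

n = 2

def eps(i, j):
    return tuple(tuple(1 if (p, q) == (i, j) else 0 for q in range(n))
                 for p in range(n))

def matmul(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(n))
                       for j in range(n)) for i in range(n))

def ip(a, b):  # <Lambda(a), Lambda(b)> = Tr(a* b)/n
    return sum(a[p][q].conjugate() * b[p][q]
               for p in range(n) for q in range(n)) / n

x = ((1, 2), (3, 4))
b = ((0, 1), (1, 0))
c = ((2, 0), (0, 5))

# adjoint property: <m* Lambda(x), Lambda(b) (x) Lambda(c)> = <Lambda(x), Lambda(bc)>
lhs = n * sum(ip(matmul(x, eps(i, j)), b) * ip(eps(j, i), c)
              for i in range(n) for j in range(n))
rhs = ip(x, matmul(b, c))
assert abs(lhs - rhs) < 1e-12

# m m* Lambda(x) = n * sum_{i,j} Lambda(x eps_{i,j} eps_{j,i}) = n^2 Lambda(x)
acc = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(n):
        t = matmul(matmul(x, eps(i, j)), eps(j, i))
        for p in range(n):
            for q in range(n):
                acc[p][q] += n * t[p][q]
assert acc == [[n * n * x[p][q] for q in range(n)] for p in range(n)]
```

Note that the constant depends on the normalisation of the trace: with the unnormalised trace Tr one obtains m m * = n id instead.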
We fix an algebraic quantum graph G = (M X , tr, A). We associate with G the M X -bimodule S ′ in B(L 2 (M X )) generated by A (its dependence on G is suppressed for notational simplicity); thus, recalling that the elements of M X are viewed as operators on L 2 (M X ), we have that If x, y ∈ M X , we write Θ Λ(x),Λ(y) for the rank one operator, given by Proof. (i) Note that { √ nΛ(ǫ i,j )} 1≤i,j≤n is an orthonormal basis for L 2 (M X ); thus, and the claim now follows from (75).
(iii) The claim follows from the fact noted above. We set Λ ⊗2 = Λ ⊗ Λ and write U G = Λ ⊗2 (Ψ(S ′ )); thus, U G ⊆ L 2 (M X ) ⊗ L 2 (M X ) (we note that, in the case G is classical, the space U G is closely related, although not identical, to the space denoted in the same way in Section 7). Throughout this section, we fix an orthonormal basis {f j } n 2 j=1 of L 2 (M X ); note that {f̄ j } n 2 j=1 is also an orthonormal basis. Let ∂ : L 2 (M X ) → L 2 (M X ) be the linear operator with ∂(f j ) = f̄ j , j = 1, . . . , n 2 , and set Ũ G = (∂ ⊗ 1)(U G ). We next record the properties of the spaces of the form Ũ G , akin to the properties of quantum graphs in the sense of Definition 7.1. We write d for the conjugate-linear map on L 2 (M X ) ⊗ L 2 (M X ) determined by the basis, and recall that f is the flip map on L 2 (M X ) ⊗ L 2 (M X ). We note that the definitions of the maps ∂ and d depend on the basis, but the concrete basis we are working with will be fixed or clear from the context. The same comment applies for the notion we define next.
Proposition 8.4. Let G = (M X , tr, A) be an algebraic quantum graph. Then Ũ G is a quantum pseudo-graph.
and hence the corresponding identity holds for all y ∈ M X . Therefore, Σ n 2 j=1 λ j x j x * j = 0. By the previous paragraph, Ũ G is skew.
Remark 8.5. Proposition 8.4 shows that an algebraic quantum graph G = (M X , tr, A) gives rise to a canonical quantum pseudo-graph Ũ G ⊆ L 2 (M X ) ⊗ L 2 (M X ). The reason we are led to work with quantum pseudo-graphs instead of quantum graphs in the sense of our Definition 7.1 lies in the setup of QNS correlations, which is borrowed from [20]. In defining QNS correlations, instead of no-signalling quantum channels Γ : one could start with no-signalling quantum channels Γ ′ : For the class of quantum commuting no-signalling correlations, this would lead to Choi matrices of the form (τ (e x,x ′ ,a,a ′ e y,y ′ ,b,b ′ )), as opposed to the matrices (τ (e x,x ′ ,a ′ ,a e y ′ ,y,b ′ ,b )) that arise through the current setup. As we will shortly see, in order to obtain a neat connection between the two types of quantum isomorphisms, one also needs to work with a slightly different concept of quantum isomorphism than the one employed in Section 7. We make this discussion rigorous in Theorem 8.9.
Let G r = (M X , tr, A r ) be an algebraic quantum graph, r = 1, 2. Let O(G 1 , G 2 ) be the universal (unital) C * -algebra with generators p i,j , i, j = 1, . . . , n 2 , and relations that turn the map ρ : M X → M X ⊗ O(G 1 , G 2 ), given by (78) ρ(f i ) = Σ n 2 j=1 f j ⊗ p j,i , i = 1, . . . , n 2 , into a unital * -homomorphism such that (79) holds and (80) (tr ⊗ id) • ρ = tr(·)1. Remark 8.6. It follows from the proof of [18, Theorem 4.7] that the matrix P = (p i,j ) n 2 i,j=1 ∈ M n 2 (O(G 1 , G 2 )) is automatically unitary. Identifying A i with its corresponding matrix in M n 2 with respect to the basis {f j } n 2 j=1 , one can further check that equation (79) is equivalent to the identity (81): indeed, (81) follows by comparing the corresponding coefficients, and reversing these arguments shows that relations (81) and (79) are equivalent. Note that if {Λ(g j )} n 2 j=1 ⊂ L 2 (M X ) is another orthonormal basis and U ∈ M n 2 is unitary such that Λ(g j ) = Σ n 2 k=1 U k,j Λ(f k ), j = 1, . . . , n 2 , then ρ(g i ) = Σ n 2 j=1 g j ⊗ ((U * ⊗ 1)P (U ⊗ 1)) j,i .
For the remainder of this section, we make the underlying assumption that the C * -algebra O(G 1 , G 2 ) is non-trivial.
Proof. We verify that P t = (p j,i ) n 2 i,j=1 is unitary. By the previous remark we may assume that if (i, j) = (k, l) and zero otherwise; thus, W * W = I n 2 . As ρ is * -preserving, we obtain for all i, l = 1, . . . , n 2 ; equivalently, P (W ⊗ 1) = (W ⊗ 1)P t * . It follows that P t * = (W −1 ⊗ 1)P (W ⊗ 1), and hence Let G r = (M X , tr, A r ) be an algebraic quantum graph, r = 1, 2. We will write S ′ r for the space corresponding to G r via (74), r = 1, 2. We say [9, Definition 4.4] that G 1 and G 2 are quantum commuting isomorphic, denoted G 1 ≃ qc G 2 , if the C * -algebra O(G 1 , G 2 ) admits a tracial state, say τ . We assume, unless specified otherwise, that G 1 ≃ qc G 2 . Let H be the Hilbert space, arising from the GNS construction applied to τ and, by abuse of notation, continue to write p i,j for the image of the corresponding canonical generator of O(G 1 , G 2 ) under the * -representation arising from τ . By (81), we have We view P = (p i,j ) n 2 i,j=1 as an operator on L 2 (M X ) ⊗ H and note that, by (78), we have Moreover, for a, d ∈ M X and ξ ∈ H we have and hence (84) P (a ⊗ 1)P = ρ(a), a ∈ M X , as maps on L 2 (M X ) ⊗ B(H). We define P̃ ∈ B(L 2 (M X ) ⊗ H) by letting Using leg-notation, we write P 2,3 and P 1,3 for the corresponding operators on L 2 (M X ) ⊗ L 2 (M X ) ⊗ H, arising from P .
Proof. Let x, y ∈ M X . We have and hence It follows that we obtain The statements now follow by linearity from the definition of U G 1 .
Let N ⊆ B(H) be a von Neumann algebra, equipped with a faithful trace τ̃ , and let U = (u i,j ) i,j ∈ M n 2 (N ) be a bi-unitary block operator matrix (with entries in N ). Suppose that Γ : M n 2 ⊗ M n 2 → M n 2 ⊗ M n 2 is a QNS correlation, given by We let Γ̃ : M n 2 ⊗ M n 2 → M n 2 ⊗ M n 2 be the unital completely positive map, given by Γ̃ ) and is hence a quantum commuting QNS correlation. We remark that, as can be verified in a straightforward way, if σ : M n 2 ⊗ M n 2 → M n 2 ⊗ M n 2 is the map, given by σ(ǫ k,k ′ ⊗ ǫ l,l ′ ) = ǫ k ′ ,l ⊗ ǫ l ′ ,k , then Γ̃ = σ • Γ * • σ.
We call two quantum pseudo-graphs W 1 and W 2 qc-pseudo-isomorphic if there exists Γ ∈ Q bic qc of the form described in the previous paragraph, such that (i) Γ is a perfect strategy for W 1 → W 2 , and (ii) Γ̃ is a perfect strategy for W 2 → W 1 .
Let P Ũ r be the projection onto Ũ r , r = 1, 2. Then condition (86) implies that The arguments in the proof of Theorem 7.4 (see also the subsequent Remark) now imply that Γ is a perfect strategy for the quantum graph homomorphism game Ũ 1 → Ũ 2 .
Remark 8.10. For a classical graph G with vertex set X, let A G : M X → M X be Schur multiplication map against the adjacency matrix of G. Then (M X , tr, A G ) is an algebraic quantum graph. Let Then W G is a quantum pseudo-graph in L 2 (M X ) ⊗ L 2 (M X ). Let G 1 , G 2 be classical graphs with vertex set X. We have the following three types of quantum commuting isomorphism for the graphs G 1 and G 2 : (a) quantum commuting isomorphism in the sense of classical non-local games [1]; (b) quantum commuting isomorphism of the algebraic quantum graphs (M X , tr, A G 1 ) and (M X , tr, A G 2 ); (c) quantum commuting isomorphism in the sense of quantum non-local games (Section 7), employing the quantum pseudo-graphs W 1 and W 2 .
We have that (a) implies (b), and that (b) implies (c). We do not know if these implications are reversible.
8.2.
A partial converse. In the remainder of this section, we discuss to what extent the implication established in Theorem 8.9 can be reversed. We first note that the quantum pseudo-graphs of the form Ũ = Ũ G , for an algebraic quantum graph G = (M X , tr, A), automatically have some extra structure, and hence a full reversal of Theorem 8.9 cannot be expected. Indeed, let U = (∂ −1 ⊗ 1)(Ũ ), and recall that S ′ = Ψ −1 (U ) ⊆ B(L 2 (M X )) is an M X -bimodule. We first show that any quantum pseudo-graph Ũ , for which Ψ −1 (U ) is an M X -bimodule, arises in this way. In what follows we fix a basis {Λ(f i )} i in L 2 (M X ) when defining pseudo-graphs Ũ . Let M op X be the opposite algebra to M X . For notational simplicity, we will consider M op X as having the same underlying vector space as M X , and will denote its product by · op ; thus, a · op b = ba, a, b ∈ M X . Let (L 2 (M op X ), Λ op ) be the GNS construction applied to (M op X , tr). As ⟨Λ op (a), Λ op (b)⟩ = tr(a * · op b) = tr(ba * ) = ⟨Λ(a), Λ(b)⟩, we have that L 2 (M op X ) ⊗ L 2 (M X ) and L 2 (M X ) ⊗ L 2 (M X ) can be identified also as Hilbert spaces. Recall that L 2 (M X ) d is the Banach space dual of L 2 (M X ) (equivalently, the conjugate Hilbert space to L 2 (M X )). If A ⊆ B(L 2 (M X )) is a * -subalgebra, then the map T op → T d , where T d ξ = T * ξ, ξ ∈ L 2 (M X ), is a * -isomorphism. In what follows we will often identify T op with T d . For a linear operator T : L 2 (M X ) → L 2 (M X ), we define T̄ : L 2 (M X ) d → L 2 (M X ) d by letting T̄ ξ = T ξ, ξ ∈ L 2 (M X ). Proof. Let U = (∂ −1 ⊗ 1)(Ũ ) and S ′ = Ψ −1 (U ). By assumption, S ′ is an M X -bimodule and hence κ(S ′ ) is a π(M X ) ′ -bimodule. Under the canonical bijection between B(L 2 (M X )) and L 2 (M X ) d ⊗ L 2 (M X ), the π(M X ) ′ -bimodule κ(S ′ ) corresponds to the (π(M X ) ′ ) op ⊗ π(M X ) ′ -invariant subspace U ′ . Thus it gives rise to the projection e ∈ M op X ⊗ M X onto U ′ .
By Lemma 8.3, S ′ is self-adjoint and hence so is κ(S ′ ), which implies, again by Lemma 8.3, that e = f(e) and J 0 (U ′ ) = U ′ .
Let A : L 2 (M X ) → L 2 (M X ) be the linear map corresponding to e as in Remark 8.12. We have that κ(S ′ ) is the π(M X ) ′ -bimodule generated by A. It follows that S ′ is the π(M X )-bimodule generated by A. In fact, since κ(π(M X ) ′ ) = π(M X ), it suffices to verify that JA * J = A. Write Thus Ψ(JA * J) = m i=1 λ i x i ⊗ x * i = f(e). As e = f(e), we get Ψ(JA * J) = Ψ(A), implying that JA * J = A.
Finally, reversing arguments in Proposition 8.4 we see that skewness of U implies that m(A ⊗ 1)m * = 0, showing that A is a quantum adjacency matrix. Letting G = (M X , A, tr), we have thatŨ =Ũ G .
Theorem 8.14. Let G r = (M X , tr, A r ) be an algebraic quantum graph, r = 1, 2. Let N be a tracial von Neumann algebra and U be a bi-unitary with entries in N giving rise, via (85), to a QNS correlation Γ that implements a qc-pseudo-isomorphism betweenŨ G 1 andŨ G 2 . Assume that U is the unitary implementation of a trace-preserving * -homomorphism ρ : M X → M X ⊗ N . Then U (A 1 ⊗ I) = (A 2 ⊗ I)U and hence G 1 ≃ qc G 2 .
The proof of Theorem 8.14 uses arguments from [15] and some auxiliary statements which we now establish. Set H = L 2 (M X ) (equipped with the inner product associated with tr). We identify L 2 (M op X ) with L 2 (M X ) d via the unitary map Λ op (x) → Λ(x * ).
We write Λ̃ : B(H) → L 2 (B(H)) for the GNS-map corresponding to the (non-normalised) trace Tr. We have We now fix algebraic quantum graphs, G r = (M X , tr, A r ), r = 1, 2, a von Neumann algebra N and a bi-unitary U as in the statement of Theorem 8.14. Assume that N acts on a Hilbert space K. Let e r ∈ M op X ⊗ M X be the projection associated with the adjacency matrix A r : M X → M X of G r via (75), r = 1, 2 (see the paragraph after the proof of Theorem 8.9), and let p r be the orthogonal projection from the Hilbert space L 2 (B(H)) (equipped with the inner product corresponding to Tr) onto its subspace Λ̃(S ′ r ), r = 1, 2. The following lemma specialises [15,Lemma 9.17]; we include a direct proof for the convenience of the reader. where the latter action is that of M op X ⊗ M X on L 2 (M op X ) ⊗ L 2 (M X ). Thus (91) ω(Λ̃(S ′ r )) = (M op X ⊗ M X )(Λ op ⊗ Λ)(e r ). As e r ∈ M op X ⊗ M X , identifying it with its image under the map π op ⊗ π (which acts on L 2 (M X ) d ⊗ L 2 (M X )), for a ⊗ b, x ⊗ y ∈ M op X ⊗ M X , we obtain that Let Ũ : L 2 (B(H)) ⊗ K → L 2 (B(H)) ⊗ K be the operator, given by For a Hilbert space L, let j : L → L d be the anti-linear isomorphism, given by j(g) = ḡ, and let R : B(L) → B(L d ) be the map, given by R(x) = jx * j, x ∈ B(L). Note that, if (g i ) i is an orthonormal basis for L, ǫ i,j ∈ B(L) are the matrix units corresponding to (g i ) i , and {ǭ j,i } is the matrix unit system for B(L d ) with respect to the orthonormal basis (ḡ i ) i , then R(ǫ i,j ) = j(g i g * j ) * j = j(g j g * i )j = j(g j )j(g i ) * = ǭ j,i .
In the following, we let V = (R ⊗ 1)(U * ). Thus, if U = (u i,j ) n 2 i,j=1 with respect to the orthonormal basis {Λ(f i )} n 2 i=1 of L 2 (M X ), then V is the operator on L 2 (M X ) d ⊗ K whose matrix with respect to the orthonormal basis is (v i,j ) n 2 i,j=1 := (u * i,j ) n 2 i,j=1 .
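The identity v i,j = u * i,j can be sanity-checked numerically in the scalar toy case N = C, where the antilinear map j becomes entrywise complex conjugation and R reduces to plain matrix transposition. This is only a finite-dimensional illustration of the bookkeeping, not part of the proof:

```python
import numpy as np

rng = np.random.default_rng(0)
n2 = 4
# toy block matrix with scalar entries (the case N = C)
U = rng.normal(size=(n2, n2)) + 1j * rng.normal(size=(n2, n2))

def R(x):
    # R(x) = j x* j; with j = entrywise conjugation this reduces to transposition
    return x.T

V = R(U.conj().T)  # V = (R (x) 1)(U*) in the scalar case
# entrywise: v_{i,j} = conj(u_{i,j}), i.e. v_{i,j} = u_{i,j}^*
```

Here `R(U.conj().T)` first forms the adjoint U* and then applies R, so V carries exactly the conjugated entries of U in the same positions.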
Proof. For 1 ≤ s, t ≤ n 2 and ξ ∈ K we have The statement follows by linearity.
Proof of Theorem 8.14. We recall that the von Neumann algebra N acts on the Hilbert space K, and that p r is the orthogonal projection from the Hilbert space L 2 (B(H)) (equipped with the inner product coming from Tr) onto Λ̃(S ′ r ). By (90), and hence Ũ (p 1 ⊗ 1) = (p 2 ⊗ 1)Ũ (p 1 ⊗ 1).
"year": 2023,
"sha1": "55d26ab9065a63b351f5d37c1a2ecc66d6101efa",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "55d26ab9065a63b351f5d37c1a2ecc66d6101efa",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
A simple protocol has been developed for the quantification of trace level arsenic through methyl red bromination. The proposed method is based on the oxidation of arsenic(III) to arsenic(V) by bromine and the reaction of the residual bromine with methyl red to form colorless bromomethyl red. As the concentration of arsenic increases, the bleaching of the dye decreases due to bromine consumption. Measuring the absorbance of the unreacted methyl red at 515 nm forms the basis of the arsenic quantification. The molar absorptivity of this method has been found to be 2.25 × 10 L/mol/cm. The method obeys Beer's law in the concentration range 0 - 0.25 μg/mL. The Sandell sensitivity and the limit of detection (LOD) were found to be 0.03 μg/mL/cm and 0.03 μg/mL respectively. The relative standard deviation has been found to be 0.35% at 1.0 μg/mL. The reaction conditions have been optimized and the interference due to various common cations and anions was studied. The proposed method has been successfully applied to the determination of trace level arsenic in various environmental samples like water, soil and vegetable samples.
Introduction
Arsenic is highly toxic and has been identified as a public health problem due to its severe toxicity even at low exposure levels and its widespread presence in the environment [1]. This element has been classified as a group A carcinogen by the USEPA (United States Environmental Protection Agency) as well as the IARC (International Agency for Research on Cancer) [2]. Living organisms are generally exposed to this element primarily through food and water. Chronic exposure to arsenic can cause a variety of adverse health effects such as respiratory, cardiovascular, genotoxic, mutagenic and carcinogenic effects, as well as dermal changes like melanosis, leukomelanosis and hyperkeratosis [2,3]. The major sources of this element include pigments, insecticides, herbicides, industrial production of metals, and burning of coal and fossil fuels. Arsenic compounds are used in wood preservatives, glass manufacture, alloys, electronics, catalysts, feed additives and veterinary chemicals. Recent clinical investigations reveal that it can be used as an in vitro antileukemic drug in the form of (2-phenyl-[3,2,1]dithiaarsolan-4-yl) methanol [4]. Arsenic contamination of natural water is a worldwide problem, and its removal has become a challenge for scientists in recent years. It has been reported as a ground water contaminant in several countries including Mexico, Argentina, Poland, Canada, Hungary, Japan, Bangladesh and West Bengal of India [5][6][7].
The primary maximum contaminant level (MCL) for total arsenic in drinking water set by the USEPA and the World Health Organization (WHO) is as low as 10 ppb [8]. Knowledge of the speciation of arsenic in natural waters is an important task because the bioavailability and the physiological and toxicological effects of arsenic depend on its chemical form. Some arsenic species identified in water are arsenite, arsenate, monomethylarsonic acid, dimethylarsinic acid, etc. Speciation analysis involves the use of analytical methods that can provide information about the concentrations of the different physico-chemical forms of the element as well as its total concentration in the sample. Speciation of arsenic in environmental samples has gained great significance in recent years, as the toxic effects of arsenic are related to its oxidation state. Arsenic generally occurs in the environment in different oxidation states such as As(V), As(III), As(0) and As(-III). Among these, As(III) is reported to be 25 - 60 times more toxic than As(V) and several hundred times more toxic than organoarsenicals [9]. This might be because As(III) cannot be easily adsorbed or precipitated from natural waters owing to its stability and solubility compared to As(V). The ability of As(III) to react with sulfhydryl groups, thereby increasing its residence time in the body, may also be a reason, whereas organoarsenicals are excreted easily from the body. These facts indicate that monitoring As(III) at trace level should be a priority, and developing simple protocols for its quantification is a challenging task. Speciation of arsenic in water as well as in other environmental samples at trace level has received significant focus from the scientific community in recent years [2].
A wide variety of analytical methods for arsenic quantification have been reported. Among them, atomic absorption spectrophotometry (AAS) [10], inductively coupled plasma (ICP) and HPLC [11] methods are popular, but they require either expensive instrumentation or the generation of highly toxic arsine gas. Other methods such as voltammetry [12], neutron activation analysis [13], X-ray fluorescence [14], differential pulse polarography [15] and ion chromatography [16] are not used in routine analysis. Moreover, the viability of many of these techniques for separating and determining arsenic species suffers from time-consuming or relatively complicated sample preparation procedures. Spectrophotometric reagents used for arsenic determination include silver diethyldithiocarbamate, in which toxic arsine gas is generated and toxic organic solvents like pyridine/chloroform are used. The original arsenomolybdenum blue method is highly sensitive, but it suffers severely from silicate and phosphate interferences [17]. Complexation of arsenomolybdate with catechol or thiol and its subsequent ion-pair formation with a triphenylmethane or fluorescent dye facilitates the extraction of the resulting complex into the organic layer [18]. Though these methods are highly sensitive, they involve the use of organic solvents like benzene, which is highly carcinogenic. Hence there is a need to develop a simple and sensitive method to quantify trace level arsenic in a variety of environmental matrices.
Herein we report a sensitive and simple method for the determination of different forms of arsenic, mainly the trivalent and pentavalent ionic forms, based on the reaction of arsenic with bromine and the subsequent reaction of the residual bromine with methyl red dye to give colorless bromomethyl red. The proposed method has been successfully applied to determine trace level arsenic in water, soil and vegetable samples.
Instrumentation
Absorbance measurements were made using a Shimadzu scanning spectrophotometer (model UV-3101PC) with 1 cm quartz cuvettes, and all pH measurements were carried out using a Control Dynamics digital pH meter (model APX 175). ICP-AES analysis was carried out using a Jobin Yvon Horiba ICP-AES instrument (model Ultima 2).
Chemicals and Reagents
All chemicals used were of analytical grade. Distilled water from a Gram-miniquartz distillation unit was used throughout. Sulfuric acid, nitric acid, hydrochloric acid, perchloric acid and hydrogen peroxide (30%), all purchased from Merck (AR grade), were used. Analar grade sodium arsenite, sodium arsenate, ascorbic acid, potassium iodide, potassium bromide, potassium bromate and methyl red were procured from SD Fine-Chem Limited, Mumbai. Stock solutions of arsenic(III) and arsenic(V) (1000 ppm) were prepared by dissolving appropriate quantities of sodium arsenite and sodium arsenate in double distilled water. Working standard solutions were prepared by appropriate dilution of the stock solutions. Sulphuric acid (4.25 M) was prepared by diluting 59 mL of concentrated acid to 250 mL. Methyl red (0.01%) was prepared by dissolving 0.1 g of the dye in 1 mL of 4.25 M sodium hydroxide and diluting it to 100 mL; 10 mL of this solution was diluted to 100 mL after acidifying it by adding 1 mL of 4.25 M sulphuric acid. A bromate-bromide mixture for bromine generation was prepared by dissolving 0.05 g of potassium bromate and 0.5 g of potassium bromide in water and diluting to 500 mL. To generate 0.014 mM bromine, 40 mL of 4.25 M sulphuric acid was added to 10 mL of the above bromate-bromide mixture and diluted to 100 mL. Ascorbic acid (1%) was prepared weekly by dissolving 1 g in 100 mL distilled water and stored in a refrigerator. Potassium iodide (10%) was prepared by dissolving 10 g of the salt in 100 mL distilled water.
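The acid dilutions in this recipe follow the usual C1V1 = C2V2 rule. As a quick cross-check, assuming concentrated sulphuric acid is about 18 M (an assumption, not stated in the text), diluting 59 mL to 250 mL indeed gives roughly the quoted 4.25 M:

```python
def dilute(c_stock, v_stock, v_final):
    """C1*V1 = C2*V2  ->  concentration after dilution."""
    return c_stock * v_stock / v_final

# 59 mL of concentrated H2SO4 (assumed ~18 M) diluted to 250 mL
c_h2so4 = dilute(18.0, 59.0, 250.0)  # ~4.25 M, matching the recipe above
```

The same helper applies to the other preparations, e.g. diluting the 1000 ppm arsenic stock to the working-standard range.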
Arsenic in Water Samples
Ground water contamination with arsenic depends mainly on the nature of the soil as well as on human activity within the region. Arsenic-based paints have been extensively used in painting clay idols throughout the world. These idols are submerged in the waters of lakes or specified ponds after their procession during the festival season in India and some other parts of the world. When these clay idols are submerged, the water bodies as well as the soil sludges get contaminated with arsenic. Water samples from these ponds or lakes (contaminated lake) were collected and analyzed for arsenic content.
Arsenic in Soil Samples
The soil samples were collected from an agricultural field as well as from the pond bed where painted clay idols were immersed. The soil sludge samples collected from the contaminated lake bed were analyzed.
Arsenic in Vegetable Samples
The plant uptake capacity for arsenic depends mainly on the level of arsenic present in the soil as well as on the use of arsenic-contaminated water. Therefore, tomato leaves and spinach leaves collected from the field were analyzed for their arsenic content.
Recommended Procedure
Aliquots of standard arsenic(III) solutions (overall concentration in the range 0.05 - 0.25 μg/mL) were transferred into 10 mL calibrated flasks. Then 3 mL of 0.014 mM bromine solution and 1.2 mL of 4.25 M sulphuric acid were added to these flasks and the reaction mixture was shaken gently. Then 0.4 mL of 0.01% methyl red was added and the solution was diluted up to the mark with distilled water. The absorbance values were measured at 515 nm against a reagent blank.
ICP-AES Method
The arsenic content in the natural samples was also determined by the ICP-AES technique in order to compare with the results of the proposed method. Aliquots of standard arsenic(III) solutions (concentration range 0.01 - 1.0 μg/mL) were transferred into 10 mL calibrated flasks, made up to the mark with distilled water and analyzed by the ICP-AES method for the construction of a calibration plot.
Arsenic in Spinach Leaves (Spinacea oleracea)
The spinach leaves were dried under sunlight and ground into a fine powder. 100 g of the powdered and sieved sample was placed in a beaker. 10 mL each of nitric and sulfuric acids were added and the mixture was heated to 100˚C for 20 min in a fume hood. The solution was cooled, 10 mL of perchloric acid was added, and the mixture was heated again in the fume hood for 5 min until the dense fumes of sulphur dioxide disappeared completely. The sample was then cooled and 1 mL of HCl was added to remove any heavy metal ions present in the sample. The solution was heated for 15 min and washed with distilled water. It was then transferred to a 50 mL volumetric flask and diluted to the mark with distilled water. Aliquots of 5 mL of the sample were used for the estimation of As(III) and As(V) by the recommended procedure as well as by the ICP-AES method [19].
Arsenic in Tomato Leaves (Lycopersicum esculentum)
About 100 g of wet tomato leaves were acid digested by following the procedure given above and the solution was diluted to 100 mL. This solution was evaporated to reduce the volume to 10 mL for preconcentration of As. Arsenic species, i.e. the trivalent and pentavalent forms, in these vegetable samples were estimated by following the procedure described above [20].
Arsenic in Water Samples
Water samples from tube wells as well as ponds were collected in polyethylene containers. The collected water samples were filtered using Whatman filter paper to remove any suspended matter and analyzed for arsenic(III) by the proposed as well as standard methods. Similarly, another aliquot of the water sample was treated with a few drops of potassium iodide (10%) in the presence of hydrochloric acid (5 M HCl) to reduce pentavalent arsenic to the trivalent form. The total arsenic content was then analyzed, the liberated iodine having been destroyed by the addition of ascorbic acid. The difference between these two measurements provides As(V).
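The As(V)-by-difference bookkeeping used here and in the following sections can be written as a small helper; the function name and the clamping of negative differences to zero are our own illustrative conventions, not part of the published protocol:

```python
def arsenic_speciation(as3_direct, as_total_after_ki):
    """As(III) measured directly; total As measured after KI/HCl reduction;
    As(V) obtained as the difference of the two measurements."""
    as5 = as_total_after_ki - as3_direct
    # clamp small negative differences (measurement noise) to zero
    return {"As(III)": as3_direct, "As(V)": max(as5, 0.0)}
```

For example, a sample reading 0.2 μg/mL As(III) directly and 0.5 μg/mL total arsenic after reduction would be reported as 0.3 μg/mL As(V).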
Agricultural Soil
Soil samples collected from the agricultural field were milled to break the lumps and sieved. The sieved samples were ground into a more homogeneous powdered form. Then 1 g of powdered sample was weighed and transferred into a 100 mL beaker. To this, 2 mL of water in which 0.5 g of KOH pellets had been dissolved was added, and the mixture was heated on a hot plate until the water evaporated and the residue fused for 2 min. It was then diluted up to 50 mL. A 5 mL aliquot of the diluted sample solution was used to determine As by following the procedure given above [20].
Soil Sludge
A soil sample was collected from the pond bed where painted clay idols were immersed after the festival procession and stored in polyethylene bags. The collected soil sludge was air dried, and a known weight (100 g) of sample was placed in a 250 mL beaker and extracted four times with 5 mL portions of concentrated HCl. The combined extract was boiled for about 30 min; the solution was cooled and diluted to 50 mL with distilled water. A 5 mL aliquot of the sample was used for As(III) determination by the proposed method and also by the standard method. Another 5 mL aliquot was used to determine As(V) by reducing it to As(III) by the addition of a few drops of 10% KI and 5 M HCl. The liberated iodine was destroyed by the addition of ascorbic acid and the solution was analyzed by the procedure discussed above [21].
Results and Discussion
Bromine reacts with the methyl red dye to form colorless bromomethyl red, and the reaction of bromine with the dye is quantitative under acidic conditions. This property has been exploited to develop a simple and sensitive method for the quantification of arsenic at trace level. Initial studies were carried out by oxidizing As(III) to As(V) with bromine, and the residual bromine was made to react with methyl red dye to form the colorless bromo-substituted dye. As the As(III) concentration increases, the consumption of bromine increases, which decreases the bromination of the dye. Hence the absorbance of the methyl red increases proportionately with the arsenic concentration. Preliminary studies were carried out by taking 25 mL calibrated flasks containing 5 mL of 0.014 mM bromine and 3 mL of 4.25 M sulfuric acid. 10 µg of arsenic(III) was added and the flask was gently shaken for one minute, followed by the addition of 1 mL of 0.01% methyl red solution. The mixture was then made up to the mark with distilled water and the absorbance values were measured against the reagent blank at 515 nm. The experimental parameters, namely the bromine concentration, reaction acidity, dye concentration and the effect of interfering ions in the determination of arsenic by the bromination of methyl red, were optimized to obtain the maximum sample absorbance and minimum blank value.
Effect of Reaction Acidity
The bromination of methyl red to form colorless bromomethyl red is quantitative in acidic medium. Hence sulphuric acid was used to provide the acidity required to optimize the bromination of the dye. Varying volumes of 4.25 M sulphuric acid were used to provide an overall acidity ranging from 1.8 to 3 M in 10 mL of solution. In these experiments, 2 mL of bromine, varying volumes of 4.25 M sulfuric acid, 1 µg of arsenic solution and 0.4 mL of dye (0.01%) were added. The absorbance of the solutions was measured at 515 nm. A constant absorbance value for the sample with minimum blank was observed in the acidity range between 2.8 and 3.6. Hence an overall acidity of 3 M was maintained by the addition of 1.5 mL of 4.25 M sulfuric acid (Figure 1).
Effect of Bromine
The effect of bromine concentration was studied using 0.014 mM bromine solution, prepared from the bromate-bromide mixture and sulphuric acid. The optimum concentration of bromine required for the reaction was established by taking 1.5 mL of 4.25 M sulphuric acid, varying volumes of 0.014 mM bromine, 1 µg of arsenic solution and 0.4 mL of dye (0.01%) in 10 mL volumetric flasks. The measured absorbance values were found to be constant in the bromine volume range between 2.5 and 4.0 mL. Hence 3 mL of the bromate-bromide mixture was sufficient to provide the bromine required to brominate the dye to its colorless form (Figure 2).
Effect of Reaction Time and Temperature
The bromination of methyl red is instantaneous and the reaction was carried out at room temperature; the effect of reaction time on the bromination of methyl red was nevertheless studied up to 30 min. There was no variation in the sample absorbance value during the time interval studied, hence the effect of reaction time on sample absorbance is not discussed further here. In order to evaluate the suitability of the proposed method for the determination of arsenic species in water samples, an interference study was carried out. The interfering ions were added in the form of their respective salts. Initially the interference of several anions like chloride, fluoride, sulfate, nitrate, nitrite and phosphate was studied. The tolerance limit for chloride was found to be 3000 µg, whereas for all other ions except fluoride it was 1000 µg. The interference due to fluoride was overcome by precipitating it as silver fluoride and removing the precipitate by centrifugation, up to the 200 μg level. The tolerance limits of various cations like calcium, magnesium, ferrous and ferric iron, and of other toxic metal ions like lead and chromium, were also studied. The tolerance limit for calcium was above 3000 µg, whereas for magnesium, ferrous and ferric iron, lead and chromium it was up to the 1000 µg level. The effects of other cations like zinc, nickel, cobalt, aluminium, cadmium, copper, potassium and silver were also studied. The tolerance limits of the various cations and anions are given in Table 1.
Species Responsible for Color
Bromine oxidizes arsenic(III) to arsenic(V) and the residual bromine reacts with the methyl red, forming colorless bromomethyl red. As the arsenic concentration increases, the residual bromine concentration decreases, and thereby the formation of bromomethyl red decreases. Hence the absorbance of the dye increases linearly with the increase in arsenic concentration. The absorbance of the unreacted methyl red was measured at 515 nm and correlated with the arsenite concentration (Scheme 1 and Figure 3).
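The bleaching logic can be summarised in a toy mass-balance model. The sketch below assumes 1:1 As(III):Br2 consumption (the two-electron oxidation of As(III) to As(V)) and 1:1 dye bromination, in arbitrary mole units; it illustrates the trend of absorbance rising with arsenic, and is not a fitted chemical model from the paper:

```python
def residual_absorbance(as3_mol, br2_mol, dye_mol, eps_dye=1.0, path_cm=1.0):
    """Toy model: As(III) consumes bromine 1:1 (assumed); leftover bromine
    bleaches methyl red 1:1 (assumed); absorbance tracks unbleached dye."""
    br2_left = max(br2_mol - as3_mol, 0.0)   # bromine remaining after oxidising As(III)
    dye_left = max(dye_mol - br2_left, 0.0)  # dye remaining after bleaching
    return eps_dye * dye_left * path_cm      # Beer-Lambert for the remaining dye
```

In this picture the signal grows linearly with As(III) until the added bromine is fully consumed, after which the absorbance plateaus at the total-dye value, consistent with a finite linear range.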
Analytical Merits of the Method
The proposed method is simple and does not require any extraction step to lower the detection limit, unlike other methods. It shows the least interference from common ionic species like phosphate, chloride, fluoride etc. The Sandell sensitivity and the limit of detection of the method were found to be 0.03 µg/mL/cm and 0.03 µg/mL respectively. The method obeyed Beer's law in the concentration range 0 - 0.25 µg/mL. The sensitivity and detection limit of the method are very good, and it can be used as an alternative method to estimate arsenic at trace level (Table 2). The proposed method has also been compared with some of the reported methods (Table 3).
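A minimal sketch of how such a Beer's-law calibration is used in practice is given below. The absorbance readings are hypothetical illustrative numbers within the reported linear range (the real figures of merit are in Table 2), and the detection limit is computed with the common 3σ(blank)/slope convention, with an assumed blank standard deviation:

```python
import numpy as np

# Hypothetical calibration points within the reported 0 - 0.25 ug/mL range
conc = np.array([0.00, 0.05, 0.10, 0.15, 0.20, 0.25])       # ug/mL As(III)
absorb = np.array([0.010, 0.105, 0.198, 0.305, 0.402, 0.495])  # A at 515 nm

slope, intercept = np.polyfit(conc, absorb, 1)  # least-squares line A = m*c + b

def concentration(a):
    """Invert Beer's law: c = (A - b) / m."""
    return (a - intercept) / slope

s_blank = 0.002               # std. dev. of replicate blanks (assumed)
lod = 3 * s_blank / slope     # 3-sigma detection limit, ug/mL
```

An unknown sample's absorbance is then converted to concentration with `concentration(a)`, valid only inside the calibrated linear range.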
Application Study
In order to check the validity of the proposed method, it has been applied to determine arsenic levels in natural samples like water, soil and vegetable samples. The percentage recovery of spiked samples was also evaluated. The results obtained by the proposed method are in good agreement with the results of the ICP-AES method (Table 4).
Conclusion
Methyl red has been used as a chromogenic reagent for the first time to quantify trace level arsenic. The proposed method is simple because it does not require any heating or solvent extraction and has less interference from most of the common cationic and anionic species. The method tolerates fluoride up to the 50 μg level, which is a common contaminant in ground waters. It has been successfully applied to the determination of trace level arsenic in various environmental samples.
Table 1. Interference study. a Treated with AgNO 3 before adding bromine solution to precipitate fluoride as silver fluoride; the precipitate was removed by centrifugation.
"year": 2012,
"sha1": "9339e88bfe92b7e7ff0dc70183292575bef7c9a5",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=21031",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "9339e88bfe92b7e7ff0dc70183292575bef7c9a5",
"s2fieldsofstudy": [
"Chemistry",
"Environmental Science"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
Many microorganisms regulate their behaviour according to the density of neighbours. Such quorum sensing is important for the communication and organisation within bacterial populations. In contrast to living systems, where quorum sensing is determined by biochemical processes, the behaviour of synthetic active particles can be controlled by external fields. Accordingly they allow to investigate how variations of a density-dependent particle response affect their self-organisation. Here we experimentally and numerically demonstrate this concept using a suspension of light-activated active particles whose motility is individually controlled by an external feedback-loop, realised by a particle detection algorithm and a scanning laser system. Depending on how the particles’ motility varies with the density of neighbours, the system self-organises into aggregates with different size, density and shape. Since the individual particles’ response to their environment is almost freely programmable, this allows for detailed insights on how communication between motile particles affects their collective properties.
O ne of the most intriguing properties of living matter is its ability to spontaneously organise from random into complex structures on different length scales such as biofilms 1 , swarms 2,3 and flocks [4][5][6][7] . This requires communication between individual group members, which is typically realised by complex internal signal pathways. In the case of cells or bacterial colonies, communication can be achieved by extracellular signalling molecules which are produced, released and sensed by all group members 8,9 . Such biochemical communication, socalled quorum sensing, enables organisms to measure their local population density and to regulate their response accordingly (quorum sensing should be distinguished from chemotaxis describing the response of organisms in concentration gradients). The first example of quorum sensing was observed in the bioluminescent bacteria Aliivibrio fischeri which start to luminesce once their population density exceeds a certain density threshold 10 . By now, many other examples of quorum sensing have been found and it is considered to be a generic cell-to-cell communication mechanism, which is relevant, e.g., for the secretion of virulence factors 11 , biofilm formation 12 and motility control 10,13 .
In contrast to living systems, where the organism's response to a molecular concentration is determined by internal signal pathways, the motility of synthetic self-propelling active particles (APs) can be externally adjusted, e.g. by optical 14,15 , electrical 16,17 or thermal 18,19 fields. Experiments with such systems show a wealth of dynamical states ranging from living crystals 20 to phase separation [20][21][22] and swarming 16 that can be controlled by geometry 23 or boundary conditions 24 . In addition to suspensions with homogeneous motility, theoretical studies also considered APs whose motility and orientation changes upon variations of their local density [25][26][27] or their own chemical concentration gradients 28 . Under such conditions, not only motility-induced phase separation but also the occurrence of moving clumps, lanes and asters has been observed.
Here we present an experimental realisation of an active suspension whose individual particle motion varies depending on its neighbouring density. This is achieved using APs whose individual motility, i.e. the magnitude of propulsion, is controlled by the intensity of an incident focused laser beam. In contrast, the propulsion direction, which is given by the particle orientation, remains unaffected by the laser illumination and undergoes free Brownian rotational diffusion. With an external feedback loop, consisting of a real-time optical particle detection algorithm that controls the position of a scanned laser beam, we are able to explore how a specific choice of a density-dependent particle motility affects the cooperative behaviour. With experiments, numerical simulations and theory we demonstrate that small variations in how particles change their motility in response to their environment can strongly affect their self-organisation.
Results
Experimental realisation. Active particles are made from silica spheres with diameter σ = 4.4 μm which are half-coated by a 30 nm carbon film and suspended in a critical mixture of water-lutidine several degrees below its lower demixing temperature T c . Upon laser illumination, the carbon caps are selectively heated above T c , which leads to local demixing of the solvent near the caps 14 . As a result of compositional flows within the solvent, the particles self-propel opposite to the cap with their speed determined by the incident laser intensity (Methods section). Because particles are separately illuminated by a scanned focused laser beam, we are able to dynamically assign each AP an individual propulsion velocity. It should be mentioned that the direction of the propulsion velocity is opposite to the cap orientation, which is undergoing free Brownian rotational diffusion. Accordingly, in our experiments, only the magnitude but not the direction of motion is controlled externally. The entire suspension is contained in a thin sample cell with height 200 μm, where particles form a two-dimensional system due to gravitational forces. To avoid variations of the total particle density within our field of view, we have applied a lateral circular confinement with reflective boundary conditions and radius R = 65 μm (Fig. 1a). However, cluster formation by quorum-sensing rules, as reported here, is also observed using periodic boundary conditions (Methods section). In the following, we investigate a suspension with constant particle density ρ 0 = 0.0092 μm −2 (corresponding to ≈15% of close packing). To obtain steady-state conditions, we have allowed the system to evolve in time for at least 30 min before taking data. For further details of the experimental setup, see Methods section.
Quorum sensing in living systems requires communication between individuals, which is achieved by the release of signalling molecules with production rate γ 29 . Because such molecules have a finite lifetime, each organism senses the molecular concentration

c_i(t) = Σ_{j≠i} c̄ e^(−r_ij/λ) / (r_ij/σ). (1)

Here, r_ij = |r_i − r_j| is the distance to the neighbour labelled j, σ the linear size of the individuals, and λ = √(D_c τ) (with D_c the diffusion coefficient of the signalling molecules and τ their lifetime) a decay length, defining the range of the concentration profile and thus the distance over which particles communicate. The prefactor is given by c̄ = γ/(4πD_c σ). To externally introduce particle communication in a suspension of synthetic APs that lack the ability to release or detect such molecules, we first determine the APs' positions in our sample at periodic time intervals. Next, we use Eq. (1) to calculate the hypothetical molecule concentration c_i(t) at each AP. To trigger a density-dependent motility response, we apply the following rule: when the concentration c_i(t) 'sensed' by an AP exceeds a threshold, i.e. c_i > c_th, it becomes non-motile (i.e. the laser illumination is set to zero and thus no self-propulsion occurs, v = 0); otherwise it is motile and propels with velocity v = v_0 (Fig. 1c, d). Such a sharp threshold is in agreement with the conditions of many living systems exhibiting quorum sensing 10,13 . In the following, the propulsion velocity was set to v_0 = 0.2 μm s −1 (Methods section). Due to diffusive and active motion, particles undergo configurational variations over time which lead to continual changes of the APs' behaviour from motile to non-motile. Particle motilities are updated every 500 ms (Methods section). During this time interval, particles typically move by a distance <5% of their diameter.
Note that the reduction of the motility update time by a factor of 100 yields identical results as confirmed by numerical simulations (Supplementary Figure 1).
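As an illustration, the quorum-sensing rule of Eq. (1) can be sketched in a few lines of code. All parameter values below (σ, λ, the prefactor c̄, the threshold passed by the caller) are illustrative, not the calibrated experimental ones:

```python
import math

# Illustrative parameters loosely following the paper (not calibrated values).
SIGMA = 4.4           # particle diameter in micrometres
LAMBDA = 5 * SIGMA    # decay length of the signalling field
CBAR = 1.0            # prefactor c-bar = gamma / (4 pi D_c sigma), set to 1

def concentration(i, positions):
    """Signalling concentration c_i sensed by particle i, following Eq. (1):
    sum over neighbours of c-bar * exp(-r_ij / lambda) / (r_ij / sigma)."""
    xi, yi = positions[i]
    c = 0.0
    for j, (xj, yj) in enumerate(positions):
        if j == i:
            continue
        r = math.hypot(xi - xj, yi - yj)
        c += CBAR * math.exp(-r / LAMBDA) / (r / SIGMA)
    return c

def assign_motility(positions, c_th, v0=0.2):
    """Quorum-sensing rule: non-motile (speed 0) above threshold, else speed v0."""
    return [0.0 if concentration(i, positions) > c_th else v0
            for i in range(len(positions))]
```

For two particles at contact distance σ, for example, each senses c = c̄ e^(−σ/λ), so lowering the threshold below that value switches both off.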
For c th = 0 (i.e. an entirely Brownian suspension), particles are homogeneously distributed. Increasing c th leads to an inhomogeneous particle distribution and the formation of clusters with growing density (Fig. 2a-c). In the following, clusters are defined by regions where the particle density ρ is 20% larger than ρ 0 . When c th = ∞ (i.e. permanently motile APs), cluster formation again disappears. This is in contrast to APs with constant motility, where cluster formation is observed only at much higher velocities and densities of APs 22,31 (Supplementary Figure 2). This clearly demonstrates that here the organisation into densely packed regions is entirely due to the presence of quorum-sensing rules. The formation of clusters under such conditions is qualitatively understood as follows: isolated and motile particles approaching dense regions slow down; as a result, they become entirely diffusive since they sense super-threshold concentrations. This facilitates the aggregation of particles, leading to cluster growth. Because particle self-propulsion is turned off when joining a cluster (in contrast to permanently motile particles), the packing within clusters is rather loose, as confirmed by the radial density profiles ρ(r), where r is measured relative to the clusters' centre of mass (Fig. 2e). Such loose packing is in strong contrast to the closely packed (even crystalline) aggregates which are observed in dense suspensions of APs with constant motility 32 . Our experimentally measured density profiles are in excellent agreement with those obtained by numerical simulations (solid lines in Fig. 2e) of a simple model neglecting hydrodynamic interactions (Methods section).
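Radial density profiles of this kind can be computed by binning particle distances from the centre of mass into annuli and dividing by the annulus areas. A minimal sketch (the bin width and input positions are hypothetical choices, not the analysis code of the paper):

```python
import math

def radial_density_profile(positions, dr, r_max):
    """Radial density profile rho(r) measured relative to the particles'
    centre of mass, as used for profiles like Fig. 2e.
    positions: list of (x, y) tuples; dr: bin width; r_max: largest radius."""
    n = len(positions)
    cx = sum(p[0] for p in positions) / n
    cy = sum(p[1] for p in positions) / n
    nbins = int(r_max / dr)
    counts = [0] * nbins
    for x, y in positions:
        r = math.hypot(x - cx, y - cy)
        b = int(r / dr)
        if b < nbins:
            counts[b] += 1
    # Normalise each count by its annulus area pi * ((k+1)^2 - k^2) * dr^2.
    return [counts[k] / (math.pi * ((k + 1) ** 2 - k ** 2) * dr ** 2)
            for k in range(nbins)]
```

Averaging such profiles over many snapshots yields the smooth ρ(r) curves shown in the figures.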
Numerical and analytical analysis. To gain further insights into the influence of the quorum-sensing rule on the collective particle behaviour, we performed numerical simulations where the concentration threshold c th and the decay length λ were systematically changed. In addition, we developed a mean-field theory from which analytical results can be obtained. It describes a passive, circular homogeneous cluster in coexistence with a motile gas due to the balance of a diffusive current away and an active current towards the cluster (Methods section). Our findings are summarised in Fig. 3a. In agreement with Fig. 2a-d, clustering occurs only within a well-defined range of concentration thresholds c th . Our numerical simulations predict cluster formation between the solid red and blue line (the latter also in agreement with the onset of cluster formation within the mean-field calculations). With increasing decay length λ, each particle senses a larger number of neighbours, i.e. a higher concentration (Eq. (1)). Accordingly, the maximal threshold c th for which only non-motile particles are observed becomes larger. The same trend also applies to the conditions where only motile particles exist. As can be seen, cluster formation occurs only within these limiting cases. With increasing c th (λ = const.) the clusters are compressed, i.e. they become denser and smaller, as seen by the open symbols in Fig. 3b, c. This behaviour is in good agreement with our experiments (closed symbols) and qualitatively reproduced by the mean-field calculations (solid line) in which fluctuations and the excluded volume are neglected. As expected, the agreement with mean-field generally becomes better with increasing decay length, since particles sense over larger distances and are thus less sensitive to fluctuations of the particle density (Supplementary Figure 3).
The upper boundary for cluster formation suggested by mean-field theory is shown as a dashed red line and largely overestimates the limit where stable clusters are formed according to our experimental and numerical data. The reason is that clusters shrink as we increase the threshold, and clusters composed of a few particles become unstable with respect to fluctuations (which are neglected in mean-field). A small density fluctuation (e.g. because one particle leaves the cluster) might lead to a drop of the chemical concentration below the threshold for other particles in the cluster, which then become motile and also leave the cluster. For small clusters, this positive feedback between density and motility fluctuations is sufficient to dissolve the entire cluster, while larger clusters remain stable. In the simulations, we observe that clusters with less than N p ≈ 65 passive particles spontaneously dissolve (and also re-form) independent of λ (Supplementary Figure 4). This effect lowers the upper boundary in Fig. 3a to much smaller concentration thresholds. Larger global densities stabilise clusters at larger thresholds and move the upper boundary closer to the theoretical limit of closely packed clusters (Supplementary Figure 5). The actual number below which clusters dissolve depends on the system size (Supplementary Figure 6). Our experiments and simulations suggest that cluster formation resulting from quorum-sensing rules requires not only the coexistence of motile and non-motile particles (Fig. 3a), but also the ability to change their motility (Fig. 4a, b). To investigate the importance of such motility changes in more detail, we performed simulations with mixtures of motile and non-motile particles but without the possibility of motility changes. Independent of the mixing ratio, no clustering is observed at our packing fraction (data not shown).
In order to quantify the motility changes, we introduce the motility change density ṅ_p↔a(r), which is the number of motile-passive change events within a circular ring at distance r from the centre of the cluster, per time and area. Figure 4c compares how ṅ_p↔a(r) and the particle density profile ρ(r) depend on the radial position within a cluster for λ = 10σ and c th = 8.6c̄. The rate ṅ_p↔a(r) becomes largest at the interface between the dense cluster and the (dilute) gas, because configurational changes of the APs are most pronounced near the interface. Accordingly, the distributions are shifted to smaller radii (Fig. 4d) when the cluster size becomes smaller by increasing the sensing threshold c th (cf. Fig. 2a-d). In Fig. 4e, we have plotted the total rate of motility changes Ṅ_p↔a (the density ṅ_p↔a(r) integrated over the whole area of the system) vs. c th . Obviously, induced cluster formation requires a minimal rate of motility changes. Starting from a homogeneous particle distribution (c th = 0), a growing threshold c th leads to clusters with increasing density (cf. Fig. 3a). Accordingly, the deviation of the particle distribution from thermal equilibrium increases. Sustaining such nonequilibrium conditions requires an increasing amount of motility control, i.e. switching rate Ṅ_p↔a. As the clusters become unstable with respect to fluctuations, the switching rate drops and becomes zero again when all particles remain active. The behaviour shown in Fig. 4e is robust with respect to changes of λ (Supplementary Figure 7) and thus demonstrates that the dynamics of motility changes plays an important role for cluster formation induced by quorum-sensing rules.
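In a simulation, the motility change density can be estimated by counting switch events between two consecutive motility updates in radial annuli. The sketch below uses hypothetical speed lists as input and returns raw event counts per annulus; dividing by annulus area and update interval would give a density:

```python
import math

def count_switches(prev_speeds, curr_speeds, positions, centre, dr, r_max):
    """Count motile<->passive switch events per radial annulus between two
    consecutive motility updates. A speed of 0.0 marks a passive particle.
    Returns raw counts; divide by annulus area and update interval for a
    density comparable to the motility change density of the text."""
    nbins = int(r_max / dr)
    events = [0] * nbins
    for v_old, v_new, (x, y) in zip(prev_speeds, curr_speeds, positions):
        if (v_old == 0.0) != (v_new == 0.0):      # motility state changed
            r = math.hypot(x - centre[0], y - centre[1])
            b = int(r / dr)
            if b < nbins:
                events[b] += 1
    return events
```

Accumulating these counts over many updates and normalising reproduces the interface-peaked profiles described for Fig. 4c, d.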
We have also studied how the collective behaviour changes upon further variations of the particles' response to their environment. Figure 5a shows a cluster that is formed by introducing a second concentration threshold c th,2 > c th above which particles recover their motility 27 . This leads to enhanced active particle motion near the cluster centre and preferential escape from this region. As a result, the particle density near the centre decreases which leads to a 'ring'-like structure.
We also considered the situation where the concentration profile of signalling molecules becomes non-isotropic, e.g. due to thermophoretic forces induced by external temperature gradients. To account for the angle-dependence of the concentration profile around each particle j, we have modified Eq. (1) by an angle-dependent weighting factor f(Θ_ij), where Θ ij denotes the angle between the connecting vector of particles i and j and the x-axis. For f(Θ) = cos 4 (Θ) we find an elliptically elongated shape, whose axis is rotated by 90° for f(Θ) = sin 4 (Θ) (Fig. 5b, c). Choosing f(Θ) = cos 4 (2⋅Θ) we obtain an almost quadratically shaped cluster, as shown in Fig. 5d.
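The anisotropic variant can be sketched by inserting the weighting factor f(Θ) into the concentration sum of Eq. (1). Parameter values are the same illustrative ones as in the earlier sketch, and the exact normalisation of f is an assumption:

```python
import math

SIGMA, LAMBDA, CBAR = 4.4, 22.0, 1.0   # illustrative values as before

def weighted_concentration(i, positions, f):
    """Concentration at particle i with an angle-dependent weight f(theta_ij),
    e.g. f = lambda t: math.cos(t) ** 4, used to shape clusters as in Fig. 5."""
    xi, yi = positions[i]
    c = 0.0
    for j, (xj, yj) in enumerate(positions):
        if j == i:
            continue
        dx, dy = xj - xi, yj - yi          # connecting vector from i to j
        r = math.hypot(dx, dy)
        theta = math.atan2(dy, dx)         # angle to the x-axis
        c += f(theta) * CBAR * math.exp(-r / LAMBDA) / (r / SIGMA)
    return c
```

With f(Θ) = cos⁴Θ, neighbours along the x-axis contribute fully while neighbours along y contribute nothing, which is what elongates the cluster.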
Discussion
We have demonstrated the collective behaviour of synthetic particles which interact via quorum-sensing rules. Contrary to living systems, where the interpretation of signalling molecules can be rather complex 33 , in our experiments well-defined perception-response relations are imposed externally by a feedback loop. Because our approach does not rely on specific particle interactions, this enables us to freely vary not only the type of stimuli but also how particles respond to them. In addition to a density-dependent incentive which controls the particles' motility, other types of particle responses, including active torques 34 , but also a time delay between a stimulus and the particles' response and non-reciprocal interaction rules, can be realised.
Finally, since our experiments are performed at low Reynolds number, akin to the surroundings of bacteria, comparison of the collective behaviour of synthetic and living systems may allow us to unveil what type of information must be exchanged, and over what distances, to initiate collective behaviour.
Methods
Experimental details. In our experiments we use silica particles with diameter σ = 4.4 μm and a 30-nm-thick carbon film on one hemisphere, which are suspended in a critical mixture of water-lutidine at temperature T = 25 °C. Particles are individually illuminated with a scanned laser beam (beam waist w = 5 μm) aiming at the centre of each particle. Under such conditions, particles self-propel with velocity v opposite to the orientation of the capped hemisphere, which is undergoing rotational Brownian diffusion. It is important to notice that this reorientation is not altered by the illumination. The translational and rotational diffusion coefficients have been determined to D 0,exp = (0.0208 ± 0.0012) μm 2 s −1 and D R,exp = (112.6 ± 26.1 s) −1 by evaluating the mean-square displacements in a dilute system 35 . While D R,exp agrees well with the theoretical value, D 0,exp is about 50% below the bulk Stokes-Einstein value. The reduction is due to the enhanced viscous friction near a surface 36 and in agreement with previous studies 14 .
To avoid changes in the particle density during experiments, we employ reflective boundary conditions at the edge of a circular confinement with radius R = 65 μm by the application of an active torque which leads to a particle reorientation 14,34 . This is accomplished by displacing the illuminating laser beam relative to the particle centre by ≈2.6 μm, resulting in a local intensity gradient. This causes an effective torque and thus particle reorientation. Such torques are applied to particles when leaving the circular confinement until their swimming direction points towards the confinement centre. To reduce variations of the calculated concentration by particles leaving and re-entering the confinement, we take into account all particles which leave the confinement by <10 μm. The confinement contains on average 122 particles, corresponding to a density of ρ 0 = 0.0092 μm −2 .

[Fig. 4 caption: Cluster formation by motility switching. a Schematic illustration of the influence of motility switches. An active particle (red) approaching a cluster of three passive particles (blue) becomes passive, which favours joining the cluster. Conversely, a passive particle slightly diffusing away from a cluster becomes active, which facilitates leaving it. b The same situation without motility changes. c Numerically obtained radial density profile ρ(r) and motility change density ṅ_p↔a(r).]
Experimental realisation of feedback-controlled particle motility. The propulsion velocity of the particles is controlled independently from each other by a feedback loop. A video camera acquires images with a repetition rate of 2 Hz, which are then evaluated on a computer by a real-time particle detection algorithm to determine the particle positions. With an acousto-optical deflector (AOD), a laser beam with beam waist w = 5 μm is consecutively directed to the previously determined particle positions, and each particle is illuminated for a period of 8 μs, which is repeated every 4 ms. Since the remixing timescale of the binary mixture is on the order of 100 ms 14 , the repetition is fast enough to produce stable particle self-propulsion conditions. With this approach, the motilities of up to 400 particles can be controlled independently.
The time-averaged illumination intensity of each particle is set to either I = 0.2 W mm −2 (resulting in a propulsion velocity v 0 = 0.2 μm s −1 ) or to I = 0 (diffusive motion) (Fig. 6). Particle configurations are updated every 500 ms; typical positional changes during this time interval are below 5% of the particle diameter. Therefore, the feedback loop can be considered quasi-instantaneous.
Illumination intensity corrections. To avoid velocity changes due to the overlap of the illuminating Gaussian beams of neighbouring particles, we have additionally adjusted the laser intensity depending on the relative particle positions. The intensity profile of the laser beam illuminating particle i is given by

I_i(r) = I_0,i exp(−2r²/w²),

where w = 5 μm is the beam waist, I_0,i its intensity and r the radial distance. Accordingly, when particles (diameter 4.4 μm) get close to each other, they will also receive light from the beams centred on their neighbours. This leads to an increased intensity at the centre of particle i by

ΔI_i = Σ_{j≠i} I_0,j exp(−2r_ij²/w²),

with r_ij the distance between particles i and j. To avoid such configuration-dependent variations in the effective illuminating intensity (and thus of the propulsion velocity), we corrected the illuminating intensity by reducing the intensity of beam i to

Î_0,i = I_0,i − ΔI_i. (5)

For computational reasons, we only consider particles j with r_ij < 2w in the calculation of ΔI_i. Using this empirical relationship, the illumination integrated over each particle becomes independent of the positional configuration. To demonstrate the validity of Eq. (5), we have numerically tested this for a huge number of arbitrary particle configurations, including situations with and without applied quorum-sensing interaction. Figure 7 shows an example (corresponding to the particle configuration in Fig. 1a-c) with motile and non-motile APs. We have considered an illumination intensity of non-motile and motile particles of I = 0 and I = 0.2 W mm −2 , respectively. Without the correction discussed, we obtain a bimodal illumination intensity distribution of the particles, as shown in Fig. 7b. Obviously, some of the motile particles receive up to 0.3 W mm −2 , i.e. 50% more than the nominal illumination intensity. After application of the correction procedure, this unwanted effect is almost completely suppressed (Fig. 7c). Note that the correction is less effective for passive particles.
However, due to the presence of a minimal intensity to initiate active motion (dotted line, I = 0.1 W mm −2 ), this will not change the behaviour of non-motile particles.
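A rough sketch of such an overlap correction: each beam's nominal intensity is reduced by the Gaussian light the particle receives from neighbouring beams closer than 2w. The single-pass subtraction below is an assumption about the exact form of the correction, and the input intensities are illustrative:

```python
import math

W = 5.0   # beam waist in micrometres

def corrected_intensities(nominal, positions):
    """Reduce each beam's intensity by the Gaussian overlap received from
    neighbouring beams; only neighbours with r_ij < 2w are considered, as in
    the text. A one-pass sketch, clamped at zero."""
    n = len(nominal)
    corrected = []
    for i in range(n):
        delta = 0.0
        for j in range(n):
            if j == i:
                continue
            r = math.hypot(positions[i][0] - positions[j][0],
                           positions[i][1] - positions[j][1])
            if r < 2 * W:
                delta += nominal[j] * math.exp(-2 * r * r / (W * W))
        corrected.append(max(nominal[i] - delta, 0.0))
    return corrected
```

For well-separated particles the correction vanishes and the nominal intensities are returned unchanged.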
Simulations. In our numerical simulations, we integrate the coupled equations of motion (assuming overdamped dynamics and neglecting hydrodynamic interactions) for N particles at positions r i . A particle is propelled along its orientation vector e i with velocity v 0 = 0.2 μm s −1 if it senses a concentration c i < c th (with c i determined by Eq. (1)); for c i > c th the propulsion is set to zero and the particle is non-motile. The particle orientations undergo rotational diffusion with rotational diffusion coefficient D R = (1/120) s −1 . Translational diffusion is modelled by the random force ξ i with zero mean and variance ⟨ξ i (t)ξ j (t′)⟩ = 2D 0 δ ij δ(t − t′), with translational diffusion coefficient D 0 = 0.02 μm 2 s −1 . We model steric particle interactions via the repulsive Weeks-Chandler-Andersen potential cut off at r cut = 2 1/6 σ wca . We set ε = 100 k B T and σ wca = 3.98 μm, which implies an effective (Barker-Henderson) particle diameter σ = 4.4 μm 37 . The particles are randomly initialised and the equations of motion are integrated with time step Δt = 40 ms. The concentrations c i are updated every time step. To compare the rate of motility changes to the experiment, motility changes are recorded every 480 ms. We have performed two types of simulations: employing periodic boundary conditions (shown in Fig. 8) and modelling the experimental system through a circular confinement with N = 132. For the latter, instead of applying a torque to particles reaching the boundary, in the simulations their orientation vectors are instantaneously reoriented towards the centre of the confinement with R = (65 + 10) μm (see Supplementary Figure 1).

Analytical theory. Neglecting the excluded volume of particles, some insights can be obtained from a simplified mean-field theory of our model, extending a previous approach 27 .
The evolution of an ensemble of APs with joint probability ψ(r, φ, t) is governed by

∂_t ψ = −∇·[v(c) e ψ − D_0 ∇ψ] + D_R ∂²_φ ψ,

with scalar speed v(c) and orientation e = (cos φ, sin φ) T . We strongly simplify the experimental situation and assume that APs interact only through the chemical concentration profile c(r) generated by the APs, which we assume to adapt instantaneously to a change of particle positions (exploiting the huge difference between colloidal, D 0 , and molecular, D c , diffusion coefficients). This is the standard model of active Brownian particles (ABPs), extended by interactions through an additional scalar field c(r).
It is sufficient to consider only the first two moments, the density ρ(r, t) = ∫ dφ ψ and the polarisation p(r, t) = ∫ dφ e ψ. We consider stationary profiles with rotational symmetry and vanishing angular polarisation. Switching to polar coordinates with distance r from the origin (the centre of the cluster), we obtain, for the density profile ρ(r) and the radial polarisation p(r), the stationary condition

v(c) p = D_0 ∂_r ρ,

together with an equation for p(r). The first equation expresses the balance between active and diffusive particle currents. Eliminating the density, we obtain a Bessel differential equation for the radial polarisation, the solution of which is a function of r/ξ, with ξ a characteristic decay length. We have two solutions: one for the inner passive region with speed v = 0 and radius r * , the other for the active gas with speed v = v 0 . In the inner region, the current balance implies that the density gradient is zero, and thus the density ρ = ρ c is constant with vanishing polarisation. In the outer region r > r * , the polarisation decays as p(r) = −bK 1 (r/ξ), with integration constant b and modified Bessel function of the second kind K n (x). Through integration, we obtain the density profile for r > r * . The current balance also implies that the density is continuous at r * and thus the polarisation has to jump. The jump condition and conservation of the total density allow us to determine ρ c and b.
The remaining unknown r * is determined by the concentration threshold. The concentration profile reads

c(r) = ∫ dr′ ρ(r′) u(|r − r′|), u(r) = c̄ e^(−r/λ) / (r/σ), (13)

with the condition c(r * ) = c th , which yields r * . We solve the resulting system of equations iteratively to obtain the cluster size r * and density ρ c for given threshold c th and interaction range λ (Fig. 3).
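The Brownian dynamics model from the Simulations paragraph can be sketched as a simple Euler-Maruyama update. This is a minimal illustration using the parameter values quoted in the text; it is not the authors' exact integrator, hydrodynamic interactions are neglected as in the paper, and the mobility convention (force in units of k_BT per μm, mobility D_0/k_BT) is an assumption:

```python
import math, random

D0, DR, DT = 0.02, 1.0 / 120.0, 0.04    # um^2/s, 1/s, s (values from the text)
SIGMA_WCA, EPS = 3.98, 100.0            # WCA length (um) and depth in k_B T
RCUT = 2 ** (1.0 / 6.0) * SIGMA_WCA     # WCA cut-off radius

def wca_force(dx, dy):
    """Repulsive WCA pair force on particle i from particle j (k_B T / um)."""
    r2 = dx * dx + dy * dy
    if r2 >= RCUT * RCUT:
        return 0.0, 0.0
    s2 = SIGMA_WCA ** 2 / r2
    s6 = s2 ** 3
    fac = 24.0 * EPS * (2.0 * s6 * s6 - s6) / r2
    return fac * dx, fac * dy

def step(pos, phi, speeds):
    """One overdamped Euler-Maruyama update: self-propulsion along the
    orientation, WCA repulsion, and translational plus rotational noise."""
    new_pos = []
    for i, (x, y) in enumerate(pos):
        fx = fy = 0.0
        for j, (xj, yj) in enumerate(pos):
            if j != i:
                gx, gy = wca_force(x - xj, y - yj)
                fx, fy = fx + gx, fy + gy
        noise = math.sqrt(2 * D0 * DT)
        x += (speeds[i] * math.cos(phi[i]) + D0 * fx) * DT + noise * random.gauss(0, 1)
        y += (speeds[i] * math.sin(phi[i]) + D0 * fy) * DT + noise * random.gauss(0, 1)
        new_pos.append((x, y))
    new_phi = [p + math.sqrt(2 * DR * DT) * random.gauss(0, 1) for p in phi]
    return new_pos, new_phi
```

Combining this with a quorum-sensing rule that recomputes the speed list every time step reproduces the structure of the simulation loop described in the Methods.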
Data availability. The experimental and numerical data that support the findings of this study are available from the corresponding author upon reasonable request.

[Fig. 8 caption: Cluster formation with periodic boundary conditions. a Snapshot of a simulation with N = 1000 particles with density ρ 0 = 0.0075 μm −2 (λ = 5σ, c th = 5.5c̄). As in the confined system, we observe the formation of a cluster of non-motile particles (blue) surrounded by a dilute gas of motile particles (red). b Blue: corresponding radial density profile ρ(r)/ρ 0 (with respect to the particles' centre of mass), with R half the box length. The cluster density is the same as for a circularly confined system with N = 1000 particles at the same density (dashed line). For the smaller system with N = 132, the cluster density is slightly higher (dotted line). The reason is that, due to the smaller confinement, quorum concentrations are lower and thus the particles have to form a denser cluster to overcome the threshold.]
"year": 2018,
"sha1": "b06d283fbd7208805624a3afb24040038dedfb3e",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41467-018-05675-7.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b06d283fbd7208805624a3afb24040038dedfb3e",
"s2fieldsofstudy": [
"Physics",
"Biology"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
Short-Term TERT Inhibition Impairs Cellular Proliferation via a Telomere Length-Independent Mechanism and Can Be Exploited as a Potential Anticancer Approach
Simple Summary

Blocking telomerase to drive telomere erosion-dependent antiproliferative effects in cancer cells appears impractical. However, the evidence of extra-telomeric functions of TERT, the catalytic component of telomerase, in promoting tumour growth/progression strongly supports the potential telomere length-independent therapeutic effects of TERT inhibition. The mechanism(s) underlying these effects need to be explored to identify cellular pathways being (de)regulated by telomerase during the oncogenic process and to establish how the selective targeting of TERT can rapidly interrupt the expansion of tumour cells, regardless of telomere length and erosion. Using in vitro models of B-cell lymphoproliferative disorders and B-cell malignancies, we found that TERT inhibition impairs the NF-κB p65 pathway, resulting in decreased MYC expression and a consequent P21-mediated cell cycle arrest. The in vivo results in the zebrafish model confirm the in vitro data and prompt an evaluation of strategies combining TERT inhibition with chemotherapeutic agents to enhance the therapeutic benefits of current treatment modalities.

Abstract

Telomerase reverse transcriptase (TERT), the catalytic component of telomerase, may also contribute to carcinogenesis via telomere length-independent mechanisms. Our previous in vitro and in vivo studies demonstrated that short-term telomerase inhibition by BIBR1532 impairs cell proliferation without affecting telomere length. Here, we show that the impaired cell cycle progression following short-term TERT inhibition by BIBR1532 in in vitro models of B-cell lymphoproliferative disorders, i.e., Epstein-Barr virus (EBV)-immortalized lymphoblastoid cell lines (LCLs), and B-cell malignancies, i.e., Burkitt's lymphoma (BL) cell lines, is characterized by a significant reduction in NF-κB p65 nuclear levels, leading to the downregulation of its target gene MYC.
MYC downregulation was associated with increased expression and nuclear localization of P21, thus promoting its cell cycle inhibitory function. Consistently, treatment with BIBR1532 in wild-type zebrafish embryos significantly decreased Myc and increased p21 expression. The combination of BIBR1532 with antineoplastic drugs (cyclophosphamide or fludarabine) significantly reduced xenografted cells’ proliferation rate compared to monotherapy in the zebrafish xenograft model. Overall, these findings indicate that short-term inhibition of TERT impairs cell growth through the downregulation of MYC via NF-κB signalling and supports the use of TERT inhibitors in combination with antineoplastic drugs as an efficient anticancer strategy.
Introduction
Telomerase is a ribonucleoprotein complex composed of two core components: a non-coding telomerase RNA (telomerase RNA component, TERC) and a catalytic subunit (telomerase reverse transcriptase, TERT) with reverse transcriptase activity for telomeres. Telomeres are repetitive (TTAGGG) DNA structures present at the end of chromosomes, essential for maintaining the genomic integrity of cells [1,2]. The main function of telomerase is to compensate for the loss of telomeric ends, which occurs during each cell division because of end-replication problems in DNA polymerase [3]. Thus, TERT, by stabilizing telomere length, prevents cell senescence and apoptosis. TERT is the rate-limiting component of the telomerase complex [4], and its expression, usually absent in normal somatic cells, is detectable in ~90% of human malignancies [5], allowing cancer cells to overcome the replicative crisis caused by telomere attrition [6,7]. In addition to its role in telomere maintenance, a growing body of evidence has ascribed various telomere length-independent functions to this enzyme [8]. These functions include regulation of gene expression [9], enhancement of cell proliferation kinetics [10], modulation of DNA damage responses (DDR) [11,12], and resistance to apoptosis [13,14], all of which can contribute to tumour formation and progression. BIBR1532 (BIBR) is a non-competitive non-nucleoside small molecule that selectively inhibits telomerase catalytic activity by binding to a hydrophobic pocket, conserved across species, on the superficial region of TERT, preventing proper telomerase ribonucleoprotein assembly and enzymatic activity [12,[15][16][17][18][19][20]. In particular, BIBR binds to the N-terminal domain (TEN) of TERT close to the enzyme catalytic core and blocks its conformation in a closed state, consequently disturbing the active loop conformation and the enzyme processivity [16].
Our previous in vitro studies demonstrated that short-term telomerase inhibition by BIBR impaired cell proliferation with an accumulation of cells in the S-phase and induced apoptosis associated with the activation of DDR via a telomere length-independent mechanism, both in Epstein-Barr virus (EBV)-immortalized lymphoblastoid cell lines (LCL) and in Burkitt lymphoma (BL) cells [12]. Moreover, TERT inhibition by BIBR in LCL cells enhanced the pro-apoptotic and anti-proliferative effects of fludarabine (FLU) and cyclophosphamide (CY), two chemotherapeutic agents frequently used to treat B-cell malignancies [12]. Recently, we demonstrated that short-term telomerase inhibition, without effects on telomere length, negatively impacts cell proliferation and viability, both in the in vivo system and in human malignant B cells xenografted in zebrafish [21].
In cancer, telomerase reactivation often exists in parallel with MYC overexpression [38]. Notably, MYC is involved in oncogenic processes through the activation of pro-tumorigenic genes, including TERT [34,39]. In turn, TERT has been shown to have a direct role in the MYC pathway, either by regulating the wild-type MYC promoter by binding the MYC transcription factor NME/NM23 nucleoside diphosphate kinase 2 (NME2) [40] or through its interaction with MYC at the protein level [29,30]. MYC is essential for the proliferation of both immortalized LCL and BL cells, where MYC downregulation leads to an S-phase cell cycle arrest [41]. TERT has also been shown to functionally intersect with NF-κB signalling. NF-κB is a well-known transcription factor involved in the transcriptional regulation of both the wild-type and translocated MYC promoters [42,43], and it can drive TERT promoter activation either directly [34] or through transcriptional activation of MYC [35,44]. On the other hand, it has been demonstrated that TERT can directly regulate NF-κB p65 nuclear levels, with the subsequent activation of a subset of NF-κB target genes [23,24]. Consistently, silencing of TERT or chemical telomerase inhibition reduces p65-mediated transcription of NF-κB target genes [23–26].
Currently, no data are available on the mechanism(s) involved in the cell cycle arrest induced by TERT inhibition in EBV-immortalized and fully transformed B cells. In the present study, we analysed the mechanisms involved in the non-canonical functions of TERT. In this regard, we studied the interactions of TERT with NF-κB p65 and MYC in LCL and BL cell lines in vitro. Furthermore, we also explored the mechanisms through which Tert inhibition affects cell proliferation in an in vivo zebrafish model. Zebrafish (Danio rerio) has proven to be a useful model for several areas of cancer research [45], including the characterization of the non-canonical functions of Tert [21,46]. The availability of zebrafish telomerase mutants (tert hu3430/hu3430 or tert−/−) [47] also makes this model relevant for studying in vivo the specific impact of telomerase-targeted therapies. Furthermore, the zebrafish model serves as a bridge between in vitro assays and mammalian in vivo studies [48] and is considered a valuable in vivo tool for preliminary drug screening [49]. Therefore, in light of the possible integration of TERT inhibitors in chemotherapeutic regimens, we evaluated the possible therapeutic application of the combined treatment with TERT inhibitors and antineoplastic drugs frequently used to treat B-cell malignancies to counteract tumour growth in vivo.
Compounds
A stock solution of BIBR (Selleck Chemicals LLC, Houston, TX, USA) at a concentration of 50 mM was prepared by dissolving the compound in sterile dimethyl sulfoxide (DMSO) and stored in small aliquots at −80 °C until use. Ammonium pyrrolidine dithiocarbamate (PDTC) (P8765; Sigma-Aldrich, Saint Louis, MO, USA) was prepared by resuspending the compound in sterile water at a concentration of 10 mM, divided into aliquots, and stored at −20 °C until use. Fludarabine (FLU, F9813; Sigma-Aldrich) was prepared by resuspending the compound in DMSO at a concentration of 10 mM. Cyclophosphamide (CY, 0768; Sigma-Aldrich) was prepared by dissolving the compound in sterile water at a concentration of 143.3 mM.
Cell Cultures
The 4134/Late LCL was derived from late passages of peripheral blood mononuclear cells from a normal donor infected with the B95.8 EBV strain and expresses high endogenous levels of TERT [12,50]. In agreement with Faumont et al., we employed this cell line as an in vitro model of EBV-driven post-transplant lymphoproliferative disorders (PTLD) [41]. BL41 is an EBV-negative Burkitt's lymphoma cell line with a translocated MYC gene (kindly provided by Martin Rowe, Cancer Centre, University of Birmingham, Birmingham, UK). Protein expression and mRNA levels of TERT and telomerase activity have already been evaluated in these cell lines [12,50,51]. BIBR efficiently inhibits telomerase activity in these cell lines [12]. LCLs and BL41 cells were cultured in RPMI-1640 medium (Euroclone, Milan, Italy), supplemented with 4 mM L-glutamine, 50 mg/mL gentamycin (Sigma-Aldrich), and 10% heat-inactivated fetal bovine serum (FBS) (Gibco, Milan, Italy). The human osteosarcoma cell line (U2OS) was obtained from the American Type Culture Collection (Rockville, MD, USA) and was maintained in McCoy's 5A modified medium (Thermo Scientific, Waltham, MA, USA), supplemented with 50 mg/mL gentamycin and 10% FBS (Gibco). All cell lines used were maintained in culture at 37 °C in a 5% CO₂ incubator and tested negative for mycoplasma contamination.
Animals
All experiments were performed in accordance with European and Italian legislation and with permission for animal experimentation from the Local Ethics Committee of the University of Padova and the Italian Ministry of Health (protocol numbers 569/2018-PR and 259/2020-PR). Zebrafish were maintained in a temperature-controlled (28.5 °C) environment and fed as described by Kimmel et al. [52].
For Tert inhibition experiments, wild-type (WT) and tert mutant (tert hu3430/hu3430; tert−/−) zebrafish embryos were treated at the stage of 12 h post-fertilization (hpf), when Tert expression is high in WT zebrafish embryos [46], with 2 µM BIBR or DMSO as a control, and samples were analysed after 12 h of treatment, i.e., at 24 hpf. BIBR at 2 µM has been shown to reduce telomerase activity in WT zebrafish [21], and this treatment effectively halts viability and proliferation in WT embryos without affecting the telomerase-negative embryos employed as controls [21]. The telomerase mutant zebrafish line (allele tert hu3430) has already been described [47], and no evidence of telomerase activity was observed in protein extracts from tert−/− zebrafish samples [21,47].
To test the highest tolerable dose of the chemotherapeutic agents (CY and FLU) that does not alter zebrafish viability, 72 hpf casper zebrafish embryos were exposed to different doses of FLU or CY, and viability was analysed after 72 h of treatment. As shown in Supplementary Figure S1, 8 µM FLU (Figure S1A) and both 2 and 4 mM CY (Figure S1B) significantly increased embryonic lethality compared to untreated control embryos. Conversely, 5 µM FLU (Figure S1A) and 1 mM CY (Figure S1B) did not alter the viability of the embryos compared to the controls; thus, these concentrations were employed in the xenograft experiments.
Plasmids and Transfection
The plasmids employed were the following: a plasmid expressing human TERT (pBABE-hTERT) [50], a plasmid expressing a derivative of the human TERT protein with a hemagglutinin (HA) epitope tag fused to its C terminus (hTERT-HA), and the empty control vector (pBABE) (gifts from Bob Weinberg, Addgene, Watertown, MA, USA). The transfections were performed using Lipofectamine 2000 (Invitrogen, Carlsbad, CA, USA), according to the manufacturer's instructions.
Reverse Transcription and Quantitative Real-Time PCR
Total cellular RNA was extracted from 5 × 10⁶ cells using 750 µL Trizol reagent (Invitrogen), according to the manufacturer's instructions, and quantified using Nanodrop One (Thermo Scientific). For quantitative real-time PCR experiments, 1 µg RNA was retrotranscribed into cDNA using SuperScript III RNA Reverse Transcriptase (Invitrogen) following the manufacturer's instructions. For the in vivo experiments, total RNA was extracted from 20 WT and tert−/− embryos treated for 12 h, from 12 to 24 hpf, with BIBR or DMSO. Embryos were manually dechorionated, collected in 1.5 mL tubes, and washed twice with phosphate-buffered saline (PBS). Seven hundred and fifty µL of Trizol reagent were added to each sample, and RNA was extracted and retrotranscribed into cDNA as described above.
Quantitative real-time PCR reactions were performed in duplicate in Platinum SYBR Green qPCR SuperMix (Thermo Scientific) in an ABI PRISM 7900HT Sequence Detection System (PE Biosystems, Foster City, CA, USA). Hypoxanthine phosphoribosyltransferase 1 (HPRT1) and glyceraldehyde-3-phosphate dehydrogenase (gapdh) were employed as in vitro and in vivo internal controls, respectively. The amount of target gene, normalized to the housekeeping gene and relative to a calibrator (DMSO-treated sample), was given by the arithmetic formula 2^−ΔΔCt [53]. TERT transcripts were quantified using the AT1/AT2 primer pair as previously described [50,54].
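The relative quantification step above follows the standard 2^−ΔΔCt arithmetic. As a minimal sketch with hypothetical Ct values (the gene and sample roles mirror the text: a target gene normalized to a housekeeping gene, relative to the DMSO-treated calibrator; the numbers are illustrative, not data from this study):

```python
# 2^-ΔΔCt relative quantification sketch; Ct values below are hypothetical.

def delta_delta_ct(ct_target, ct_housekeeping, ct_target_cal, ct_housekeeping_cal):
    """Fold change of a target gene in a sample vs. a calibrator sample,
    each normalized to a housekeeping gene (e.g. HPRT1 in vitro)."""
    d_ct_sample = ct_target - ct_housekeeping          # ΔCt of the treated sample
    d_ct_calibrator = ct_target_cal - ct_housekeeping_cal  # ΔCt of the calibrator
    dd_ct = d_ct_sample - d_ct_calibrator              # ΔΔCt
    return 2 ** (-dd_ct)

# Example: a target gene in BIBR-treated cells vs. the DMSO calibrator.
fold = delta_delta_ct(ct_target=26.0, ct_housekeeping=20.0,
                      ct_target_cal=24.5, ct_housekeeping_cal=20.0)
print(round(fold, 3))  # 2^-(6.0 - 4.5) = 2^-1.5 ≈ 0.354, i.e. ~65% downregulation
```

A fold change below 1 indicates lower expression in the treated sample than in the calibrator.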
The sequences of the primers used for real-time PCR are listed in Table S1.
Immunoblot and Co-Immunoprecipitation
Whole-cell lysates were prepared in radioimmunoprecipitation assay (RIPA) buffer (Cell Signaling Technology, Danvers, MA, USA) containing 1× Halt protease and phosphatase inhibitor cocktail (Thermo Scientific) for 30 min, followed by centrifugation at 14,000 rpm. The proteins in the supernatants were harvested and quantified using the Pierce BCA protein Assay kit (Thermo Scientific).
For in vivo experiments, protein lysates were prepared from 50 WT and tert−/− embryos treated with BIBR or DMSO at 24 hpf, as previously described [21].
For co-immunoprecipitation, cells were lysed in immunoprecipitation (IP) cell lysis buffer (9803, Cell Signaling) on ice, followed by 10 min of centrifugation at 14,000 rpm. The proteins in the supernatants were quantified using the Pierce BCA protein assay kit (Thermo Scientific). Immunoprecipitation was performed using 1 mg total proteins in 1 mL cell lysate. Following pre-clearing, antibodies were added following the manufacturer's instructions and incubated overnight at +4 °C on a rotary mixer with gentle rocking. The next day, Protein A/G Sepharose (ab193262, Abcam) 50% bead slurry was added, and samples were incubated for 2 h at +4 °C with gentle rocking. Beads were harvested by slow speed centrifugation at +4 °C and washed five times with 1× cell lysis buffer. Following the final wash, immunocomplexes were eluted using 3× blue loading buffer (Cell Signaling). The eluted proteins were analysed by immunoblotting as described above.
Nuclear and Cytoplasmic Fraction
Subcellular fractionation was performed with NE-PER Nuclear and Cytoplasmic Extraction Reagent (Thermo Scientific) following the manufacturer's instructions. Briefly, cells were collected in ice-cold PBS, suspended in CER-I buffer, and incubated on ice for 10 min. The CER-II buffer was added, and cells were vortexed for 5 s twice with a 1 min interval, followed by immediate centrifugation at 14,000 rpm for 5 min. The supernatant was collected as a cytoplasmic fraction in separate tubes. The cell pellet was lysed in NER buffer on ice for 40 min, followed by 15 s vortexing every 10 min, centrifugation at 14,000 rpm for 10 min, and the supernatant was collected as a nuclear fraction. The cytoplasmic and nuclear protein fractions were quantified and immunoblotted using the protocol described above.
Immunofluorescence
Cells treated with BIBR or DMSO as a control were harvested in ice-cold PBS at approximately 1 × 10⁶ cells/mL. Two mL of cell suspension were added to each well of a 6-well cell culture plate containing a coverslip, and cells were allowed to attach through gravity sedimentation at 37 °C for 30 min, as previously described [55]. Following cell adhesion, PBS was slowly aspirated, and the attached cells were fixed in 10% formalin for 10 min at room temperature. Cells were washed in PBS for 5 min and then permeabilized for 10 min at room temperature using 0.5% Triton X-100 in PBS. Cells were blocked in 1% BSA for 30 min and incubated overnight at +4 °C with a rabbit monoclonal P21 antibody (ab109520, Abcam) following the manufacturer's instructions, followed by three PBS washes at room temperature. The coverslips were then incubated with an Alexa Fluor 488 donkey anti-rabbit secondary antibody (Thermo Scientific) at room temperature for 1 h in the dark, washed three times, and cell nuclei were counterstained with propidium iodide (PI) (1 µg/mL) for 10 min. Finally, coverslips were mounted inverted on clear glass slides using ProLong Gold Antifade Mountant (Thermo Scientific). Slides were air-dried for 15 min and then visualized using a ZEISS LSM 900 with Airyscan 2 confocal fluorescence microscope (Carl Zeiss Microscopy GmbH, Jena, Germany).
Cell Viability and Cell Cycle Analysis
Cell viability was determined by trypan blue cell exclusion using a Countess automated cell counter (Invitrogen). Cell cycle analysis of cells treated with either BIBR, PDTC, or DMSO was performed by PI staining, as previously described [12]. Samples were analysed using a FACS Calibur Flow Cytometer (BD Biosciences, Franklin Lakes, NJ, USA), and cell cycle distribution was measured using ModFit LT Cell Cycle Analysis software version 2 (Verity Software House, Topsham, ME, USA).
Telomere Length Measurement
DNA was extracted from 5 × 10⁶ cells using the QIAmp DNA Mini Kit (Qiagen, Hilden, Germany) according to the manufacturer's instructions. Relative telomere lengths were determined by a quantitative multiplex PCR assay, as described by Cawthon with a few modifications [56,57]. In particular, each PCR reaction was performed in a final volume of 25 µL, containing a 5 µL sample (10 ng DNA) and 20 µL of ready-to-use 1× LightCycler 480 SYBR Green I master mix (Roche Diagnostic, Mannheim, Germany) containing 900 nmol/L of each primer. The primer pair employed for telomere amplification was the following: TELG 5′-ACACTAAGGTTTGGGTTTGGGTTTGGGTTTGGGTTAGTGT-3′ and TELC 5′-TGTTAGGTATCCCTATCCCTATCCCTATCCCTATCCCTAACA-3′. The primer pair for amplification of the single-copy gene albumin was the following: ALBU 5′-CGGCGGCGGGCGGCGCGGGCTGGGCGGAAATGCTGCACAGAATCCTTG-3′ and ALBD 5′-GCCCGGCCCGCCGCGCCCGTCCCGCCGGAAAAGCATGGTCGCCTGTT-3′. The thermal cycling profile was 15 min at 95 °C, two cycles of 15 s at 94 °C and 15 s at 51 °C, followed by 40 cycles of 15 s at 94 °C, 10 s at 62 °C, 15 s at 74 °C, 10 s at 84 °C, and 15 s at 89 °C, with signal acquisition at the end of both the 74 °C and 89 °C steps. After cycling, a melting curve program was run, starting with a 95 °C incubation for 1 min, followed by continuous acquisitions every 0.2 °C from 45 °C to 95 °C (ramping at 0.11 °C/s). A standard curve was generated at each PCR run, consisting of DNA from the RAJI cell line serially diluted from 20 to 0.08 ng/µL. All DNA samples and reference samples were run in triplicate. LightCycler raw text files were converted using the LC480Conversion free software (http://www.hartfaalcentrum.nl/index.php?main=files&fileName=LC480Conversion.zip&description=LC480Conversion:%20conversion%20of%20raw%20data%20from%20LC480&sub=LC480Conversion (accessed on 15 August 2012, version 2)), and the converted data were analysed using the LinRegPCR free software version 2012.3.2.0 to obtain the Ct values.
Mean Ct values were used to calculate the relative telomere length as the telomere/single-copy-gene ratio (T/S) according to the formulas: ΔCt_sample = Ct_telomere − Ct_albumin; ΔΔCt = ΔCt_sample − ΔCt_reference curve (where ΔCt_reference curve = Ct_telomere_RAJI − Ct_albumin_RAJI); and T/S = 2^−ΔΔCt [58].
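The T/S computation above is the same ΔΔCt arithmetic applied to the telomere and albumin amplicons against the RAJI reference curve. A minimal sketch, with hypothetical Ct values for illustration only:

```python
# Telomere/single-copy-gene (T/S) ratio sketch; Ct values are hypothetical.

def t_s_ratio(ct_tel, ct_alb, ct_tel_ref, ct_alb_ref):
    """Relative telomere length of a sample vs. the reference (e.g. RAJI DNA):
    T/S = 2^-ΔΔCt, where ΔCt = Ct(telomere) - Ct(albumin)."""
    d_ct_sample = ct_tel - ct_alb          # ΔCt of the sample
    d_ct_reference = ct_tel_ref - ct_alb_ref  # ΔCt of the reference curve
    return 2 ** (-(d_ct_sample - d_ct_reference))

# A sample whose telomere Ct is one cycle higher than the reference
# (less telomeric template) yields T/S = 0.5, i.e. shorter telomeres.
print(t_s_ratio(ct_tel=14.0, ct_alb=18.0, ct_tel_ref=13.0, ct_alb_ref=18.0))  # 0.5
```

T/S = 1 means telomere length equal to the reference; values below 1 indicate shorter telomeres.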
Xenotransplantation of LCL and BL Cells in Zebrafish Embryos
Xenograft experiments were performed as previously described [21]. Briefly, approximately 300 4134/Late or BL41 cells, pre-treated for 16 h with 30 µM BIBR (pre-BIBR) or DMSO (pre-DMSO) as a control, were fluorescently labelled with the vital cell tracker red fluorescent chloromethylbenzamido derivative of octadecylindocarbocyanine (CM-DiI) (Invitrogen) and microinjected into the yolk sac of 72 hpf transparent casper zebrafish embryos, which were subsequently transferred to 32 °C. Twenty-four h post-xenotransplantation (hpx), the embryos were selected according to the intensity of the engrafted mass; only embryos with similar fluorescence intensity were chosen, while non-fluorescent embryos or embryos with fluorescent cells outside the site of injection were discarded. The chosen xenografted embryos were divided into six experimental groups, and drugs were added to the medium as follows: pre-DMSO embryos without treatment (pre-DMSO NT), pre-DMSO CY embryos (pre-DMSO CY), pre-DMSO FLU embryos (pre-DMSO FLU), pre-BIBR embryos without treatment (pre-BIBR NT), pre-BIBR CY embryos (pre-BIBR CY), and pre-BIBR FLU embryos (pre-BIBR FLU). Zebrafish were kept at 32 °C until the end of the experiments. The percentage of labelled cells in the engrafted embryos was determined at 24, 48, and 72 h post-treatment (hpt) in enzymatically dissociated embryos by flow cytometric analysis (see below).
Embryo Dissociation and Flow Cytometric Analysis
The dissociation of zebrafish embryos in a single-cell suspension was performed as previously described [21]. Cell suspensions obtained from 10 embryos per condition were employed to monitor fluorescent cells for proliferation by flow cytometric analysis in an LSR II cytofluorimeter (Becton-Dickinson, San Jose, CA, USA). Xenografted human-labelled LCL or BL cells were detected based on the fluorescence intensity signal of the CM-DiI fluorochrome. Non-xenografted embryos, included in each experiment, were employed to set the threshold as previously described [21]. Data were processed with FACSDiva Software (Becton-Dickinson) and analysed using Kaluza Analyzing Software v.1.2 (Beckman Coulter, Fullerton, CA, USA).
Statistical Analyses
Statistical analyses were performed with Prism software version 9 (GraphPad Software Inc.; La Jolla, CA, USA). Results were analysed with the Student's t-test, and p-values < 0.05 were considered statistically significant.
TERT Inhibition Reduced Nuclear Levels of p65
We have previously shown that short-term TERT inhibition by BIBR impairs cellular proliferation in in vitro models of post-transplant lymphoproliferative disorders (i.e., LCL cell lines) and Burkitt's lymphoma (i.e., BL cell lines) without any detectable change in telomere length, thus suggesting a druggable extra-telomeric function of TERT involved in cellular proliferation [12]. Interestingly, a telomere-independent role of TERT has been demonstrated as a transcriptional modulator of the NF-κB signalling pathway [23–26]. NF-κB is a well-known transcription factor with critical functions in B-cell malignancies; in particular, the enhanced proliferation of LCLs and BL cells was found to depend on NF-κB signalling, as NF-κB inhibition decreased cellular proliferation in both cell types [59,60]. Therefore, to shed light on the possible mechanism underlying impaired proliferation upon short-term TERT inhibition, the effect of different doses (30, 45, and 60 µM) of BIBR treatment on NF-κB p65 expression was investigated in LCL and BL cells. Similar to the previous results obtained with 30 µM BIBR treatment [12], treatment with 45 or 60 µM BIBR also resulted in decreased proliferation rates, starting from 24 h of exposure, in both 4134/Late (Figure S2A) and BL41 (Figure S2B) cells. At 24 h of treatment, we previously observed a strong cell cycle arrest with an accumulation of cells in the S-phase [12]; therefore, we chose this time point for the subsequent analysis. In addition, short-term treatment with different doses of BIBR did not affect the telomere length of 4134/Late and BL41 cells, as measured by quantitative multiplex PCR at 24 h of exposure (Figure S2C,D). Twenty-four h of TERT inhibition by BIBR altered p65 protein levels without any change in p65 transcription in both 4134/Late (Figure 1A) and BL41 (Figure 1B) cells.
Particularly, BIBR significantly reduced p65 nuclear expression in a dose-dependent manner: from 17 ± 2% at 30 µM to 39 ± 2.5% at 60 µM (p < 0.01) in 4134/Late cells (Figure 1C), and from 12 ± 1% at 30 µM to 40 ± 3% at 60 µM (p < 0.01) in BL41 cells (Figure 1D), without any significant change in its cytoplasmic levels. Interestingly, BIBR treatment also reduced the phosphorylated active form of p65 (p-p65) in the nuclear fraction, from 19.5 ± 2.5% at 30 µM to 40 ± 2% at 60 µM in 4134/Late cells (Figure 1C) and from 18 ± 3% at 30 µM to 44.5 ± 3.5% at 60 µM in BL41 cells (Figure 1D). Notably, in both cellular models, the NF-κB pathway was active under maintenance conditions, showing high expression of nuclear p65 at the basal level (Figure 1C,D), likely dependent on CD40 signalling [61–63]. In addition, co-immunoprecipitation assays showed that TERT and p-p65 were associated in complexes in 4134/Late cells under maintenance conditions (Figure S3) and that BIBR treatment reduced the levels of both p-p65 and TERT in the TERT/p-p65 complex (Figure S3).
Figure 1. Cells were processed to obtain cytoplasmic and nuclear extracts. Representative Western blots showing cytoplasmic and nuclear protein levels of p65, phospho-p65 (p-p65), telomeric repeat binding factor 2 (TRF2), and α-tubulin in 4134/Late (C) and BL41 (D) cells are shown. α-tubulin and TRF2 were used as loading controls for the cytoplasmic and nuclear fractions, respectively. The original Western blots are shown in File S1. Graphs next to the blots show the values in arbitrary units of densitometric analysis performed with ImageJ software. Data represent the mean and SD (bar) from three separate experiments. A significant difference between values in BIBR-treated vs. DMSO-treated cells is shown: * p < 0.05; ** p < 0.01; ns: not significant.
TERT Inhibition by BIBR Suppressed Transcription of a Subset of NF-κB Target Genes, including MYC
As both the phosphorylation and nuclear localization of NF-κB p65 are important for its transcriptional function [64], the transcription levels of NF-κB target genes in LCL and BL cells following treatment with TERT inhibitor were analysed. BIBR treatment, even at a low concentration of 30 µM, caused a significant decrease in the transcription of NF-κB target genes MYC, nuclear factor of kappa light polypeptide gene enhancer in B-cells inhibitor, alpha (IκBα), BCL2 apoptosis regulator (BCL2), and Survivin, involved in cellular proliferation, DNA replication, and apoptosis in both 4134/Late ( Figure S4A) and BL41 ( Figure S4B) cells.
Given the pivotal role of MYC in EBV-driven B-cell proliferation and Burkitt's lymphoma, its expression following treatment with TERT inhibition was analysed in more detail. Results show that the significant decrease in MYC mRNA level induced by BIBR treatment (Figure 2A,B) was paralleled by a concomitant decrease in its nuclear protein expression: from 42 ± 2% at 30 µM to 39 ± 3% at 60 µM (p < 0.01) in 4134/Late cells ( Figure 2C) and from 44 ± 3% at 30 µM to 64.5 ± 4.5% at 60 µM (p < 0.01) in BL41 cells ( Figure 2D).
BIBR treatment per se does not directly impact the TERT transcriptional level. Nonetheless, BIBR treatment decreases both NF-κB p65 and MYC nuclear levels, transcriptional factors of the TERT promoter [34,35,39,65]. In line with these observations, short-term BIBR treatment reduced TERT mRNA levels in both 4134/Late ( Figure 2A) and BL41 ( Figure 2B) cells. Consistently, TERT protein was also expressed at low levels in both 4134/Late ( Figure 2C) and BL41 ( Figure 2D) BIBR-treated cells.
MYC Deregulation Mediated by TERT Inhibition Was Independent of WNT/β-Catenin Signalling, and TERT and MYC Did Not Interact at the Protein Level
Besides NF-κB signalling, the MYC oncogene is transcribed by the WNT/β-catenin pathway [66]. TERT's involvement in the regulation of WNT/β-catenin has been extensively documented [30,32,33,67,68]; thus, the possibility that MYC transcriptional downregulation following TERT inhibition could be mediated via WNT/β-catenin was investigated by evaluating the transcriptional levels of β-catenin (CTNNB1) and the WNT/β-catenin target genes axin 2 (AXIN2) and cyclin D1 (CCND1) following BIBR treatment. Results show that there was no significant change in the mRNA levels of CTNNB1 and AXIN2 in either 4134/Late (Figure S5A) or BL41 (Figure S5B) BIBR-treated cells compared to controls. Similarly, the small increase in CCND1 transcription observed in 4134/Late cells (Figure S5A) was not significant. As expected, CCND1 expression was not detected in BL41 cells, in agreement with data showing that most B-cell lymphomas do not express cyclin D1 [69].
It has been suggested that TERT functions as a cofactor in MYC-dependent transcription by binding MYC protein and consequently improving MYC stability and accessibility in its target promoters [29,30]. To assess this possibility in our cellular models, the association between MYC and TERT proteins was checked through a co-immunoprecipitation assay. As shown in Figure S5, no binding interaction was found between endogenous MYC and TERT proteins in 4134/Late cells ( Figure S5C) or BL41 cells ( Figure S5D).
Ectopic TERT Expression Activated Transcription of NF-κB Target Genes
To further elucidate the role of TERT in the NF-κB transcriptional program, the effects of ectopic TERT expression on NF-κB and WNT/β-catenin target genes were examined in U2OS cells. U2OS cells lack endogenous TERT expression, with no detectable telomerase activity, and maintain telomere length through the Alternative Lengthening of Telomeres mechanism [70]. TERT transfection efficiently increased its expression in the U2OS cell line (Figure 3), and ectopic TERT expression was accompanied by a significant increase in transcription of a subset of NF-κB target genes (Figure 3) but did not induce any change in transcription of the known WNT/β-catenin target genes CCND1 [71] and AXIN2 [72] (Figure 3).
To analyse whether the selective link between TERT and the NF-κB-dependent transcriptional program was unrelated to the telomere maintenance mechanism, U2OS cells were also transfected with pBABE-puro-hTERT-HA, a plasmid expressing a derivative of the TERT protein modified by the attachment of an HA epitope tag to its C terminus (hTERT-HA). hTERT-HA retains telomerase activity but lacks the ability to maintain telomere length [73,74]. Interestingly, as observed with wild-type TERT (pBABE-hTERT), the ectopic expression of hTERT-HA also significantly increased the expression of the NF-κB target genes MYC, IκBα, interleukin 6 (IL6), and tumour necrosis factor (TNFα) (Figure 3), without any change in transcription of the WNT/β-catenin target genes CCND1 and AXIN2. These results further indicate that TERT might have other cellular functions, e.g., modulation of NF-κB target genes, unrelated to its activity on telomeres.
p65 Inhibition Recapitulated the Effects of TERT Inhibition
To determine the role of NF-κB p65 in MYC regulation in our in vitro models, the effects of the selective NF-κB p65 activity inhibitor PDTC [75] on MYC expression were evaluated. PDTC is a well-known NF-κB p65 inhibitor that efficiently reduces p-p65 accumulation [76,77]. As expected, PDTC reduced p-p65 levels in a dose-dependent manner in both 4134/Late (Figure 4A) and BL41 (Figure 4B) cells. NF-κB signalling inhibition by PDTC also decreased MYC protein expression in both 4134/Late (Figure 4A) and BL41 (Figure 4B) cells in a dose-dependent manner. Interestingly, both BIBR and PDTC treatment altered the cell cycle profile in 4134/Late (Figure 4C) and BL41 (Figure 4D) cells, with a significant accumulation of cells in the S-phase (p < 0.01 for both cell lines).
Figure 3. Forty-eight h after transfection, RNA was harvested and mRNA levels for the genes indicated were determined by quantitative RT-PCR. Data represent the mean and SD (bar) from three separate experiments. A significant difference between values in pBABE-hTERT or pBABE-hTERT-HA transfected cells vs. control pBABE-transfected cells is shown: * p < 0.05; ** p < 0.01; ns: not significant.
TERT Inhibition Promoted P21 Expression and Nuclear Localization
P21 is an important cell-cycle inhibitor [78]. As MYC is a repressor of P21 transcription [78] and MYC is downregulated in BIBR-treated cells, the effects of short-term TERT inhibition on P21 expression were investigated. The results show that the S-phase cell cycle arrest induced by TERT inhibition was characterized by a significant increase in P21 mRNA expression in both 4134/Late (p < 0.001, Figure 5A) and BL41 (p < 0.01, Figure 5C) cells. As shown in Figure 5, significantly increased P21 nuclear accumulation was observed in both 4134/Late (Figure 5B) and BL41 (Figure 5D) cells. Consistently, P21 expression and accumulation in the nuclear compartment were further confirmed by immunofluorescence in 4134/Late (Figure 5E) and BL41 (Figure 5F) BIBR-treated cells. This is of interest, given that P21 nuclear accumulation is associated with cell cycle inhibitory functions, whereas its cytoplasmic localization is often associated with pro-oncogenic activities [78,79].
Short-Term Tert Inhibition by BIBR in Zebrafish Reduced Myc and Increased p21 Expression
Previously, we used the zebrafish model to confirm in vivo our in vitro data on the telomere length-independent anti-proliferative effect of telomerase inhibition [21]. Therefore, we evaluated whether the modulatory role played by Tert during cell cycle progression could be associated with the variation in myc and p21 expression in the zebrafish model as observed in vitro. As shown in Figure 6, in WT zebrafish embryos, 12 h of treatment with 2 µM BIBR, from 12 to 24 hpf, induced a modest but significant decrease of both zebrafish MYC orthologs [80] myca (19.3 ± 8.4%, p < 0.001) ( Figure 6A) and mycb (17.7 ± 10.9%, p = 0.017) ( Figure 6B) expression compared to the controls. Conversely, BIBR treatment shows no effect on myca ( Figure 6D) or mycb ( Figure 6E) expression in tert−/− embryos. Consistently, Myc protein was also decreased in BIBR-treated WT embryos but not in tert−/− ones ( Figure 6G).
Similar to the in vitro results, 12 h of treatment with 2 µM BIBR shows a significantly increased p21 expression compared to the DMSO-treated controls in WT (33.8 ± 18.2%; p = 0.01) but not in tert−/− embryos ( Figures 6C,F, respectively).
Anti-Proliferative Effects of Combined Treatment with BIBR and FLU or CY in EBV-Immortalized and Fully Transformed B Cells Xenografted in Zebrafish
The previous in vitro observation that TERT inhibition by BIBR in combination with FLU or CY (two of the agents most frequently used to treat B-cell malignancies) showed a significant alteration of cell growth with respect to treatment with chemotherapeutic agents alone [12] prompted us to investigate whether TERT inhibition also increased susceptibility to antineoplastic drugs in an in vivo context. To this end, labelled EBV-immortalized and fully transformed B cells were treated or untreated with BIBR, xenografted in casper zebrafish embryos, and subsequently exposed or unexposed to chemotherapeutic agents. The number of injected cells was monitored by flow cytometry analysis in short-term experiments to avoid the expected telomere shortening influence on proliferation due to the inhibition of canonical TERT activity on telomeres widely demonstrated under long-term BIBR treatment [81,82].
Discussion
Telomerase, given its telomeric function that provides unlimited replicative potential, plays a critical role in tumour formation and progression. Nevertheless, many studies have highlighted the importance of this enzyme in several other pro-tumourigenic processes, independently of telomere maintenance [23,24,29,30,32,33,83,84]. However, the molecular mechanism(s) by which telomerase may contribute to oncogenesis beyond telomere maintenance have not been fully clarified and probably depend on the cellular context.
Here we show that in in vitro models of B-cell lymphoproliferative disorders, i.e., LCL, and B-cell malignancies, i.e., BL, TERT inhibition impairs the transcription of a subset of NF-κB target genes, i.e., MYC, IκBα, BCL2, and Survivin. NF-κB is one of the well-known transcriptional regulators of TERT either directly binding to the TERT promoter [34] and/or indirectly through transcriptional activation of MYC [35]. We show a link between TERT and NF-κB p65, whereby TERT, through extra-telomeric function, regulates nuclear levels of p65, a phenomenon that promotes the NF-κB signalling pathway. The evidence that only the nuclear protein levels of p65 are affected by TERT inhibition, without any change in mRNA expression, suggests that TERT favours the stability of nuclear DNA-bound p65 by reducing its ubiquitination and proteasomal degradation, which are known to constitute downstream events playing major roles in limiting the intensity and duration of NF-κB p65 activity [85]. On this basis, we show that TERT forms a complex with p-p65 under maintenance conditions, thereby putting forward an extra-telomeric function of TERT in NF-κB p65 signalling that can be targeted by BIBR. Interestingly, Ghosh and colleagues demonstrated that the TERT inhibitor MST-312, which, similar to BIBR, disturbs the TEN domain conformation of TERT [16], reduced levels of p65 occupancy at NF-κB target sites [23]. Thus, it is conceivable that BIBR, similarly to MST-312, alters the TERT/p65 complex at NF-κB target genes, consequently affecting their transcriptional expression. Indeed, our results demonstrate that TERT inhibition by BIBR leads to a decrease in nuclear p65 levels, thus altering the transcription of a subset of NF-κB targets, including MYC. 
The finding that the NF-κB p65 inhibitor, i.e., PDTC, led to a decrease in MYC protein level in both 4134/Late and BL41 cells and an altered cell cycle profile in these cellular models, leading to an S-phase cell cycle arrest, supports the idea that the cell cycle arrest induced by TERT inhibition in vitro [12] occurs through mechanism(s) involving NF-κB and MYC. Notably, 4134/Late cells were found to be more resistant to NF-κB p65 inhibition compared to BL41 cells, an effect that is likely due to EBV infection, as the viral latent membrane protein 1 (LMP1) regulates NF-κB p65 activation [86] and nuclear translocation [87].
The involvement of TERT in NF-κB signalling has been further sustained by the results from experiments with ectopic TERT expression, which increased transcription of several NF-κB p65 target genes in a telomerase-negative cell line. Finding that a biologically inactive TERT overexpression (hTERT-HA) also increased NF-κB target genes transcription in telomerase-negative cells further sustains that TERT has a non-canonical function in modulating NF-κB signalling regardless of its ability to promote telomere lengthening. Notably, the link between TERT and the NF-κB pathway we observed in LCL cells is in agreement with our previous observation in the context of the crosstalk between EBV and telomerase, as we demonstrated that in LCLs, TERT induces notch receptor 2 (NOTCH2) expression through NF-κB signalling, and NOTCH2, in turn, through basic leucine zipper ATF-like transcription factor (BATF) expression, represses BamHI Z fragment leftward open reading frame 1 (BZLF1), the master regulator of the EBV lytic cycle [51,88].
TERT has been reported to be involved in the transcriptional regulation of WNT target genes, including MYC, through its interaction with the chromatin regulator BRG1 [28]. We did not observe transcriptional downregulation of known WNT target genes AXIN2 and CCND1 after TERT inhibition by BIBR treatment in our in vitro models. Furthermore, we show that ectopic over-expression of TERT did not alter the transcription of WNT target genes, further confirming that under our experimental conditions, TERT was not involved in WNT signalling regulation. Our results agree with those of Ghosh et al. [23], who, following TNFα stimulation, observed no association of TERT with BRG1 but reported that the telomerase inhibition significantly limits TNFα-mediated p65 binding to a subset of NF-κB dependent promoters [23].
We also found that cell cycle arrest induced by TERT inhibition is characterized by increased levels of P21, a well-known cell cycle inhibitor. As MYC is a down-regulator of P21 [89,90], the increase of this protein may be linked to the downregulation of MYC induced by TERT inhibition. In the nucleus, P21 exerts its cell cycle inhibitory function by interfering with proliferating cell nuclear antigen (PCNA)-dependent DNA polymerase activity and/or inhibiting cyclin dependent kinase 2 (CDK2)-dependent replication origin firing, ultimately inhibiting DNA replication, and prolonging the S-phase [78]. S-phase lengthening indicates replication fork stalling, which activates DDR, which is crucial for fork protection [91]. These results provide a conceivable explanation of how telomerase inhibition can lead to cell cycle arrest with the activation of telomere length-independent DDR that we previously observed in in vitro models [12].
Importantly, using the zebrafish model, we confirm that also in the in vivo system, Tert inhibition is associated with downregulation of Myc and increased expression of p21, thus accounting for the impaired proliferation and cell cycle arrest with the activation of telomere length-independent DDR, which we have previously observed upon Tert inhibition in this animal model [21]. Notably, these effects were specifically related to Tert inhibition since BIBR treatment shows no effect on Myc and p21 expression in tert−/− embryos. The lack of difference in Myc expression between untreated tert−/− and WT embryos suggests that Tert function(s) is partially compensated in zebrafish tert−/−. While alternative pathways might compensate for the non-canonical functions of telomerase in tert−/− embryos, as suggested in the context of Tert−/− mice [33], the acute inhibition of Tert in WT embryos causes the significant effects we observed in our experiments. Altogether, these findings enforce the concept that telomerase per se seems to exert growth-promoting activities that are independent of its canonical role in telomere length maintenance and sustain the interest in TERT inhibition as an anticancer strategy.
The evidence that TERT inhibition in combination with both CY and FLU shows a cumulative inhibitory effect on the proliferation of both EBV-immortalized and fully transformed B cells xenografted in vivo compared to the single agent treatment strongly sustains the validity of this strategy to counteract tumour growth. This combined strategy, which rapidly impairs tumour cell proliferation and sensitizes cancer cells to cytotoxic effects of chemotherapeutic agents, could achieve superior therapeutic outcomes compared to current treatment modalities. Targeting TERT for cancer treatment is not a novel concept given the specificity of TERT expression in tumour cells for the maintenance of telomere length and the replicative potential. However, strategies selectively inducing shortening or damaging telomeres may be limited by normal tissue toxicity inherent to the long-term treatment required before anti-tumour effects caused by critical telomere attrition are exerted, as observed in clinical trials with Imetelstat, an oligonucleotide complementary to the TERC template region that competitively inhibits telomerase activity at telomeres [92,93]. Despite the lack of efficacy in targeting telomere maintenance and concern about side effects of long-term telomerase inhibition, transiently inhibiting TERT non-canonical function(s) to impact tumour growth and survival may offer an opportunity for tumour-specific sensitization to therapy, as we observed in the present study.
Under normal conditions, somatic cells have no detectable telomerase activity, but in cancer cells, TERT is reactivated and can contribute to oncogenesis by establishing a feed-forward signalling loop with NF-κB signalling: TERT is transcriptionally upregulated by NF-κB either directly and/or through MYC; in turn, TERT facilitates p65 in its transcriptional program by enhancing p65 nuclear levels, thereby leading to enhanced expression of NF-κB p65 target genes, including MYC, which is a repressor of P21, a pivotal inhibitor of the cell cycle. This leads to the formation of a multifaceted regulatory loop between TERT, NF-κB p65, and MYC. This regulatory loop, once activated, can contribute to tumour progression through the regulation of multiple hallmarks of cancer.
A schematic model summarizing the mechanism of TERT inhibition-mediated cell cycle arrest in EBV-immortalized and fully transformed B cells is in Supplementary Figure S6. The TERT inhibition by BIBR downregulates NF-κB p65 nuclear localization, reducing the availability of p65 on its target promoters and thereby decreasing the transcription of a subset of NF-κB p65 target genes, including MYC, IκBα, BCL2, and Survivin. The decreased NF-κB p65 and MYC protein levels compromise TERT promoter activation, reducing TERT expression. Furthermore, MYC downregulation compromises cellular proliferation by upregulating P21 expression and its nuclear localization, thereby leading to cell cycle arrest, which may ultimately contribute to the activation of DDR.
Conclusions
In conclusion, our results provide insight into the interdependency of various tumour-promoting factors and how short-term targeting of TERT can contribute to therapeutic effects beyond telomere maintenance. These results strongly support the evidence that TERT has telomere length-independent non-canonical functions in NF-κB p65 signalling and that telomerase inhibition can directly inhibit the transcription of NF-κB target genes, including MYC. Although this study implies the involvement of NF-κB and MYC as mediators of TERT extra-telomeric functions, other mechanism(s) and signals could be relevant in different cellular contexts. Furthermore, TERT inhibition in combination with either CY or FLU shows a cumulative inhibitory effect on the proliferation of EBV-immortalized and fully transformed B cells in vivo, thus sustaining the concept that targeting TERT can be exploited as an efficient anticancer approach to enhance the therapeutic benefits of existing chemotherapeutic protocols regardless of telomere length and erosion. Given the potential therapeutic impact of these results, further studies with other specific TERT/telomerase inhibiting strategies and using patient-derived xenograft models should be undertaken to extend and validate these findings.
Supplementary Materials: The following supporting information can be downloaded at: https://www. mdpi.com/article/10.3390/cancers15102673/s1, File S1: Full-length blots; File S2: Supplementary Table and Figures: Table S1: q-PCR primers for gene expression analysis; Figure S1: Effect of treatment with different doses of drugs on zebrafish embryo viability; Figure S2: Effects of different doses of BIBR on proliferation rate and telomere length in LCL and BL cells; Figure S3: BIBR treatment reduced the levels of TERT/p-p65 complexes; Figure S4: BIBR treatment downregulated the expression of a subset of NF-κB target genes; Figure S5: MYC downregulation in BIBR-treated cells is independent of WNT/β-catenin signalling; Figure S6: Schematic graph of the consequences of short-term TERT inhibition in EBV-immortalized and fully transformed B cells.
Conflicts of Interest:
The authors declare no conflict of interest.
Eating Competence, Food Consumption and Health Outcomes: An Overview
Eating Competence (EC) is one behavioral perspective of eating practices that has been associated with a healthy lifestyle. It emphasizes eating pleasure, self-regulation of eating, body weight satisfaction, and regular meal frequency that includes food variety without focusing on dietary guidelines. EC is composed of four components (Eating Attitude, Food Acceptance, Internal Regulation, and Contextual Skill), and its assessment is performed using the Eating Competence Satter Inventory (ecSI2.0™), developed and validated in English for an adult population. EC has been associated with diet quality and health indicators for various population groups and the development of skills that increase EC might be a strategy to improve nutritional health, and prevent obesity and other chronic diseases. In this sense, this study presents an overview of the background, concepts, features, and possible associations among EC, food consumption, and health outcomes. The high prevalence of diseases associated with food/nutrition draws attention to the necessity to broaden the view on food and its relationship with health and well-being, considering not only nutrients and food combinations but also the behavioral dimensions of eating practices. Healthy nutritional recommendations that take into account attitudes and behaviors are in accordance with the EC behavioral model. Studies on eating behavior emphasize the need to better understand attitudes towards food and eating in the general population using validated instruments. In this context, measuring EC and its association with health outcomes seems to be relevant to nutritional health. The complexity of food choices has been examined in social, behavioral, and biological sciences, representing a great challenge for applying unique and simple theoretical models. Multiple methods are required, as no single theory can fully explain food selection.
Introduction
Eating is not only a basic human need for survival and health, but also a global activity that involves both internal and external aspects of the individual [1]. The main purpose of eating is to sustain life [2]; nonetheless, human nutrition reflects a lifestyle and is modeled by the social, cultural, and economic scenarios that depend on how food is generated, distributed, acquired, sold, prepared/cooked, and consumed [3].
The relationship between food and health seems quite obvious; however, the adoption of healthy eating practices is not restricted to the scope of individual choices, as physical, economic, political, cultural, and social factors influence people's diet positively or negatively [3,4]. Unhealthy eating can be modified by encouraging changes in the individual's behavior [5]. In this sense, concerning individual choices, emerging behavioral approaches show promise in improving eating patterns [6].
One behavioral perspective that has been associated with a healthy lifestyle is Eating Competence (EC), based on The Satter Eating Competence Model (ecSatter), which emphasizes eating pleasure, eating self-regulation, body weight contentment, and a regular meal routine that includes a range of foods without a focus on dietary guidelines [7]. EC involves four fundamental components (eating attitudes, food acceptance, internal regulation of food intake, and management of eating context) and was developed for nutrition education and to characterize eating-related behavior [2,7]. Several studies show that EC is related to diet quality and health indicators for various population groups [8][9][10][11], associated with an increased consumption of fruits and vegetables (FV) [8], adherence to the Mediterranean diet [12], and greater skills in managing the food context [13]. These characteristics meet the Food and Agriculture Organization of the United Nations (FAO) and World Health Organization (WHO) criteria regarding recommendations for healthy eating, which state that the development of skills to make decisions for a healthy diet is part of nutrition education strategies, as it supports the individual in understanding the determining factors of eating practices and encourages the adoption of health-promoting behaviors [3]. Furthermore, studies revealed that individuals with higher EC tend to be more physically active [14], have lower cardiovascular risk [11,12], and have better sleep quality [15,16]. EC is not restricted to the individual scope but is associated with positive eating habits by parents when feeding school-age children [17] and preschool children [18]. Findings also reveal that EC is linked to psychological and behavioral aspects, such as greater satisfaction with body weight and lower frequency of behaviors associated with eating disorders [9,19,20].
Considering that EC is associated with health indicators in several population groups, developing skills that increase EC might be a strategy to improve diet quality and prevent obesity and other chronic diseases. In the present review, we aimed to provide an insight into the background, concept, features, and potential associations among EC, food consumption, and health outcomes.
Behavioral Approaches Applied to Eating
The science of nutrition has been developed based on identifying and isolating nutrients present in foods and their effects on the incidence of certain diseases. Yet, it has been progressively insufficient to elucidate the reasons that motivate food choices. Thus, knowledge about the nutritional aspects of food does not seem to be the biggest influence on food consumption [21,22]. Emotional and behavioral responses, whose construction depends on different internal and external experiences, affect how individuals eat [23,24]. In this sense, the understanding of food choices goes beyond the definition of biological and nutritional needs. It is known that food choices are not always conscious; they often happen automatically and habitually, associated with what is available, considering economic aspects and food accessibility [24].
To develop eating practices consistent with health recommendations, behavioral aspects have been explored and valued, seeking to understand how individuals eat and serving as tools to modify their diet. A systematic review that selected 16 articles comparing traditional interventions (caloric restriction and focus on weight loss) with behavioral interventions noted that the behavior-based interventions resulted in statistically significant improvements in disordered eating patterns [25]. Even without the prescription of a meal plan, or significant weight loss, individuals undergoing behavioral approaches showed improvements in terms of health as they did not increase their weight in the long term; did not have any worsening in blood pressure, blood glucose patterns, or blood cholesterol levels; and also presented a significant improvement in biomarkers [25]. A limiting aspect of studies with alternative approaches is the natural difficulty in defining what constitutes an approach without a dietary prescription, as some studies included nutritional counseling without specifying how it was carried out.
One of the behavioral approaches is Intuitive Eating, a concept first introduced in 1995 by the American nutritionists Evelyn Tribole and Elyse Resch [26]. It is characterized by teaching individuals to become aware of their bodies, basing food choices on the physiological signs of hunger and satiety and not on the emotional responses that lead to inadequate food consumption [27]. The intuitive approach discourages the practice of diets as a possible source of behavior change and advocates three pillars: unconditional permission to eat; eating to meet physiological, not emotional, needs; and relying on internal hunger and satiety signals to establish what and how much to eat [28,29].
The ability to eat intuitively can be measured using the Intuitive Eating Scale, which assesses the individual's ability to watch out for internal signs of hunger and satiety [27]. The intuitive eating model and its health-promoting benefits have been extensively studied [30]. Recent findings show that individuals with higher scores on the Intuitive Eating Scale are successful in maintaining a stable weight over time, even though they do not have a strong desire to change their weight [31], which seems to be healthier than repeated cycles of weight loss and regain [32,33]. Learning to eat intuitively is a challenge, mainly due to the premise of having unconditional permission to eat [34], which can represent a barrier to its application in different population groups.
Another behavioral proposal is Mindful Eating, which is based on eating with a focus on physical and emotional sensations aroused during eating, without judgment or criticism [35,36]. The state of attention is not restricted to the food choice, but it also means being aware of the present time, paying attention to the meal, including the biological signs of hunger and satiety, and the emotions and environmental stimuli that lead to eating automatically and whilst distracted [37]. Increased attention allows conscious changes in eating habits, helping to eliminate negative eating patterns that tend to be repeated automatically and unconsciously in people's daily lives. Thus, mindfulness in eating promotes a satisfactory and healthy relationship with "what", "how much", "where", and "how" to eat [37]. Mindfulness is an ability learned [35] through meditation and practical exercises that have been widely publicized in the media, showing benefits in reducing stress, anxiety, insomnia, and binge eating [38,39]. The Mindful Eating Questionnaire (MEQ), developed and validated in the United States, is a tool designed to measure mindfulness in eating that has been used by various researchers in different countries [35].
Mindfulness in eating has been shown to be a negative predictor of eating disorders in college students and is inversely related to binge eating [40]. An integrative literature review on the influence of mindful eating on weight loss, weight regain, and weight maintenance evaluated 12 articles, showing positive weight loss outcomes when mindfulness strategies were employed [41]. Mindful Eating has been shown to lessen the harmful behaviors associated with overweight and obesity, emphasizing the act of tasting food and eating slowly and only when hungry [42]. As a learned skill, it requires engagement and discipline from its practitioners. However, a recent study showed that, in mindfulness-based weight loss programs, only a third of participants successfully practiced mindfulness exercises regularly [43]. This might become a limiting factor for the popularization of such a practice. A more encompassing and rigorously examined model is Eating Competence (EC), based on The Satter Eating Competence Model (ecSatter), introduced in 2007 by the American nutritionist Ellyn Satter. According to this approach, eating is a multi-faceted activity involving learned behaviors, social expectations, lifelong preferences, and attitudes and sentiments about food and eating [7]. The ecSatter is based on the premise that hunger and satiety signals in the organism, when properly perceived, are reliable and must be attended to, serving to guide food selection, providing energy balance, and leading to stable body weight [7]. Skills and resources support this internal process for regulating the food environment to provide reliable and regular diets [7].
As a biopsychosocial approach, EC is not concerned with nutrients, portion sizes, or food groups, but rather appreciating food and eating, paying attention to diet diversity, hunger, and satiety sensations, and often making meals while keeping nutrition and the food consumption environment in mind [2]. Such skills are encompassed in four components: Food Acceptance; Eating Attitude; Internal Regulation; and Contextual Skills [2,7]. Individuals who have a higher EC tend to deal better with food and eating, showing self-confidence in relation to food choices, as well as willingness and openness to new food experiences, achieving a balance between desires, choices, and quantities to be ingested [2].
Components of Eating Competence
EC is the successful result of the four proposed components [9] described below and summarized in Figure 1. As a biopsychosocial approach, EC is not concerned with nutrients, portion sizes, or food groups, but rather appreciating food and eating, paying attention to diet diversity, hunger, and satiety sensations, and often making meals while keeping nutrition and the food consumption environment in mind [2]. Such skills are encompassed in four components: Food Acceptance; Eating Attitude; Internal Regulation; and Contextual Skills [2,7]. Individuals who have a higher EC tend to deal better with food and eating, showing selfconfidence in relation to food choices, as well as willingness and openness to new food experiences, achieving a balance between desires, choices, and quantities to be ingested [2].
Components of Eating Competence
EC is the successful result of the four proposed components [9] described below and summarized in Figure 1.
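In practice, ecSI-style instruments turn the four components into a single score by summing Likert-scored items. The sketch below illustrates that kind of scoring; the 0–3 item scale and the total-score cutoff of 32 reflect values commonly reported for ecSI 2.0, but the item counts per component in the example are placeholders, not the real instrument structure, so this should be checked against the instrument's documentation before any real use.

```python
# Sketch of ecSI-style scoring: item responses, grouped by the four EC
# components, are summed into a total score; a total at or above a cutoff
# classifies the respondent as "eating competent". The 0-3 item scale and
# the >= 32 cutoff are commonly reported for ecSI 2.0; the item counts per
# component below are placeholders.

COMPONENTS = ("eating_attitude", "food_acceptance",
              "internal_regulation", "contextual_skills")
CUTOFF = 32  # commonly reported ecSI 2.0 threshold

def ec_total(responses):
    """Sum item scores (each scored 0-3) across the four components."""
    for comp in COMPONENTS:
        if comp not in responses:
            raise ValueError(f"missing component: {comp}")
        if any(not 0 <= s <= 3 for s in responses[comp]):
            raise ValueError(f"item scores must be in 0-3 for {comp}")
    return sum(sum(responses[comp]) for comp in COMPONENTS)

def is_eating_competent(responses):
    return ec_total(responses) >= CUTOFF

# Placeholder respondent: totals 34, above the cutoff.
example = {
    "eating_attitude": [3, 2, 3, 2, 3, 2],   # 15
    "food_acceptance": [2, 1, 2],            # 5
    "internal_regulation": [3, 2],           # 5
    "contextual_skills": [2, 2, 1, 2, 2],    # 9
}
print(ec_total(example), is_eating_competent(example))  # 34 True
```

The component structure is kept explicit so that per-component subscores (used in some EC studies) can be reported alongside the total.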
Eating Attitude
Eating attitude involves the beliefs, thoughts, and feelings that lead to behaviors that affect how individuals relate to food and influence food choices and, consequently, health [44]. The construction of eating attitude begins in childhood and is an aspect of great relevance for health promotion [45]. According to the concept of eating competence, the ideal eating attitude refers to being positive about food and eating, that is, enjoying eating and feeling comfortable about it [7], as well as having an interest in food, and showing self-confidence and tranquility concerning their food choices [2].
Excessive worries about food and eating, whether for aesthetic or health reasons, favor the development of distorted beliefs and feelings about food, leading to the continuous manifestation of a negative eating attitude [46]. In turn, these can lead to the pursuit of an eating pattern detached from the individual's reality [6,25,31,41]. In addition, individuals with a negative attitude toward food and eating are more likely to demonstrate body dissatisfaction, as those who usually feel "very fat" or "very thin" or simply uncomfortable with their weight are more likely to feel ashamed of what they eat [7].
A New Zealand study on food habits and body weight (n = 294) found a significant link between the desire to lose weight and the belief in the association between chocolate cake and guilt. Participants who felt guilty about eating the cake had more difficulty maintaining or losing weight than those who associated the cake with celebration [47]. The association of chocolate cake with festivity was linked to better weight maintenance, consistent with the eating attitude proposed in the ecSatter model [1,6], emphasizing the relevance of eating attitude in food selection and health.
In 2006, a telephone survey with a representative sample of the American adult population (n = 2250) showed that individuals who consider themselves overweight report less pleasure associated with eating than individuals who feel good about their body weight [48]. Six out of ten adults affirmed that they eat more than they should. However, this type of statement was associated with behavioral aspects, being more common among individuals with higher scores on the stress assessment scale, among those who were overweight, who reported some concern with weight, or who were dieting to lose weight [48]. It is not possible to determine whether this judgment concerning overeating is a genuine result of high food consumption or a personal impression resulting from strict norms regarding body weight, food, and health.
EC is associated with relevant behaviors in the context of nutritional health, such as greater satisfaction with body weight and a lower frequency of behaviors associated with eating disorders [9,19,20,25,49]. Body dissatisfaction seems to be a risk factor for overweight and eating disorders [20]. Queiroz et al. found that Brazilian adults who considered their body size acceptable had higher EC scores than those who considered it excessive (EC total score = 33.63 ± 7.56 vs. 27.7 ± 9.02; p < 0.001) [50]. Other studies on EC found a link between EC and body satisfaction. For example, among American college students (n = 1720), body mass index (BMI) was not as good a predictor of EC as weight satisfaction and the desire to decrease weight [19]. Among low-income women, lower EC scores were related to body weight dissatisfaction, a proclivity to overeat in reaction to external emotional stimuli, and eating disorder-related behaviors [9]. In a survey of 557 university students enrolled in an introductory nutrition course, individuals who had never had an eating disorder had a higher average ecSI score than those with a present or past eating disorder [49].
Eating-competent individuals tend to have less body dissatisfaction and less preoccupation with weight control, as well as fewer psychosocial characteristics related to disordered eating, fewer food dislikes, and greater food acceptance [20]. As EC increased, decreases were observed in the tendency towards bulimic thoughts, the drive for thinness, and body dissatisfaction [20]. The eating attitude component was inversely associated with restrained eating, body dissatisfaction, and the desire to be thin [20]. Bulimic thoughts and feelings of uncontrolled hunger significantly increased as internal regulation decreased [20].
Individuals with positive eating attitude do not usually blame themselves for eating unhealthy foods [48]. In the psychological aspect, individuals with higher EC experience a positive and rewarding food context, so they feel able to eat what they like, according to their accessibility and sufficient quantity to meet their nutritional needs [2]. Furthermore, the higher the EC, the lower the food restrictions and the greater the acceptability of food [20], resulting in a more varied and healthy diet.
Everything individuals think about certain foods and eating behaviors can influence their choices [51]. In this sense, it is important to better understand attitudes towards food and eating in the entire community, not just among people with eating disorders [44]. Considering that eating attitude influences food choices, understanding this component of EC seems to be relevant for nutrition diagnoses and health interventions.
Food Acceptance
The sensory characteristics of food, especially the taste, are identified determinants of food intake [24]. However, according to the EC concept, enjoyment and pleasure are important motivators for food selection [7].
Food choices are inserted in the priority order of human needs, which can be understood from the perspective of Satter's Hierarchy of Food Needs [52]. According to this proposal, the first basic need is to have enough food, which means food security from an economic and social standpoint. At this first level, individuals are driven by hunger and anxiety about getting enough to eat. The second need considers the subjective issue of acceptability, linked to food culture, social norms, and rules. Third comes the guarantee of food availability for the next meals, indicating the possibility of planning the stock and budget for food purchases. Flavor comes fourth, once the first three basic needs are satisfied, followed by the possibility of opening up to new food experiences and eating unfamiliar foods. Finally, after all the above needs are met, the individual can consider instrumental reasons, such as the search for physical results (health and/or aesthetics) or cognitive and spiritual reasons [52]. At each level, needs must be satisfied before those at the next higher level can be experienced and addressed. The first three stages of the food needs hierarchy are linked to issues involving the concept of Food and Nutritional Security [3].
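The sequential gating in Satter's hierarchy (a need only becomes salient once every level below it is met) can be sketched as a small ordered check. This is an illustrative sketch only; the level names below paraphrase the text and are not official terminology:

```python
# Illustrative sketch of Satter's Hierarchy of Food Needs: lower-level
# needs must be satisfied before higher ones are experienced.
# Level names are paraphrases of the description above, not official terms.
LEVELS = [
    "enough food",        # food security: getting enough to eat
    "acceptable food",    # food culture, social norms, and rules
    "reliable access",    # availability for the next meals (stock, budget)
    "good-tasting food",  # flavor
    "novel food",         # openness to new food experiences
    "instrumental food",  # health, aesthetic, cognitive, spiritual reasons
]

def current_need(met_levels):
    """Return the lowest level not yet satisfied, or None if all are met."""
    for level in LEVELS:
        if level not in met_levels:
            return level
    return None

# Someone with only food security assured is still at the acceptability level:
print(current_need({"enough food"}))  # -> acceptable food
```

The point of the ordered scan is that "flavor" or "novel food" never surfaces while a lower level such as basic food security remains unmet.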
Individually, the tendency to make food choices can be triggered by relatively simple stimuli, such as the content of sugar or fat in the food, the individual's gastric capacity, the size of the portion presented, and even how much others eat [53]. However, external aspects affect those neurophysiological mechanisms, causing sensations such as pleasure and satiety, which are exacerbated or inhibited through logical reasoning about the consequences related to weight and health [53]. Moreover, concerning external issues, it is important to consider the various environments that influence food choices, as these environments make food available to the consumer [54]. In this sense, daily contemporary life is characterized by the abundance of attractive and energy-dense foods, which, combined with less need for physical activity, results in an "obesogenic" environment [53]. In many contemporary societies, some foods have become almost universally available and accessible, being purchased in many places, at any time, and by anyone [3,4,45]. The profusion of eating opportunities leads individuals to make numerous choices throughout the day, including the choice not to eat [1].
The acceptability of food is also related to the affective and symbolic value that the food represents, influencing the construction of preferences and aversions [3,45]. The construction of taste and food preferences is a process that starts in early childhood; thus, the adequate development of this skill depends on feeding practices from an early age. In this sense, among the feeding behaviors, it is recommended that the diet of the infant or young child be diversified, with repeated exposure to healthy foods and drinks, avoiding the offer of foods rich in salt, sugar, and flavor additives, to support the construction of healthy food preferences [55].
Food acceptance, in addition to being associated with economic, environmental, social, and cultural factors, is a determinant of food variety. It is known that dietary variety contributes to more complete and healthy nutrition, as no single food can provide all the nutrients that the human body needs [56]. A study of adults (4964 men and 4797 women) participating in the Continuing Survey of Food Intakes by Individuals demonstrated that dietary variety was strongly associated with better nutritional adequacy [57].
Food acceptance, according to the ecSatter model, highlights attitudes and behaviors such as: feeling calm in the presence of new and unfamiliar foods; feeling comfortable about food preferences (including foods with sugar, salt, fat, or other ingredients recognized as unhealthy); being able to make choices, accepting or refusing food offered, without constraints; having the ability to eat foods that one does not like very much if the situation demands it; and demonstrating curiosity about food, with an inclination to try new foods and, eventually, to include new options in the food repertoire [2,7]. Thus, this component of EC seems to be an important aspect of eating behavior for health promotion.
Internal Regulation
This component is related to identifying the physical signs of hunger, appetite, and satiety, which will guide the amount of food to be eaten to contribute to the natural maintenance of healthy and stable body weight [7]. People with better internal regulation tend to have more regular meals, as they are naturally aware of the signs of hunger and can maintain a predictable rhythm of meals [8]. In addition, internal regulation allows confidence in the experience of satiety, which contributes not only to weight stability but also to satisfaction with the body shape [2]. Internal regulation is part of the central idea of intuitive eating and is also highlighted in mindful eating, as the improvement in awareness of internal and external experiences allows the individual to make more rational and less impulsive choices [6].
Internal regulation was linked to BMI, body size perception, and food consumption among Brazilian adults [2,7]. The absence of internal regulation is linked to the inability to recognize sensations related to hunger/satiety, as well as bulimic thoughts and feelings of uncontrollable hunger [20].
According to studies, a lack of EC is linked to bulimic thoughts, a feeling of uncontrollable eating, and a higher frequency of binge-eating episodes [9,20]. Generally, the more subtle bodily sensations happen automatically and, when ignored, the signs of hunger and satiety end up being perceived later, when they are exacerbated [58]. This leads the individual to experience extremes of hunger and fullness. People who become used to external control over the amount they should eat (for example, restrained eaters and individuals who are dieting) may feel unable to trust their own ability to decide how much to eat [7]. Eating disorders can be triggered by dieting, though not caused by it; even so, self-imposed dietary restrictions seem to be associated with lower levels of internal regulation [20].
According to the Satter proposal, to work on internal regulation, it is necessary to seek a balance between discipline and permission [2]. The discipline includes having regular meals in an appropriate environment, and permission involves the possibility of choosing the foods that will make up each meal, within the context limited by economic and social factors, but with the freedom to be able to eat the amount that satisfies hunger [13].
Contextual Skills
Food choices are situational and part of a process that requires multiple and interrelated decisions [1], e.g., a decision regarding what to eat is frequently tied to where to get it and how to prepare it, and a purchase decision might be linked to additional considerations including where to maintain food and how to deliver it [1]. In this sense, eating involves a series of actions and behaviors that include a variety of food handling steps, each of which requires distinct decision-making procedures, such as acquiring, preparing, and changing raw materials into meals [1].
In the ecSatter model, this component is linked to the ability to manage the food context, that is, develop food shopping strategies, plan meals, have cooking skills that enable food autonomy, and manage the time dedicated to preparing and consuming meals [2,20].
Food preparation skills have been positively associated with diet variety, increasing diet quality [59,60]. Cooking contributes to a healthier diet as it develops multiple knowledge about the different properties of food, making the cook acquire a conscious and objective attitude towards food, favoring the preparation of palatable, attractive, and good quality meals [53].
To minimize the replacement of fresh foods and regional culinary preparations by industrialized and ready-to-eat products, food systems should aim to preserve food cultures, encouraging the development of culinary skills to favor the consumption of artisanal and homemade meals [61]. The habit of cooking and preparing meals at home has shown a positive association with EC and healthy eating. Krall and Lohse, in a study to validate the ecSI with American women (n = 507; 18-45 y/o), found a positive relationship between EC and the habit of cooking at home. In addition, women classified as competent eaters (EC ≥ 32) reported that they like to cook and demonstrated more practical skills in managing their meals, including healthy aspects in food planning. As expected, women with greater contextual skills had lower BMI and presented a higher intake of FV [9]. Among Brazilian adults (n = 1810; 75% female), the contextual skills were positively associated with education level, age, BMI, food consumption, and income [62].
An American survey that interviewed 764 men and 946 women between the ages of 18 and 23 showed that those who reported more frequent preparation of their meals consumed amounts of fat, fruits, vegetables, whole grains, and calcium closer to dietary goals [63]. A recent Brazilian study showed that parents with autonomy and confidence in their cooking skills provide their children with a diet with fewer artificial and ultra-processed foods, indicating the importance of managing the food context in promoting the individual's and families' nutritional health [64]. Previous studies have revealed that parents with cooking practice behaviors play an important role in mediating their children's consumption of FV [17,18].
According to Taylor, pleasure in eating is associated with pleasure in cooking [48]. To improve diet quality, interventions among young adults should invest in teaching practical and healthful food preparation skills [63]. Therefore, the development of food preparation skills using cooking classes has been considered a way to develop healthy eating and enable people to combine healthy habits with a wider variety of meals, resulting in increased independence in the implementation of healthy behaviors [6,65].
The ability to manage the food context proved especially important during the COVID-19 pandemic, when routines were entirely changed. A cross-sectional study performed in Brazil from 30 April to 31 May 2021 among a convenience sample of the Brazilian adult population (n = 302; 76.82% female) found that the measure of the contextual skills component decreased after the onset of the pandemic among those who gained weight (9.56 ± 3.43 vs. 7.50 ± 3.95; p < 0.005), those who decreased their consumption of vegetables (9.78 ± 2.79 vs. 6.63 ± 3.40; p < 0.005), and those who increased their consumption of sugary beverages (9.19 ± 3.16 vs. 6.92 ± 3.70; p < 0.005) [66]. Moreover, individuals who used to buy ready-to-eat meals during the pandemic showed a reduction in total EC and in all components (p < 0.005). On the other hand, there was no reduction in the contextual skills component among those who reported the habit of preparing their own food [66].
The weakening of the transmission of cooking skills between generations favors the consumption of industrialized and unhealthy foods. Developing contextual skills allows the understanding that dealing with the food context is not a waste of time; on the contrary, it is an essential activity for life and for the promotion and maintenance of health, which can become a source of pleasure. Contextual skills are directly associated with the habit of planning meals, using all food groups and nutrition facts labels, as well as preparing meals, and eating at home more frequently [20].
Eating Competence Inventory
EC can be evaluated using the Satter Eating Competence Inventory (ecSI™2.0), a tool consisting of a 16-item self-administered questionnaire that assesses overall EC and its four components: eating attitude, composed of six items; food acceptance (three items); internal regulation (two items); and contextual skills (five items) [67].
Items are answered with the options always, often, sometimes, rarely, and never. The score is obtained by summing the answers (always = 3; often = 2; sometimes = 1; rarely = 0; never = 0); thus, ecSI2.0™ scores can range from 0 to 48 [68]. The cutoff for the definition of eating competence is 32 and above [20,68]: the higher the ecSI2.0™ score, the higher the eating competence. There is no defined cutoff point for each of the four components [68]. However, in individualized approaches, a deficient score in one of the four components can indicate which skill needs more attention and reinforcement.
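The scoring rule above is simple enough to express directly. This is a minimal sketch of the arithmetic only; the 16 item texts are omitted, as the inventory itself is used under permission, and the placeholder answers are illustrative:

```python
# Sketch of the ecSI2.0 scoring logic described above.
# Item wording is proprietary; the responses here are placeholders.
POINTS = {"always": 3, "often": 2, "sometimes": 1, "rarely": 0, "never": 0}
EC_CUTOFF = 32  # a total score of 32 or above classifies a "competent eater"

def ecsi_total(responses):
    """Sum the 16 item responses ('always'..'never') into a 0-48 total."""
    if len(responses) != 16:
        raise ValueError("ecSI2.0 has 16 items")
    return sum(POINTS[r] for r in responses)

def is_eating_competent(responses):
    return ecsi_total(responses) >= EC_CUTOFF

# Example: answering 'often' (2 points) to all 16 items scores 32,
# exactly at the eating-competence cutoff.
answers = ["often"] * 16
print(ecsi_total(answers), is_eating_competent(answers))  # -> 32 True
```

Note that "rarely" and "never" both score zero, so the maximum of 48 corresponds to answering "always" throughout, and the component subscales (6 + 3 + 2 + 5 items) sum to the same 16-item total.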
The ecSI [69] was initially validated in 2007 with a sample of 832 US adult respondents (mean age 36.2 ± 13.4 years) without eating disorders (78.7% female, mostly white, educated, overweight, physically active, and food secure), providing support for content and construct validity, as well as internal consistency [20]. Its reliability was examined with 259 white females (26.9 ± 10.4 years), mostly food secure and with some college education, providing psychometric evidence of the reliability of the ecSI to measure EC but suggesting the revision of some items, as individuals with lower income tended to score lower on the ecSI [69]. In 2011, researchers revised the tool and changed the text of four items to favor understanding of the content by individuals with lower income [9]. Construct validity of this instrument was demonstrated in a larger sample of 507 low-income women aged 18 to 45 years, and the results originated the ecSI/Low Income (ecSI/LI) [9]. The ecSI/LI was tested again in 2015 with 127 adults (35.8 ± 5.3 years) and proved to also be valid for higher-income groups [70]; this version was named ecSI2.0™. Godleski et al. conducted a confirmatory factor analysis to confirm the factor structure, resulting in the movement of one item from the Internal Regulation subscale to the Eating Attitude subscale in the current version of the inventory (ecSI2.0™) [67].
Investigators and educators, under permission, can use the ecSI2.0™ to investigate the EC construct and track intervention outcomes with individuals of different levels of income and education [68]. Originally formulated in English, the ecSI2.0™ has been translated into German, Arabic, Finnish, Japanese, Estonian, Spanish [71], and Brazilian-Portuguese [62]. Table 1 summarizes the main findings of the studies that measured EC using the ecSI in different population groups:

- American college students [19]: students who were eating competent were more satisfied with their body weight, less likely to report the desire to lose weight, and had lower BMIs than students who were not eating competent; weight satisfaction and desire to lose weight were better predictors of EC than BMI.
- Krall and Lohse, 2011 [9] (USA; 507 low-income females, 18 to 45 years), aiming to evaluate the construct validity of a version of the ecSI adapted for use in a low-income population: food acceptance, FV intake, food management, and self-reported physical activity were positively related to ecSI/LI scores, while BMI, dissatisfaction with body weight, tendency to overeat, and disordered eating were negatively associated with ecSI/LI scores.
- Parents of school-aged children [17]: eating-competent parents demonstrated more modeling behaviors related to food preparation and fruits/vegetables, greater self-efficacy/outcome expectancies, and greater in-home fruit/vegetable availability; measuring EC may contribute to understanding parent behavior as a mediator in school-based nutrition interventions.
- A 12-month weight-loss intervention [75], aiming to examine changes in EC during the intervention: weight-loss interventions that introduce concerns about eating attitudes, behaviors, and foods can reduce EC; extending the measurement range is more appropriate, as it allows sufficient time for the individual to acquire self-efficacy, better reflecting the intervention's impact on EC.
- Adults at risk of type 2 diabetes [77], aiming to investigate the associations of changes in EC with changes in lifestyle, anthropometrics, and biomarkers of glucose and lipid metabolism: EC was associated with an increase in diet quality and high-density lipoprotein cholesterol and with decreases in BMI and waist circumference; EC could be a potential target in lifestyle interventions to improve the cardiometabolic health of people at type 2 diabetes risk.
Eating Competence and Health
EC has been evaluated in several countries and is associated with health indicators, such as food consumption, maintenance of body weight, and the occurrence of diseases, as well as with other health-related factors, such as sleep quality, physical activity, stress management, and behaviors linked to eating disorders.
Eating Competence and Diet Quality
Despite not focusing on quantities or specific nutrients, the EC behavioral model is associated with diet quality [7]. Diet quality refers to the degree of adequacy of a dietary pattern compared to recommendations for healthy eating. Such recommendations define minimum parameters so that the diet provides all the nutrients necessary to promote and maintain health [56,78]. Among the dietary patterns explored in studies on EC, the consumption of FV is highlighted, following the recommendations for healthy eating [3,5]. The intake of FV is considered adequate when the usual intake is a minimum of five servings a day, totaling 400 g/day [79].
Positive associations between EC, food acceptability, and FV consumption have been described. For example, an American study with a convenience sample of 863 adults compared ecSI scores with responses to other instruments investigating aspects of eating behavior, food acceptability, FV consumption, and sociodemographic data [20]. Two of the instruments were related to food consumption and diet quality: a food preference survey (an alternative to food frequency surveys, with a list of 62 food items judged on a scale ranging from "dislike extremely" to "like extremely", with separate choices for "never tried" and "would not try") and a fruit and vegetable stage-of-change algorithm (which measures the stages of change for FV intake, namely pre-contemplation, contemplation, preparation, action, and maintenance, through responses indicating current intake and the intention to modify it). Regarding food consumption, individuals in the pre-contemplative stages of increased FV consumption had lower ecSI scores than those already in the action and maintenance phases. Individuals with higher ecSI scores also showed greater food acceptability and fewer dietary restrictions than those with lower EC [20]. The positive association between EC, food acceptability, and FV consumption was also confirmed in subsequent studies with low-income women [9,10,72]. EC also showed a positive association with diet quality in another study with American women (n = 149; 18-50 years). Through telephone interviews, the researchers collected three days of 24 h dietary recall and applied the ecSI on the third day. Women classified as competent eaters (ecSI scores ≥ 32) had higher intakes of fiber and vitamins A, E, C, and B complex, as well as magnesium, zinc, iron, and potassium. This study also grouped the women according to dietary patterns.
The Prudent pattern, defined by the consumption of nutritious foods such as FV, and low-fat dairy products, was more prevalent among women classified as competent eaters. On the other hand, the Western pattern, associated with fatty, salty, and sugary foods, was observed more among women with lower scores on the ecSI [8,9].
A study in Brazil that looked at the relationship between EC and food intake and health outcomes among adults (n = 1810; 75% females) found that FV ingestion was strongly related to overall EC and its components [62]. The findings show that EC is linked to higher consumption of FV, which is related to improved health and protection against overweight [62].
Lohse et al. [12] investigated the relationship between EC and food consumption in 638 elderly individuals at cardiovascular risk participating in the Spanish clinical trial Prevención con Dieta Mediterránea (PREDIMED). Compared with those with lower EC, those with higher EC ate more fruit and fish, consumed fewer dairy products, and adhered more consistently to the Mediterranean diet [12]. In Finland, a recent cross-sectional study with 3147 adults (18 to 74 years old) at high risk for type 2 diabetes investigated whether EC is associated with lifestyle and metabolic risk factors for type 2 diabetes. The study showed that eating-competent individuals (with a score ≥ 32 on the ecSI2.0™) had better diet quality, measured with a validated 18-item food intake questionnaire [76].
Among Brazilian adults (n = 1810), the consumption of artificial juice (fruit-flavored soft drinks not made from fresh fruit) or soda was found to be inversely related to EC [62]. Regular soda drinking is associated with a decreased intake of fruits and fiber, as well as a higher intake of junk foods and meals with a higher glycemic index [79]. Sugary drinks are also linked to increased energy consumption, a higher BMI, and a higher risk of medical complications [79].
EC is also associated with parental food-related actions that are favorable and mediate healthful food habits in young children, such as self-efficacy to serve FV and FV availability at home. In the USA, a study with parents of 4th-grade children (n = 339; 78% Hispanic) found that eating-competent parents demonstrated more modeling, greater self-efficacy/outcome expectancies, greater in-home FV availability, and a higher frequency of eating breakfast and dinner with their children [17]. This research model was replicated in 2019 by Lohse et al. with a population of mostly white, non-Hispanic parents of 4th graders (n = 424; 94% white) and confirmed that the availability of FV continued to be greater in the homes of parents with higher EC, with results maintained even after adjustment for educational level [80].
Tylka et al. [18] examined intuitive eating and EC as predictors of feeding practices (restriction, monitoring, pressure to eat, and dividing feeding responsibilities with the child). They found that mothers who allowed themselves unrestricted eating were less likely to inhibit their children's food consumption, and mothers who usually ate for physical (instead of emotional) reasons and had contextual eating skills (e.g., mindful eating, planning regular and nutritious meals) were less likely to restrict their children's food intake [18]. In the behavioral aspect, parents with higher EC had more dietary practices associated with the prevention of childhood obesity than parents classified as non-competent eaters [80].
In Finland, a study with adolescents that measured EC (using the ecSI translated into Finnish) found that EC was linked with a higher regularity of meals, a higher frequency of FV consumption, and more familiar healthy eating patterns [74].
Eating Competence and Risk Factors for Overweight and Non-Communicable Chronic Diseases (NCDs)
Overweight and obesity are important risk factors for developing NCDs, so the control and maintenance of an adequate weight have been recommended as a health goal. Lohse et al. [20] observed the relationship between EC and BMI in the ecSI validation study with 863 healthy adults aged between 18 and 71 years. On that occasion, individuals in the competent-eater group reported a lower percentage of BMI ≥ 25 compared to those in the non-competent-eater group [20]. This association was also found in another study with low-income North American women, in which a lower ecSI score was related to a higher BMI [9]. A sample of the adult population in Brazil yielded similar results (n = 1810; 75% females), with high educational levels and high income, where eating-competent individuals showed lower BMI than non-eating-competent ones [50].
The study with a sub-sample of 638 elderly participants at cardiovascular risk in the Spanish clinical trial Prevención con Dieta Mediterránea (PREDIMED) showed that individuals classified as competent eaters had lower BMI, higher high-density lipoprotein (HDL) cholesterol, lower low-density lipoprotein (LDL) cholesterol, and lower fasting glucose; participants with higher EC, despite reporting higher caloric intake, had lower BMI [12]. The association between EC and cardiovascular risk biomarkers was also documented in a smaller sample (n = 48) of men and women between 21 and 70 years with dyslipidemia: subjects classified as non-competent eaters had considerably higher levels of triglycerides and LDL than the competent eaters [11]. In a Finnish study with participants in the StopDia (Stop Diabetes) survey, EC was linked to a lower rate of type 2 diabetes, visceral obesity, metabolic syndrome, and hypertriglyceridemia, and to greater insulin sensitivity [76]. These findings support the hypothesis that developing skills that increase EC may be a strategy to help control body weight, prevent metabolic syndrome and cardiovascular disease, and, in the long term, help prevent type 2 diabetes [76].
Subsequently, another Finnish study monitored 2291 individuals at high risk for type 2 diabetes to examine links between changes in EC and changes in lifestyle, anthropometry, and glucose and lipid metabolism biomarkers [77]. During the intervention, participants were divided into three groups: (group i) received guidance to change their lifestyle through digital means; (group ii) received a lifestyle intervention based on group encounters, associated with digital guidance; and (group iii), the control, received written instructions about changing their diet and lifestyle. The baseline EC total score was 29.7 ± 7, with no difference among the study groups. The EC total score increased among participants independent of the intervention type, by 0.4 in the digital group (group i), 0.5 in the combined digital and group-based group (group ii), and 0.7 in the control. Altogether, 40% of the participants were classified as competent eaters at the beginning, and 43% after one year (without differences among groups). Independent of initial EC, an improvement in EC was linked to enhanced HDL levels and a reduction in BMI and waist circumference. Among the components of EC, contextual skills, food acceptability, and eating attitude were associated with several of these changes, suggesting that EC may be a potential goal for lifestyle interventions to improve the health of people at risk for type 2 diabetes [77].
Regarding the management of body weight, recent findings show that in weight-loss interventions, EC scores can decrease at the beginning of treatment (up to four months), probably due to the concern with a restrictive diet. However, when the diet is accompanied by educational approaches focusing on behavior change and increased physical activity, EC increases in the long term (12 months) [75].
Eating Competence and Health-Related Aspects
Additional health-related aspects, including sleep quality and physical exercise, also show some association with EC. For example, individuals with higher EC tend to perceive themselves as being physically more active [14,20], and the relationship between low EC and low levels of physical activity is reported among low-income women [9].
Regarding sleep quality, a study with young university students found that overweight and obesity were linked to poor sleep quality and low EC, with results maintained after adjustments for the sociodemographic variables of the sample [73], suggesting that obesity prevention interventions for college students should include education components to improve EC and emphasize the importance of sleep quality [73]. Another study with university students evaluated the association between the number of sleep hours and EC, comparing students who slept eight hours or more per night with those who slept less than eight hours. The results show that those who slept less had poorer eating habits, weaker internal food regulation, and more binge eating behaviors [15].
Conclusions
The rising prevalence of diseases linked to food and nutrition highlights the need to widen one's perspective on food and its impact on health and well-being, considering not only nutrients and food combinations but also the behavioral dimensions of eating practices. Making food choices is a common and expected part of daily life, and it is an essential factor in everyone's life [1]. Recent reports published by FAO and WHO [3,4] include healthy nutritional recommendations that take into account attitudes and behaviors that are in accordance with the behavioral model proposed by Satter [7]. In addition, researchers in the area of eating behavior emphasize the need to better understand attitudes toward food and eating in the general public by employing validated instruments to achieve this [44]. Considering that EC has been associated with diet quality and health outcomes, developing skills that increase EC might be a strategy to improve nutritional health and prevent obesity and other chronic diseases. The complexity of food choices has been examined on many fronts, including the social, behavioral, and biological sciences, representing a great challenge for applying unique and simple theoretical models [1]. Multiple perspectives are suggested, as no single theory can clearly explain food decision making. Further studies are necessary to evaluate eating competence among different population groups and identify factors that might affect it, to stimulate policies and actions to improve EC among population groups.
Elder abuse prevalence in community settings: a systematic review and meta-analysis
BACKGROUND
Elder abuse is recognised worldwide as a serious problem, yet quantitative syntheses of prevalence studies are rare. We aimed to quantify and understand prevalence variation at the global and regional levels.
METHODS
For this systematic review and meta-analysis, we searched 14 databases, including PubMed, PsycINFO, CINAHL, EMBASE, and MEDLINE, using a comprehensive search strategy to identify elder abuse prevalence studies in the community published from inception to June 26, 2015. Studies reporting estimates of past-year abuse prevalence in adults aged 60 years or older were included in the analyses. Subgroup analysis and meta-regression were used to explore heterogeneity, with study quality assessed with the risk of bias tool. The study protocol has been registered with PROSPERO, number CRD42015029197.
FINDINGS
Of the 38 544 studies initially identified, 52 were eligible for inclusion. These studies were geographically diverse (28 countries). The pooled prevalence rate for overall elder abuse was 15·7% (95% CI 12·8-19·3). The pooled prevalence estimate was 11·6% (8·1-16·3) for psychological abuse, 6·8% (5·0-9·2) for financial abuse, 4·2% (2·1-8·1) for neglect, 2·6% (1·6-4·4) for physical abuse, and 0·9% (0·6-1·4) for sexual abuse. Meta-analysis of studies that included overall abuse revealed heterogeneity. Significant associations were found between overall prevalence estimates and sample size, income classification, and method of data collection, but not with gender.
INTERPRETATION
Although robust prevalence studies are sparse in low-income and middle-income countries, elder abuse seems to affect one in six older adults worldwide, which is roughly 141 million people. Nonetheless, elder abuse is a neglected global public health priority, especially compared with other types of violence.
FUNDING
Social Sciences and Humanities Research Council of Canada and the WHO Department of Ageing and Life Course.
Introduction
Elder abuse is a serious human rights violation that requires urgent action. 1 It is also a major public health problem that results in serious health consequences for the victims, including increased risk of morbidity, mortality, institutionalisation, and hospital admission, and has a negative effect on families and society at large. [2][3][4] Despite the severity of its consequences, major gaps remain in estimating the prevalence of elder abuse.
Understanding the magnitude of elder abuse is a crucial first step in the public health approach to prevent this type of violence. 5 However, the lack of consensus in defining and measuring elder abuse and its major subtypes (psychological, physical, sexual, and financial abuse and neglect) has resulted in wide variations in reported prevalence rates. For example, national estimates of past-year abuse prevalence ranged from 2·6% in the UK 6 and 4% in Canada 7 to 18·4% in Israel 8 and 29·3% in Spain. 9 To date, only a handful of studies have synthesised results of elder abuse prevalence studies, and few have done so quantitatively. Cooper and colleagues' 10 global estimate is one in 17, or 6%, in the past month. This estimate was based on individual studies selected as best evidence. Dong's systematic review 11 reported estimates ranging from 2·2% to 79·7% and covered five continents, with large geographic variations that might stem from cultural, social, or methodological differences. Given the large number of prevalence studies published over the past decade and the absence of global quantitative estimates of the prevalence of elder abuse, we believed it was an opportune time for a full systematic review and quantitative analysis of elder abuse prevalence.
To address the need for more accurate global and regional estimates of elder abuse prevalence, we did a systematic review and meta-analysis of existing elder abuse prevalence studies from around the world. We aimed to understand the wide variations in prevalence estimates by investigating the influence of studies' demographic and methodological characteristics.
Search strategy and selection criteria
In this systematic review and meta-analysis, we used a comprehensive four-step search strategy to identify relevant studies. No language restrictions were placed on the searches or search results. The study conforms to the Preferred Reporting Items for Systematic reviews and Meta-Analysis (PRISMA) guidelines. A detailed description of the method has been previously reported and is available upon request. 12 The research is part of a larger systematic review; however, the present study focused on self-reported prevalence studies on elder abuse within community settings. Forthcoming publications will focus on prevalence of abuse in institutional settings as well as studies using service-based data.
Second, reference lists of publications retrieved in the first step were screened for relevant studies. Third, we searched additional web-based platforms including specialised journals, Google searches for grey literature, and the WHO Global Health Library for scientific literature published in developing countries. Finally, after all the screening and reviewing of studies had been completed, we consulted 26 experts in the field by email, representing each of the six WHO regions (ie, African, Americas, South-East Asia, European, Eastern Mediterranean, and Western Pacific) to provide further review to identify any studies that were missing up to Dec 18, 2015. Articles were independently screened in two stages: screening of titles and abstracts followed by the retrieval and screening of full-text articles by two reviewers using the eligibility criteria described below. If several publications reported on a single study, the publication that provided the most data was selected for further synthesis. Inter-rater reliability was analysed using the Statistical Package for the Social Sciences (SPSS Statistics 21). This analysis showed high levels of agreement between the reviewers (κ 0·86-0·96). Disagreements were resolved through discussion, or with the help of a third reviewer. Inclusion criteria were community-based samples that provided estimates of abuse prevalence at a national or subnational level (eg, states or provinces, counties, districts, and large cities [except in the USA, where states are the smallest unit, due to a large number of prevalence studies]) and inclusion of participants that were aged 60 years and older, in line with the UN definition of older people. 13 We excluded studies that were reviews, conference proceedings, or used qualitative methods only; studies that focused exclusively on self-neglect or homicide; and studies that concentrated only on institutional abuse or on specific subpopulations.
Research in context
Evidence before this study
We did a thorough search of the scientific literature before initiating this study to detect any existing systematic reviews or prevalence studies; furthermore, we used the systematic review done for this study, as detailed above, to ensure that no studies had been missed. Although no meta-analyses existed before this study, one systematic review emerged in the scientific literature after the initiation of this study that found a global aggregate elder abuse prevalence rate of 14·3% (95% CI 7·6-21·1).
Added value of this study
Our study is the first of its kind to use meta-analysis to quantify prevalence estimates derived from a comprehensive search strategy that included additional search for studies that are not commonly found in academic sources.
Implications of all the available evidence
The dearth of elder abuse prevalence studies from low-income and middle-income countries and from southeast Asia and Africa, despite our comprehensive search strategy, suggests a need for further research to better understand elder abuse in these areas of the world. However, high rates of abuse globally suggest that increased attention to the issue of elder abuse is warranted, including investment in development and assessment of elder abuse interventions to help reduce the spread and effect of elder abuse.
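The κ agreement statistic reported above can be computed as Cohen's kappa. A minimal sketch with hypothetical include/exclude screening decisions (the actual reviewer data are not available, and the analysis in the study was run in SPSS):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # expected agreement if the two raters decided independently
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical decisions for ten abstracts
a = ["inc", "exc", "exc", "inc", "exc", "exc", "inc", "exc", "exc", "exc"]
b = ["inc", "exc", "exc", "inc", "exc", "inc", "inc", "exc", "exc", "exc"]
print(round(cohens_kappa(a, b), 3))  # -> 0.783
```

Raw agreement here is 9/10, but kappa (0.783) is lower because both raters exclude most abstracts, so some agreement is expected by chance alone.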
Data extraction and quality assessment
Data were extracted by two reviewers (YY, CRM): YY extracted data from the publications and CRM cross-checked for accuracy. Three main categories of data were extracted: characteristics of the samples, methodological characteristics of each study, and prevalence estimates of elder abuse and its subtypes. The data extraction tables were pilot tested and refined before extraction. The study quality was assessed as part of the data extraction strategy by two reviewers with the standardised Risk of Bias Tool (panel 1) 14 designed to assess population-based prevalence studies. To assess the risk of bias, reviewers rated each of the ten items into dichotomous ratings: low risk and high risk. An overall score was calculated by adding all the items rated as low risk. Thus, higher scores indicated lower risk of bias and stronger method quality.
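The overall quality score is simply a count of items rated low risk. A minimal sketch with hypothetical ratings; note that the cut-off separating "good" from "fair-to-poor" studies is not stated in the text and is assumed here purely for illustration:

```python
# Hypothetical ratings for one study's ten risk-of-bias items:
# True means the item was rated "low risk".
ratings = [True, True, False, True, True, True, False, True, True, True]
score = sum(ratings)  # overall score: number of low-risk items (0-10)

# Assumed threshold, not taken from the Risk of Bias Tool itself.
quality = "good" if score >= 8 else "fair-to-poor"
print(score, quality)  # -> 8 good
```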
Data analysis
Meta-analysis was done to synthesise the prevalence estimate for elder abuse and its subtypes. The decision to do a meta-analysis was made a posteriori to ensure that sufficient studies with similar characteristics (eg, same prevalence period and population) were available for meta-analysis. Prevalence rates were calculated from raw proportions or percentages reported in the selected studies. The investigators were contacted for those studies in which raw data were missing or unclear. All analyses were done using Comprehensive Meta-Analysis software (CMA version 3.9). 15 Variances of raw proportions or percentages were pooled based on a random-effects model. 16 We calculated the pooled estimates and the 95% CIs in studies and considered non-overlapping CIs as an indication of statistically significant differences. To determine the extent of variation between the studies, we did heterogeneity tests with Higgins' I² statistic to measure the proportion of the observed variance that reflects true effect sizes. 16 We followed Duval and Tweedie's Trim and Fill method to visually inspect the funnel plots and assess both the degree of publication bias and its effect on the study findings. 16,17 We used their method of removing extreme outliers (ie, small studies) from the funnel plot and re-computing the effect size to correct for publication bias. 17 Subgroup analyses were done to investigate the sources of heterogeneity, using bivariate comparisons and meta-regression.
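The random-effects pooling and the Higgins' I² heterogeneity statistic described above can be sketched in generic form. This is not the authors' CMA workflow: the sketch below is a standard DerSimonian-Laird computation on logit-transformed proportions, and the study counts are hypothetical:

```python
import math

def pool_proportions(events, totals):
    """DerSimonian-Laird random-effects pooling of logit-transformed
    proportions, with Higgins' I^2 as the heterogeneity measure."""
    y = [math.log(e / (n - e)) for e, n in zip(events, totals)]  # logits
    v = [1 / e + 1 / (n - e) for e, n in zip(events, totals)]    # variances
    w = [1 / vi for vi in v]                                     # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, y))      # Cochran's Q
    df = len(y) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)           # between-study variance
    i2 = max(0.0, (q - df) / q) * 100       # Higgins' I^2 in percent
    w_re = [1 / (vi + tau2) for vi in v]    # random-effects weights
    mu = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    back = lambda x: 100 / (1 + math.exp(-x))  # logit -> percentage
    return back(mu), (back(mu - 1.96 * se), back(mu + 1.96 * se)), i2

# Hypothetical counts of past-year abuse cases per study
pct, ci, i2 = pool_proportions([30, 120, 45], [400, 600, 900])
print(round(pct, 1), [round(x, 1) for x in ci], round(i2, 1))
```

Because the between-study variance τ² inflates every study's variance equally, the random-effects weights are more uniform than the fixed-effect weights, and the CI widens to reflect heterogeneity, mirroring the wide intervals reported in the article.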
These analyses tested individual associations between the pooled estimates and several covariates: WHO regions (recoded as Americas, Asia, Europe, and others); income classification of each country (according to the World Bank classification, recoded into high vs middle-income and low-income countries); method of data collection (face-to-face vs all others); sampling procedure (random vs convenience sampling); research quality (recorded as good vs fair-to-poor); and sample size (coded as high, medium, and low tertiles, using the 33rd and 67th percentile scores). Significant and relevant covariates were entered into a multivariate meta-regression model. This study is registered with PROSPERO, number CRD42015029197.
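At its core, a meta-regression of this kind is weighted least squares with inverse-variance weights. A minimal single-covariate sketch with hypothetical study-level data (the actual analysis used several covariates and was fitted in CMA):

```python
# Hypothetical study-level data: logit prevalence (y), its variance (v),
# and one dummy covariate x (1 = high-income country, 0 = otherwise).
y = [-1.2, -1.9, -2.4, -1.5, -2.8, -2.1]
v = [0.04, 0.06, 0.03, 0.08, 0.05, 0.04]
x = [0, 1, 1, 0, 1, 1]
w = [1 / vi for vi in v]  # inverse-variance weights

# Weighted means of the covariate and the outcome
xb = sum(wi * xi for wi, xi in zip(w, x)) / sum(w)
yb = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

# Weighted least-squares slope: change in logit prevalence per unit of x
slope = (sum(wi * (xi - xb) * (yi - yb) for wi, xi, yi in zip(w, x, y))
         / sum(wi * (xi - xb) ** 2 for wi, xi in zip(w, x)))
print(round(slope, 2))  # negative: high-income studies report lower prevalence
```

With these made-up numbers the slope is negative, matching the direction reported in the article (lower prevalence estimates in high-income countries).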
Role of the funding source
The Social Sciences and Humanities Research Council of Canada (SSHRC) funded the corresponding author's time spent on this project and the WHO Department of Ageing and Life Course funded additional data extraction efforts. Neither the SSHRC nor the WHO Department of Ageing and Life Course had any role in study design, data collection, data analysis, data interpretation, or writing of the report. The corresponding author had full access to all the data in the study and had final responsibility for the decision to submit for publication.
Results
Of the 38 544 studies, 415 potentially relevant full-text articles were independently reviewed. From these, we identified 234 studies that provided data on abuse prevalence. Among these, seven studies examined elder abuse prevalence in people with dementia, 14 provided prevalence data for any abuse that had occurred since the victims became older adults (ie, aged 60 or 65 years and older), ten focused on subpopulations (eg, older women and ethnic minorities), 32 were incidence-based and service-based, 84 did not report the prevalence period or provided prevalence periods ranging from the past month to the past 5 years, and 35 were duplicates in that they used the same datasets as other studies (figure 1). To avoid bias in data synthesis, we grouped studies with the same prevalence period for meta-analysis. After excluding ineligible studies, 52 studies provided past-year prevalence data for abuse and were thus included in the meta-analysis. Panel 2 summarises the key outcome measures based on the definitions provided by WHO 1 and the US Centers for Disease Control and Prevention. 18 The 52 studies selected for meta-analysis were geographically diverse and included 28 countries, with five studies from the WHO region of the western Pacific, five from the southeast Asia region, 15 from the region of the Americas, 25 from the European region, and two from the eastern Mediterranean region. Studies also came from countries across the World Bank income classification: five studies from lower-middle-income countries, 13 from upper-middle-income countries, and 34 from high-income countries. Moreover, 40 studies were based on random samples and the remaining 12 were convenience samples. Most studies (38) used face-to-face interviews to collect data, eight studies used self-administered questionnaires, and six used telephone interviews. The quality of each study was assessed.
A maximum quality score of 10 was achieved in 16 of the 52 studies; 35 studies were scored as good quality and 17 studies were scored as fair-to-poor (table 1).
Prevalence rates for overall elder abuse were reported in 44 studies that included 59 203 individuals. Overall elder abuse consisted of any combination of abuse subtypes as reported in the studies. The combined prevalence for overall abuse in the past year was 15·7% (95% CI 12·8-19·3; figure 2). Visual inspection of the funnel plot showed no evidence of publication bias (data not shown). The set of studies was heterogeneous for overall abuse (Q[43]=4532·02, p<0·0001), suggesting differences in the effect sizes exist within this set of studies. Higgins' I² showed that 99% of the variance comes from a source other than sampling error. The sources of the variation were investigated with bivariate analyses. Sample size was significantly associated with elder abuse prevalence (ie, high, medium, and low; Q[2]=18·96, p<0·0001). Two further covariates had p values below 0·10: income classification (ie, high-income vs middle-income or low-income countries) and method of data collection. Sample size, income classification, and method of data collection were entered into the meta-regression, which yielded a significant model (F[4]=3·34, p=0·0191) that explained 26% of the variance. We found that, compared with studies with high sample size, studies with medium and low sample sizes had significantly higher prevalence estimates (18·2% and 18·1% vs 7·2%; T[36]=2·70, p=0·0101 and T[36]=2·51, p=0·0164, respectively). Studies using random sampling and those done in high-income countries had lower prevalence estimates in the meta-regression model, although differences for these variables were not independently statistically significant.
Of the 44 studies that reported overall abuse, 32 provided gender breakdown, with women representing 19 756 of 34 886 individuals. There was no gender difference in prevalence estimates (Q[1]=3·07, p=0·0799). Additional analyses were done to examine bivariate gender differences within several subgroups, revealing no significant differences. The global and WHO regional prevalence estimates for abuse in women and men are shown in figure 3.
Discussion
Using meta-analytical methods, we pooled the prevalence estimates of elder abuse reported in 52 publications published between 2002 and 2015. The global prevalence of elder abuse was 15·7%, or about one in six older adults. Given the approximate 2015 population estimates of 901 million people aged 60 years and older, 53 this rate amounts to 141 million victims of elder abuse annually. Prevalence estimates for abuse subtypes were highest for psychological abuse, followed by financial abuse, neglect, physical abuse, and sexual abuse. There was significant heterogeneity in the studies; 26% of the variance could be explained by sample size, income classification, and method of data collection. We found that studies with smaller sample sizes have higher prevalence estimates.
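The 141 million figure is straightforward arithmetic on the pooled rate and the 2015 population estimate:

```python
older_2015 = 901_000_000  # people aged >= 60 in 2015 (UN estimate cited above)
prevalence = 0.157        # pooled past-year prevalence of overall abuse
victims = older_2015 * prevalence
print(f"{victims / 1e6:.0f} million victims per year")  # -> 141 million victims per year
```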
Few systematic reviews on the global prevalence of elder abuse exist, and none have used meta-analysis to synthesise global prevalence estimates. For the first time, this study provides methodologically rigorous global and regional estimates of elder abuse. Almost one in six older adults experienced abuse in the past year. This estimate is similar to the estimate from a recent systematic review by Pillemer and colleagues, 54 which found a global aggregate of 14·3% (95% CI 7·6-21·1). This figure was calculated based on 18 well conducted and large-scale population studies from 20 countries: 17 from high-income countries, two from upper-middle-income countries, and one from a lower-middle-income country. Our estimate of 15·7% was calculated based on 44 studies that came from a broad range of research quality and sample sizes. The convergence between these two global estimates, from two independently conducted systematic reviews, lends them credibility. The present study also reveals considerable regional variations. Dong did a small-scale systematic review of prevalence studies and grouped estimates by continents, 11 including Asia with a range from 14% in India 23 to 36·2% in China, 30 Europe with a range from 2·2% in Ireland 39 to 61·1% in Croatia, 28 and the Americas with a range from 10% in the USA 52 to 79·7% in Peru. 45 Like Dong, 11 our findings provided insights into geographical differences in prevalence estimates, with Asia at 20·2%, Europe at 15·4%, and the Americas at 11·7%.
There are few analyses of how studies' characteristics influence abuse prevalence, and none in the area of elder abuse. Meta-analytical research on childhood sexual abuse suggested that studies using random sampling, compared with convenience sampling, as well as those with larger sample sizes, rather than smaller ones, were more likely to produce lower prevalence estimates. 55,56 The present study's meta-regression found that these two variables and income classification explained 26% of the variance in elder abuse prevalence. Large sample sizes, random sampling, and high-income countries were associated with lower prevalence estimates, although only sample size differences were independently statistically significant. As such, the methodological characteristics of this sample had effects in similar directions to those seen in published work on childhood sexual abuse.
Despite several additional analyses, our research found no significant difference in prevalence between older women and older men. Few studies have examined gender differences in elder abuse; those that did found mixed results, with some identifying disparate rates across genders. 57 Yet in studies of intimate partner violence, gender symmetry is reported, supported by both systematic review 58 and meta-analysis. 59 Although much research on abuse has used gender roles and masculinity as a predictor for violent behaviour, emerging evidence has shown a weak association between gender roles and abuse. 60 This evidence is further supported by similar rates of intimate partner violence emerging among same-sex and heterosexual couples. 60 However, most of this scientific literature comes from high-income countries and if more studies from low-income and middle-income countries were available, the finding of gender symmetry might not hold. Nonetheless, our findings contribute to this growing evidence for gender symmetry in abuse victimisation.
There are many strengths in this systematic review and meta-analysis. Our study is the first of its kind to use meta-analysis to quantify prevalence estimates derived from a comprehensive search strategy that included additional searches for studies that are not commonly found in academic sources. We also communicated with 26 experts to identify relevant articles. This study is also the first to include non-English language articles in a systematic review. We have extracted data from 47 non-English articles; the ten included in the analysis were written in Spanish, Portuguese, Chinese, German, and Farsi. Our study is the only study on elder abuse to explore the sources of heterogeneity. The wide confidence intervals found in our study as well as Pillemer and colleagues' study 54 show the importance of further research in this area to identify further sources of this large variance.
Our model (which included country income classification, whether the study used a random or convenience sample, and the size of the sample) left 74% of the variance unaccounted for. Factors that might explain this large proportion of variance, particularly between WHO regional estimates, might include country-specific or culture-specific social norms that govern family dynamics and expectations and methodological characteristics that we were unable to include. These methodological factors might include varying definitions of elder abuse as well as the use of standardised or non-standardised instruments to assess and measure abuse.
Despite the strengths of our study, there are several limitations that can be addressed with future research. Although our comprehensive search strategy has identified many relevant studies, the majority of the studies included in the meta-analysis were from high-income countries. Prevalence studies are sparse or absent for many regions of the world, particularly in southeast Asia and Africa, which seem to have higher rates of abuse than developed countries. 11,31,61 More prevalence studies in low-income and middle-income countries are needed, particularly within these regions. These prevalence studies should use similar methods to allow for comparisons across countries.
Although many attempts have been made to contact the authors of selected studies, crucial data on definitions and measurements were still missing. This information is important for further methodological analyses that could examine how different definitions, measurements, and study periods affect prevalence estimates. For instance, although our findings are consistent with existing studies showing higher prevalence for psychological and financial abuse compared with other subtypes, there are challenges in defining and measuring psychological and financial abuse. Moreover, although our systematic review identified 234 studies on prevalence, the meta-analysis only focused on abuse occurring in the past year. It is possible that death of a victim can affect past-year prevalence; future research could compare and examine abuse estimates by using different study periods (eg, past month or lifetime), focusing on national or subnational studies, or examining prevalence variations within each WHO region. Additional research could explore the effect of country-specific or culture-specific social norms on prevalence estimates by including additional normative variables (eg, filial piety and existence of elder caregiving policies). The present study, focusing on older adults in general, found lower prevalence estimates than did studies that examined abuse in people in other age groups with disabilities. 62,63 Future research might also benefit from examining elder abuse prevalence in older adults with physical and cognitive disabilities, particularly given the widespread cognitive declines often seen in the oldest elders. Research in these areas would provide the basis for developing effective strategies to prevent and respond to abuse.
Elder abuse, despite affecting almost one in six (more than 140 million) older people, has not achieved the same public health priority as other forms of violence. None of the 169 targets of the UN's recently adopted 17 Sustainable Development Goals explicitly addresses violence against older people. By contrast, target 5.2 aims to eliminate all forms of violence against women and target 16.2 aims to end violence against children. 64 If the proportion of elder abuse victims remains constant, the number of victims will increase rapidly due to population ageing, 53 growing to 330 million victims by 2050. The findings of this study strengthen the case for global action to expand efforts for preventing and supporting victims of abuse. Considering the serious health consequences, the health sector has an important role to prevent, raise awareness of, and provide evidence-based guidance for health-care practitioners to respond to elder abuse, particularly on psychological and financial abuse, which are more prevalent. Yet, few evidence-based interventions exist at present. [65][66][67] Investment in developing and assessing elder abuse interventions must be a public health priority to help to reduce the effect of elder abuse worldwide.
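The 330 million projection follows the same arithmetic if the prevalence proportion is held constant, as the text assumes. The 2050 population figure below is an assumption on our part, based on the UN projection of roughly 2.1 billion people aged 60 years and older by 2050:

```python
older_2050 = 2_100_000_000  # assumed UN projection: ~2.1 billion aged >= 60 by 2050
prevalence = 0.157          # pooled rate, held constant per the text's assumption
victims_2050 = older_2050 * prevalence
print(round(victims_2050 / 1e6))  # -> 330
```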
Contributors
YY, CRM, ZDG, and KHW designed the study. All authors oversaw its implementation. YY and CRM coordinated and did all review activities, including searches, study selection (including inclusion and exclusion of abstracts), data extraction, and quality assessment. YY, CRM, ZDG, and KHW planned the analyses and YY did the meta-analyses and meta-regressions. YY wrote the initial draft and YY, CRM, ZDG, and KHW contributed writing to subsequent versions of the manuscript. All authors reviewed the study findings and read and approved the final version before submission.
Declaration of interests
We declare no competing interests.
Can beliefs about musculoskeletal pain and work be changed at the national level? Prospective evaluation of the Danish national Job & Body campaign
Scand J Work Environ Health. Objectives This study evaluates the Danish national Job & Body campaign on beliefs about musculoskeletal pain and work. Methods Initiated in 2011, a national campaign in Denmark targeted public-sector employees with a mixture of networking activities, workplace visits, and a mass media outreach with topics related to job and body (eg, musculoskeletal pain, movement and work) and creating balance between demands at work and physical capacity. At baseline (2011) and at four time points until the end of 2014, random cross-sectional samples of ≥1000 representative public-sector employees (total N=5012) replied to eight questions concerning beliefs about musculoskeletal pain and work. Changes over time were modelled using general linear models (averaged for all questions, 0-100 points, where 0 is completely negative and 100 completely positive) and logistic regression analyses (for the single questions) controlling for age, gender and a number of work-related factors. Results At the last follow-up in 2014, 17.3% of public-sector employees were familiar with the campaign. Beliefs about musculoskeletal pain and work were 3.4 points (95% CI 2.4-4.3) higher than at baseline. For the single questions, 4 out of 8 showed improved odds for more positive beliefs [odds ratios (OR) of 1.28-1.89]. Conclusion During follow-up of the national campaign, beliefs about musculoskeletal pain and work were more positive among public-sector employees in Denmark. Due to the time-wise mixture of several campaign activities, the isolated effect of each component could not be disentangled. Whether changes in health occurred remains unknown.
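The odds ratios reported for the single questions are exponentiated logistic regression coefficients. A quick sketch with a hypothetical coefficient chosen to match the low end of the reported OR range:

```python
import math

# An odds ratio is exp(beta) for a logistic regression coefficient beta.
# The value below is hypothetical, not taken from the study's models.
beta = 0.247
print(round(math.exp(beta), 2))  # -> 1.28, the low end of the reported range
```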
A workplace intervention involving mindfulness found improvements in work-related fear-avoidance beliefs among laboratory technicians (20). A prospective study in a factory setting showed that distribution of an educational psychosocial pamphlet improved beliefs about pain control and consequences of low-back pain (21). A recent randomized controlled trial (RCT) among public-sector employees found reduced sickness absence due to low-back pain - in spite of unchanged low-back pain intensity - from group-based reassuring information and a non-threatening explanation for low-back pain to alter beliefs about back pain and activity (22). Hence, beliefs related to musculoskeletal pain, movement and work are modifiable and can lead to altered sickness behavior (22). However, the translation from research to practice - especially when going from a workplace study to the society level - can be challenging. Thus, several obstacles such as lack of reach, uptake, and sustainability in the population are seen across several scientific disciplines (23)(24)(25).
National campaigns are a method to reach a large proportion of the population. During the late 90s in the state of Victoria in Australia, Buchbinder and coworkers evaluated a media campaign concerning beliefs about back pain in the population and found both short- and long-term positive effects (26,27). In Scotland, Waddell and co-workers evaluated the "Working Back Scotland" multimedia campaign and also found both short- and long-term effects of the campaign on back beliefs but not on sickness absence (28). In Norway, Werner and co-workers evaluated the "Active Back" media campaign targeted mainly towards the general public and health professionals, and found positive effects on back beliefs but not sickness behavior (29). Finally, in the province of Alberta, Canada, Gross and co-workers evaluated the media campaign "Don't Take it Lying Down", drawing on experiences from the campaigns in Australia and Scotland. In contrast to the previous campaigns, the Canadian campaign did not influence back beliefs (30). Therefore, evaluation of such campaigns in different contexts and countries is needed before solid recommendations can be provided.
During the first decade of the new millennium, it was the impression that many people in Denmark still had negative beliefs about pain, movement, and work. The opportunity to perform the national Job & Body campaign in Denmark came with the collective agreement "Kvalitetsreformen 2007" to improve the quality of the public sector. The Danish Working Environment Information Centre developed the campaign in close collaboration with researchers from the National Research Centre for the Working Environment. The campaign targeted prevention of (i) risk factors for musculoskeletal pain at the workplace, (ii) consequences for workers with musculoskeletal pain, and (iii) long-term sickness absence from work. In relation to this, the campaign highlighted five main messages: (i) stay physically active even in periods of musculoskeletal pain, (ii) prevention -and not only rehabilitation of musculoskeletal pain -is useful, (iii) perform physical exercise, (iv) create a good balance between demands of the job and the capacity of the body, and (v) physical wellbeing is a shared responsibility and should be managed together at the workplace. These messages were detailed, explained, discussed and exemplified in different ways. An overview of the campaign is summarized in a 3-minute YouTube video (31). The primary sources of knowledge for the campaign were the book The Back Pain Revolution (32); Danish reviews about sickness absence, return to work and risk factors related to physically demanding work (33,34); previous campaigns from other countries (27)(28)(29)(30); and best available knowledge and experience from Danish researchers in the field of musculoskeletal pain and work. 
An important part of the change theory of the campaign was to alter beliefs and behavior in relation to musculoskeletal pain and work through new and additional understanding of prevention in Denmark and thereby complement the more traditional approach relying on biomechanical risk factors such as heavy lifting. This was inspired by a biopsychosocial understanding of pain, ie, acknowledging that pain is multifactorial in origin and consequently that a single element would be insufficient to prevent musculoskeletal pain and its consequences effectively.
Because most of the accessible information at that time was written by and for researchers, the key facts were converted to easy-to-understand data, good advice, and a number of simple messages made accessible at the campaign website jobogkrop.dk. The content also presented "best practice" examples from different Danish workplaces having good experience with initiatives and practical tools to prevent and manage musculoskeletal disorders and its consequences. In this way, the campaign aimed to stimulate daily dialogues between coworkers, leaders, and health and safety representatives at the workplace to positively change beliefs and behavior related to musculoskeletal pain and work.
The aim of the present study was to evaluate the Danish Job & Body campaign on beliefs about musculoskeletal pain and work. Because different time-wise effects may occur, and several activities were initiated at different time-points, follow-ups were performed three times within the first one-and-a-half years (short-term) and after three years (long-term).
Study design and respondents
The Danish national Job & Body campaign was initiated in 2011 and ran until 2015. The last follow-up measurement for the present study was obtained in the second half of 2014. Thus, we were able to evaluate the first three years of the campaign. We used a prospective design with representative random cross-sectional samples of public-sector employees drawn at different time points throughout the campaign. The public-sector employees were drawn from "Epinions panel of Denmark" consisting of more than 200 000 Danes (35). Data collection was performed as web surveys, where people from the panel were invited with a web link to participate in one of the ongoing Epinion surveys. When a respondent clicks the link, s/he is directed towards the survey where there are still missing respondents within each category of gender, age, region and sector to reach the target number of representative respondents for that survey. Beforehand, the respondents do not know to which survey they will be directed. This method ensures the randomness of the sample representing gender, age, region, and sector. The inclusion criteria to be invited were (i) public-sector employee, (ii) currently employed, and (iii) age 18-70 years. Invitations continued until the number of responses reached 1000. Because some respondents had an ongoing questionnaire session when 1000 responses were reached, the actual number of respondents typically slightly exceeded 1000.
The Danish national Job & Body campaign
There were four broad elements of activities: (i) networking with employers' associations and trade union organizations, (ii) comprehensive theme sessions with a specialized communication team, (iii) a campaign tour, and (iv) a mass media campaign. In addition, there were a number of tools and materials to facilitate behavioral changes at the workplaces.
Campaign activities
Networking. A cornerstone of the campaign was to use existing networks to obtain a wide reach of the messages and make the local workplaces the center of action. In Denmark, the ministries, regions, and municipalities are the main point of entrance to public-sector workplaces. To maximize reach, strategies for dissemination of the campaign were planned together with and spread through employers' associations and trade union organizations as an intensive networking campaign aiming to reach as many public-sector workplaces in Denmark as possible through the 21 ministries, 5 regions, and 98 municipalities - comprising close to 900 000 public-sector employees. Thus, the campaign used these networks rather than trying to establish contact with each individual workplace. Contact persons for the campaign were established in 19 of the 21 ministries, all 5 regions and all 98 municipalities. Using these networks, the campaign made contact with the existing and well-established structures at the local workplaces, ie, the health and safety organizations as well as the management, and thereby also the local health and safety representatives. In this manner, the target group was made aware of the campaign through several channels, ie, at their workplace as well as through the employers' associations and trade union organizations. To maximize the chance of workplaces using the campaign material and messages, an important principle was that the campaign should be adaptable to the needs of each workplace. For example, the workplaces could participate whenever it best suited their plans and ongoing activities.
Communication team. The Danish Working Environment Information Centre had a specialized communication team who performed more comprehensive theme sessions - typically 2-3 hours, but sometimes the entire working day - for 9000 public-sector employees across 169 workplaces during the campaign period. The workplaces could contact the campaign team and order the visit. The theme sessions of the communication team were dialog-based, targeted towards taking action at the workplace, and included all levels of the workplace, ie, leaders, employees as well as the health and safety organization. On a separate day before the theme session, a preliminary meeting was held with the respective workplace. During the preliminary meeting, a participatory approach to the theme sessions was emphasized - both in relation to involvement in the theme session as well as to the actions to take place afterwards. The actual theme sessions concerned three main points: (i) dissemination of knowledge, (ii) dissemination of good-practice examples, and (iii) hands-on activities working with methods and practical tools. The sessions typically started with a presentation about the knowledge base of the selected theme and afterwards there were questions from the audience, dialog, and work in smaller groups. The communication team facilitated the process of the dialogs and group work. The purpose of this process was to maximize the relevance of the theme session for the workplace. The sessions typically concluded with physical activities (eg, elastic band exercises) combined with knowledge of the health benefits of physical exercise. After the theme session, the PowerPoint presentation as well as the materials produced during the session (eg, posters and agreement forms) were delivered to the workplace to facilitate the subsequent process of working with the chosen theme.
Campaign tour. A campaign tour was also performed, consisting of visiting larger public workplaces all over Denmark. The workplaces could contact the campaign team and order the visit. While workplaces could order the campaign tour as well as a visit from the communication team, both rarely occurred at the same workplace. Two associates travelling in a campaign bus performed a total of five campaign trips over the course of the tour. In the first and second half of 2012, the campaign bus visited 52 and 66 public sector workplaces, respectively. In the first half of 2013, the campaign bus visited 27 public sector workplaces. The activities consisted of delivering campaign materials, performing workshops, and inspiring local workplace involvement and implementation of the campaign. Typically the workplace visits were followed up by local PR about the campaign.
Mass media campaign. Finally, the campaign was supplemented by a mass media campaign to stimulate an even wider reach of the messages. In 2013, the first large mass media campaign was launched during weeks 10-16 and followed up with a large media campaign during weeks 38-40. During 2014, the mass media campaign was followed up again, although not as extensively as in 2013. First, this was targeted at media and magazines from the relevant employers' associations and trade union organizations. Later, this was expanded to also include TV spots and online advertising. As part of this, web banners were included on major Danish news sites such as politiken.dk and ads were included on Facebook. The banners and ads delivered the campaign messages and provided a link to the campaign webpage (36). The Facebook page of the campaign (Job&krop) is still active with more than 18 000 followers as of February 2017. The content of the campaign website (www.jobogkrop.dk) has been transferred to the main website of The Danish Working Environment Information Centre (www.arbejdsmiljoviden.dk) where it will continue to be available for free.
Materials and practical tools of the campaign
A total of nine materials and tools (figure 1) were developed as part of the campaign to facilitate behavioral changes at the workplaces: (i) the campaign site www.jobogkrop.dk acting as the main point of information, (ii) a dialog folder for the leaders and the health and safety organizations to facilitate joint identification of challenges and solutions, (iii) five short movies with the main messages of the campaign to put prevention of musculoskeletal pain on the agenda, (iv) a pocketbook for employees with easy-to-understand information, illustrations and explanations of physical exercises, and good advice, (v) elastic resistance bands and posters with illustrations of exercises for the neck, shoulder and back, (vi) posters for the workplace in different sizes, notepads, pens and small boxes of candy - all with the main messages of the campaign, (vii) a smartphone app with weekly updates of "good advice", videos with physical exercises, and 1-minute movies with good advice from experts in the field (researchers and occupational physicians), (viii) an easy-to-use survey with six questions for workplaces to perform a quick evaluation of physical wellbeing at the workplace in order to stimulate the dialog about possible areas of action, and (ix) a catalog with inspiration and good advice to plan, initiate and maintain actions at the workplace to enhance physical wellbeing.
Budget of the campaign
The expenses during the campaign were 24 million Dkr (3.2 million Euros) for external consultants, announcements, ads, and materials. In addition, the in-house usage of working time for the Danish Working Environment Information Centre amounted to 3 full person-years per year of the campaign from 2011 to 2015, ie, a total of 15 person-years.
Outcome variable
The outcome variable - beliefs about musculoskeletal pain and work - was inspired by the Back Beliefs Questionnaire (37). Because the campaign concerned musculoskeletal pain and work in general, and was not limited to back pain only, the statements were phrased to fit the contents of the campaign and only two of the eight specifically mentioned back pain or back problems. The eight statements were: (i) Pain in muscles and joints should be prevented and actively dealt with together at the workplace, (ii) When having pain in muscles and joints, it is generally important to keep physically active, (iii) When having pain in muscles and joints, one should rest until the pain is gone, (iv) When having pain in muscles and joints, one should try to live as normally as possible, (v) Almost all people are affected on a regular basis by pain in muscles and joints, but it is rarely dangerous, (vi) One can almost always go to work even if having back pain, (vii) Work is the reason that one gets back problems, and (viii) If there is an imbalance between the physical demands of the work and our physical capacity, the risk of getting pain in muscles and joints increases. Respondents replied to the statements on a 5-point Likert scale of "completely agree", "agree", "neither agree nor disagree", "disagree", and "completely disagree".
Control variables
From the register of "Epinions panel of Denmark" the following control variables were included: age (continuous), gender (woman, man), region of Denmark (North, Central, Southern Denmark, Zealand, Capital), sector of employment (municipality, region, state), occupational sector (industry, construction, transport, trading, social and healthcare, teaching and research, office and administration, agriculture and food, public service, other), and job position (employee, leader, other). In addition, we asked whether the respondents had a representative role in relation to health and safety at the workplace (yes/no) by asking leaders whether they were members of the health and safety organization, the MED (medindflydelse og medbestemmelse, ie, co-influence and co-determination) committee or the cooperation organization, and employees whether they had a position as health & safety representative, MED representative, or shop steward.
Statistics
All statistical analyses were performed in SAS version 9.4 (SAS Institute, Cary, NC, USA). For the main analysis, the 8 responses about beliefs were normalized and averaged on a 0-100 scale where 100 equals completely agree for questions 1, 2, 4, 5, 6 and 8, and completely disagree for questions 3 and 7. The main analysis was the change over time in beliefs, ie, from 2011 to 2014 (including all five time points), modelled using general linear models (Proc GLM) with year 2011 as reference value. The analysis was controlled for the variables mentioned above. Estimates are reported as least square means and 95% confidence intervals (CI) for each time point and differences of least square means and 95% CI for differences between time points. In an additional analysis, using the same statistical procedure and control variables, we tested the difference in beliefs at follow-up in 2014 between those who knew the campaign and those who did not.
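The score construction described above can be sketched as follows. This is a minimal Python illustration, not the authors' SAS code, and it assumes a 1-5 Likert coding where 1 = completely agree and 5 = completely disagree:

```python
# Minimal sketch (not the authors' SAS code) of the belief-score construction:
# eight 5-point Likert items are normalized to 0-100, items 3 and 7 are
# reverse-scored, and the mean over items gives the overall belief score.

POSITIVE_ITEMS = {1, 2, 4, 5, 6, 8}   # "completely agree" scores 100
REVERSED_ITEMS = {3, 7}               # "completely disagree" scores 100

def item_score(item_no: int, likert: int) -> float:
    """Map a 1-5 Likert response (1 = completely agree ... 5 = completely
    disagree) onto 0-100, reversing the direction for items 3 and 7."""
    score = (5 - likert) / 4 * 100          # 1 -> 100, 5 -> 0
    return 100 - score if item_no in REVERSED_ITEMS else score

def belief_score(responses: dict) -> float:
    """Average the eight normalized item scores (0-100 scale)."""
    return sum(item_score(i, r) for i, r in responses.items()) / len(responses)

# A respondent who 'completely agrees' with every statement scores 100 on the
# six positive items but 0 on the two reversed ones:
resp = {i: 1 for i in range(1, 9)}
print(belief_score(resp))  # prints 75.0
```

Reverse-scoring items 3 and 7 keeps the direction consistent, so that 100 always represents a fully positive belief, matching the 0-100 scale reported in the paper.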
For exploratory analyses of each of the 8 questions, the scale was dichotomized to "agree" (response options completely agree and agree) or "not agree" (the remaining response options) for questions 1, 2, 4, 5, 6 and 8, and to "disagree" (response options completely disagree and disagree) and "not disagree" (the remaining response options) for questions 3 and 7. The change in the odds for agreeing or disagreeing, respectively, from before the campaign in 2011 to the end of the campaign in 2014 was modelled using binary logistic regression (Proc LOGISTIC) with year 2011 as reference. Thus, these analyses used only the baseline (2011) and follow-up data (2014), and were also controlled for the variables mentioned above. Estimates are reported as odds ratios (OR) and 95% CI for agreeing (numbers 1, 2, 4, 5, 6, and 8) or disagreeing (numbers 3 and 7) with the beliefs statements.
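As a hedged sketch (again not the authors' SAS code), the dichotomization and the unadjusted odds ratio that the adjusted logistic regression refines could look like this, assuming the same 1-5 Likert coding as above:

```python
# Sketch of the dichotomization rule and the unadjusted odds ratio comparing
# follow-up (2014) with baseline (2011); the paper's Proc LOGISTIC models
# additionally adjust for age, gender and work-related factors.

def dichotomize(item_no: int, likert: int) -> int:
    """1 if the response counts as a 'positive belief': agree/completely agree
    (Likert 1-2) for items 1, 2, 4, 5, 6, 8; disagree/completely disagree
    (Likert 4-5) for the reversed items 3 and 7."""
    if item_no in (3, 7):
        return int(likert >= 4)
    return int(likert <= 2)

def odds_ratio(pos_follow: int, n_follow: int, pos_base: int, n_base: int) -> float:
    """Unadjusted OR for a positive belief at follow-up vs the baseline reference."""
    odds_follow = pos_follow / (n_follow - pos_follow)
    odds_base = pos_base / (n_base - pos_base)
    return odds_follow / odds_base

# Illustrative (made-up) counts: 600/1000 positive at baseline vs 700/1000 at
# follow-up gives OR = (700/300) / (600/400):
print(round(odds_ratio(700, 1000, 600, 1000), 2))  # prints 1.56
```

The counts in the usage example are hypothetical; they only illustrate how an OR in the reported 1.28-1.89 range arises from shifted agreement proportions.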
Effect sizes (Cohen's d) were calculated as the change score divided by the pooled standard deviation (38). For reference, Cohen's d of 0.20, 0.50 and 0.80 corresponds to small, moderate, and large effect sizes, respectively.

Results

Table 1 shows descriptive characteristics of the study population. Because of missing values for some of the information, the total number is not equal for all variables. The public-sector employees were on average 46.9 (SD 11.2) years old and more than two thirds were women. All five regions of Denmark were represented, with the majority (31.8%) from the Capital Region. All three parts of the public sector were represented, with the majority (55.0%) from the municipality. Furthermore, all relevant occupational sectors were represented, with the majority from social and healthcare (36.1%), teaching and research (19.5%), office and administration (19.1%) and public service (16.4%). Thus, the sample largely reflected public-sector employees in Denmark. The respondents at baseline in 2011 and follow-up in 2014 were largely comparable.

Table 2 shows the results of the exploratory analyses of the single questions. Out of the 8 questions, 4 showed improved odds for more positive beliefs at the last follow-up in 2014 compared with baseline in 2011, with OR ranging from 1.28 to 1.89. The cumulative number of visits to the campaign website is shown in the figure. For the other materials in the campaign (figure 1), the movies were watched 34 000 times on YouTube, 15 000 dialogue folders were distributed, 125 000 pocketbooks were distributed, 25 000 elastic bands and 40 000 posters with exercises were distributed, 90 000 campaign posters, 40 000 pens and 60 000 notepads were distributed, the campaign app was downloaded 5000 times, the survey was ordered 248 times in digital and 1000 times in analogue format, and 4000 samples of the inspiration catalog were delivered.
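The effect-size calculation described under Statistics can be sketched in a few lines. The standard deviation below is hypothetical (the paper reports only the 3.4-point change in beliefs, not the SDs), chosen simply to illustrate how a small-to-moderate d arises:

```python
import math

# Sketch of the effect-size calculation: Cohen's d is the change score
# divided by the pooled standard deviation of the two samples (ref. 38).

def pooled_sd(sd1: float, n1: int, sd2: float, n2: int) -> float:
    """Pooled standard deviation of two independent samples."""
    return math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))

def cohens_d(mean_follow: float, mean_base: float,
             sd_follow: float, n_follow: int,
             sd_base: float, n_base: int) -> float:
    """Change score divided by the pooled SD."""
    return (mean_follow - mean_base) / pooled_sd(sd_follow, n_follow,
                                                 sd_base, n_base)

# With the reported change of 3.4 points and a hypothetical pooled SD of 13,
# d = 3.4 / 13, ie, a small-to-moderate effect by the 0.20/0.50/0.80 rule:
print(round(cohens_d(68.4, 65.0, 13.0, 1000, 13.0, 1000), 2))  # prints 0.26
```

The means and sample sizes in the example are likewise illustrative, not values taken from the paper.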
Knowledge of campaign
At the last follow-up in 2014, 17.3% of the respondents replied "yes" to the question about whether they knew the Job & Body campaign. Those who knew the campaign scored 3.7 (95% CI 1.7-5.7) points higher on beliefs about musculoskeletal pain and work than those who didn't know the campaign. Among those who knew the Job & Body campaign, 56% stated that the campaign had provided them with relevant research-based knowledge about prevention and management of musculoskeletal pain, 60% stated that the campaign had provided them with methods and practical tools to prevent and manage musculoskeletal pain, 37% stated that their workplace had initiated activities because of the campaign, and 49% stated that the campaign had led to dialogues with other colleagues about prevention and management of musculoskeletal pain. Corresponding percentages among those having a representative role in relation to health and safety at the workplace were 24% (knowledge of campaign), 58% (provided relevant knowledge), 70% (provided methods and practical tools), 50% (initiated workplace activities), and 70% (campaign led to dialogs with colleagues).
Discussion
At the last follow-up of the Danish national Job & Body campaign, beliefs about musculoskeletal pain and work were more positive among public-sector employees in Denmark. Those who were familiar with the campaign at follow-up in 2014 scored higher on beliefs than those who didn't know the campaign. Due to the time-wise mixture of several campaign activities, the isolated effect of each component cannot be disentangled. Whether changes in health occurred remains unknown.

The campaign was inspired by a biopsychosocial understanding of pain, ie, acknowledging that pain is multifactorial in origin and consequently that a single element would be insufficient to effectively prevent musculoskeletal pain and its consequences. For example, (i) the biological components of the campaign included biomechanical factors such as preventing unnecessary heavy occupational lifting and creating a better balance between the physical work demands and the physical capacity of the individual; (ii) the psychological components included factors such as staying active in spite of pain, relating to psychological phenomena such as "fear avoidance"; and (iii) the social components included factors such as dealing with the problems together at the workplace, involving the individual, group, organization, and management. Although the first of these - biomechanical factors - may also be viewed as an "injury model" approach, our point of view is that biomechanical factors are a natural part of a biopsychosocial approach. For example, staying physically active during periods of pain, but avoiding unnecessary heavy lifting, is not contradictory to nor in conflict with a biopsychosocial approach including biological, psychological and social components.
Based on effect size calculations, the overall effect of the Job & Body campaign on beliefs about musculoskeletal pain and work was small to moderate. In previous campaigns, effects have ranged from none to moderate (27)(28)(29)(30). While single workplace interventions or clinical treatment of back patients often show larger effect sizes than campaigns, the present results are quite positive considering that they represent all public-sector employees of Denmark. However, whether the small to moderate change in beliefs about musculoskeletal pain and work leads to measurable changes in health or health behaviors is unknown. Buchbinder and co-workers have argued that campaigns are necessary to make changes in beliefs and attitudes in the society for highly prevalent conditions, eg, in the case of back, neck and shoulder pain (26,27). By contrast, single efforts for preventing musculoskeletal pain and its consequences are unlikely to have the necessary reach to make a broad impact in society in terms of knowledge and beliefs. Because musculoskeletal pain is widespread across job groups, age and gender, campaigns may be the most practical tool to create changes in beliefs at the society level. However, in spite of positive results for pain beliefs in three out of four campaigns from other countries as well as from the Danish campaign, the overall impression is that such campaigns do not affect sickness absence or sickness behavior (27)(28)(29)(30). Although not evaluated in the Danish Job & Body campaign, changes in sickness-related outcomes at the national level likely require greater efforts than campaigns alone. Due to the complex nature of sickness absence behavior, the design of the present study does not allow for any type of valid evaluation of such outcomes.
Exploratory analyses of the single questions provided further information in the evaluation of the campaign. For the single questions, 4 out of 8 showed improved odds for more positive beliefs about musculoskeletal pain and work. The belief that musculoskeletal pain should be prevented and dealt with at the workplace was improved most with an OR of 1.89 (question 1) and was one of the main messages of the campaign. The questions concerning musculoskeletal pain and physical activity (questions 2, 3, and 4) were also improved. These questions were also central to the main messages of the campaign. By contrast, the questions that concerned back pain and work (numbers 6 and 7) - rather than musculoskeletal pain and physical activity in general - were not improved with the campaign. Only one third of the public-sector employees disagreed that work is the reason that one gets back problems. Although prospective epidemiological studies show that work can indeed lead to back pain, eg, heavy lifting or work with bent back (12,13), psychosocial factors at work, lifestyle, genetics etc, also influence the risk for back pain. Thus, from a scientific point of view it is difficult to argue that work is the reason for back pain, but rather that it is one of many reasons. The low prevalence of public-sector employees disagreeing with that specific question may reflect some deeply rooted beliefs about back pain and work in the population that are difficult to move at the society level. By contrast, the campaign by Buchbinder and co-workers succeeded in improving back beliefs based on the standardized Back Beliefs Questionnaire. The reason that we did not find improvements concerning the back-specific questions may be that our campaign had a broader focus about musculoskeletal pain, physical activity and work, whereas the campaign by Buchbinder and co-workers had a more specific focus on the back.
This suggests a certain level of specificity of the outcome in relation to campaign messages. For the last two questions, the prevalence of agreement with statements 5 and 8 was quite high at baseline (76% and 81%, respectively) and did not improve further, which may indicate a ceiling effect for these types of questions or that the campaign messages concerning these aspects did not get adequately through.
We also evaluated the campaign's reach. At the end of 2014, the proportion of public-sector employees indicating that they knew the Job & Body campaign was 17.3%. Awareness among the target population in the previous campaigns - although evaluated using different methods and at different time points of the campaigns - has ranged from 39-86% (27)(28)(29)(30), with the highest level in the Australian campaign. In the Danish Job & Body campaign, messages were spread in different ways and through different channels. Consequently, it is likely that the campaign messages reached a higher proportion than those indicating that they knew the specific Job & Body campaign. Thus, one may get the messages without consciously connecting this to or knowing the specific campaign. Nevertheless, that only 17.3% indicated that they knew the campaign may also reflect that we live in a society with exposure to huge amounts of information - so-called "information overload" - with concomitant difficulties in harvesting the most relevant parts. This phenomenon is, for example, well-known in healthcare (39). Those who indicated that they knew the Job & Body campaign reported that the information was relevant and that behavioral changes had occurred at the workplace level. More than a third stated that their workplace had initiated activities because of the campaign, and about half stated that the campaign had led to discussions with other colleagues about prevention and management of musculoskeletal pain. This further supports that even without knowing the Job & Body campaign, the messages could have been spread through daily dialogs with colleagues who knew the campaign and through activities at the workplace initiated due to the campaign.
The high cumulative number of visits to the website of the campaign - almost 800 000 at the end of 2014, in a country with about 5.7 million inhabitants of which about 900 000 are public-sector employees - also indicates that relatively many people were exposed to the website part of the campaign. Because the website is in Danish, it is unlikely that people from other countries have markedly influenced the number of visits. Although people may visit the website on several different occasions, the cumulative number of visits indicates that the campaign reached a high number of people.
Because different time-wise effects may occur, and several activities were initiated at different time points, we performed both short- and long-term follow-up measurements. The change in beliefs became significant only at the end of 2014, ie, three years after commencement of the campaign. The cumulative number of website visits may be a good proxy measure for the overall reach of the campaign. The number of website visits was lowest in 2011, ie, the year when the website was finalized, which is not surprising considering that the mass media campaign had not begun. The number of visits became slightly higher in 2012 when the first wave of workplace visits had taken place. A marked boost in website visits was observed in the first half of 2013, ie, during the period with the first mass media campaign. In relation to this, beliefs about musculoskeletal pain and work only became significantly more positive at the last follow-up in 2014, which indicates that long-term campaigns targeting several channels - networking activities, workplace visits and mass media campaigns - are necessary to make changes at the national level. The results also indicate that it may take some years from the launch of a campaign until the messages are spread in the society and take effect. However, due to the mixture of several campaign activities, the isolated effect of each component cannot be disentangled. The timing of delivery of each component makes it difficult to know whether the observed changes were caused by the long-term efforts since 2011 or by the mass media campaign delivered during the later phase. Leavy and co-workers performed a systematic review of mass media campaigns for increasing physical activity in the population, which highlighted the importance of raising awareness of the campaign in the target group as a first step, eg, through mass media communication (40). Instead, we chose to raise awareness through existing networks.
In hindsight, an initial mass media campaign in 2011 in addition to the networking strategy might have provided even more positive results.
While the present campaign was ongoing, Gross and co-workers published an important paper summarizing the results and experiences from the previous campaigns (41). Through a workshop with leading experts in the field and a literature review, recommendations for future campaigns were put forward. A key message was that legislative and health policy changes should go together with public education for massive societal changes to occur. The most successful of the previous campaigns, ie, the Australian, used both downstream approaches - consisting of efforts to influence individual behavior - and upstream approaches - consisting of efforts to influence the behavior of governments and health policy-makers (26,27). Gross and co-workers call this combined down- and upstream approach "social marketing" (41). Our campaign had a strong involvement of the employers' associations and trade union organizations in the initial phase and in this way made contact with the existing and well-established structures at the local workplaces, ie, the health and safety organizations as well as the management, and thereby also the local health and safety representatives. Thus, the campaign was not only targeted to individual employees, but also to the entire structure and network around the employee, and thereby contained, to a certain degree, both down- and upstream elements.
The changes in beliefs about musculoskeletal pain and work in response to the Job & Body campaign should be considered in the light of the costs. The networking activities involving employers' associations and trade union organizations to reach all public sector workplaces in Denmark were comprehensive. Thus, the cost of the campaign was €3.2 million for external consultants, announcements, ads and materials, plus an additional 15 person-years of in-house employee time. However, rather than performing solely a mass media campaign, which would have been less costly, we expected that the networking activities would increase reach and that the workplace visits would maximize relevance for individual workplaces. Furthermore, the cost of the campaign should be seen in the light of the high cost of musculoskeletal disorders in Denmark, eg, the annual cost in Denmark for disability pensions alone is about Dkr40 billion (~€5.3 billion), of which more than 20% of the cases are due to musculoskeletal disorders.
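The cost figures above can be put into rough perspective with a few lines of arithmetic. This is only an illustrative sketch restating the numbers quoted in the text (the Dkr-to-euro conversion of ~€5.3 billion is taken from the text itself, and the 20% share is treated as exact for simplicity):

```python
# Sketch: comparing the campaign cost with the annual musculoskeletal
# disorder (MSD) related disability pension cost quoted in the text.
# All figures are taken from the text; the calculation is illustrative.

annual_disability_pension_eur = 5.3e9   # ~EUR 5.3 billion per year (Dkr40 billion)
msd_share = 0.20                        # >20% of cases due to MSD (lower bound)
campaign_cost_eur = 3.2e6               # external costs of the campaign

# Lower-bound estimate of yearly MSD-attributable disability pension cost
annual_msd_cost = annual_disability_pension_eur * msd_share

print(f"Annual MSD-attributable cost: EUR {annual_msd_cost:,.0f}")
print(f"Campaign cost as share of one year's MSD cost: "
      f"{campaign_cost_eur / annual_msd_cost:.2%}")
```

On these figures the one-off external campaign cost corresponds to well under one percent of a single year's MSD-attributable disability pension cost.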
This study has both strengths and limitations. More than 20% of the respondents had a role in relation to health and safety at the workplace, suggesting that there may be a selection bias, ie, those working with health and safety at the workplace may have been more interested in participating. However, this proportion was quite similar between baseline and follow-up. This was not an RCT but an evaluation of a real-world, large-scale national campaign. Because a cornerstone of the campaign was to utilize existing networks that are already closely connected across Denmark, it was decided not to randomize different parts of Denmark into campaign and no-campaign groups. Even with a cluster-randomized design, eg, randomizing different parts of Denmark to the campaign, marked contamination between groups would likely have occurred. In contrast to tightly controlled RCT designs, a number of uncontrolled variables exist in national campaigns. If simultaneous changes had occurred in the general working environment, health or lifestyle in Denmark, these could have influenced the results. However, national surveillance data have shown that only minor changes, in both positive and negative directions, in the working environment, health and lifestyle of the working population occurred during the period 2012-2014 (42). Altogether, general changes during this period are unlikely to explain the present findings of more positive beliefs about musculoskeletal pain and work among public-sector employees. Nevertheless, without a randomized design we cannot be certain about the causality of associations. A strength of the study is that it can be considered a real-world experiment. Although we did not have information on non-respondents, the study populations at baseline and follow-up were quite comparable, and the analyses were controlled for a number of factors that may influence the belief scores.
Furthermore, the results point in the same direction as three of the four previous campaigns with similar topics (26-30). Instead of a cohort study, we chose to use a prospective design with random cross-sectional samples at different time points. The strength of this design is that the respondents were not affected by having replied to the same questionnaire previously. Together with the previous four campaigns from other countries, this strengthens the overall validity and generalizability of intensive, long-term national campaigns as a means to influence beliefs about musculoskeletal pain and work in the population.
Concluding remarks
In conclusion, beliefs about musculoskeletal pain and work were more positive among public-sector employees in Denmark at the end of the Job & Body campaign. Due to the mixture of several campaign activities, the isolated effect of each component cannot be disentangled. Furthermore, the timing of delivery of each component makes it difficult to know whether the observed changes were caused by the long-term efforts since 2011 or by the mass media campaign delivered during the later phase. Finally, the effect size of changes in beliefs was small to moderate, and whether this resulted in altered health remains unknown.
The Natural Antibiotic Resistances of the Enterobacteriaceae Rahnella and Ewingella
Introduction
The antibiotic resistance genes present in clinical isolates are usually acquired and located on mobile elements allowing their horizontal transfer to other strains or even across bacterial species. Consequently, resistance genes with 100% sequence identity may be found in otherwise unrelated genera while the occurrence of such an acquired resistance within a certain species is highly variable.
In contrast, a number of bacteria are naturally resistant to some antibiotics. The molecular basis of natural resistance may be a general factor such as the lack of the targeted pathway, a variant of the targeted molecule that is not inhibited by the antibiotic, or a membrane limiting entry of the antibiotic into the cell. In addition, natural resistance may be mediated by a resistance gene belonging to the cell's core genes. Such resistance genes are vertically inherited, shared by (nearly) all isolates of a species and co-evolve with their hosts. They are often encoded by the chromosome, are usually immobile, and their expression level is tightly regulated or very low. The establishment of such a resistance requires a long-lasting, usually mild selection pressure, as may be present in the soil, which contains many microorganisms producing antibiotics. Examples of this type of natural resistance are the chromosomally encoded β-lactamases found in several species of the Enterobacteriaceae (Naas et al., 2008), many of them colonising plants and soil.
Although these environmental microorganisms pose a low risk to human health, concerns about the spread of their antibiotic resistance genes to pathogens have arisen. Their resistance genes are usually non-mobile, but inclusion into mobile genetic elements may allow spread to unrelated bacteria. In the last two decades the CTX-M type enzymes have become the most prevalent extended-spectrum β-lactamases (ESBLs) in pathogenic Enterobacteriaceae (Canton & Coque, 2006). The CTX-M enzymes are believed to originate from the chromosomal β-lactamases of Kluyvera ascorbata and Kluyvera georgiana (Olson et al., 2005; Rodriguez et al., 2004). The inclusion of these genes in integrons located on large conjugative plasmids has likely facilitated their spread among the Enterobacteriaceae. Such plasmids frequently contain multiple resistance genes, which might have further enhanced the spread of the CTX-M genes in microbial communities by co-selection (Canton & Coque, 2006). Once established in pathogens, the spectrum of the resistance genes may be increased by point mutations, further impeding treatment of infections with antibiotics. Thus, natural resistance, the conditions favouring transfer of resistance genes to pathogens and the underlying molecular mechanisms are important areas of research.
Rahnella and Ewingella, two closely related genera of the Enterobacteriaceae, are naturally resistant to several β-lactam antibiotics. Rahnella is widespread in nature and routinely present in the daily human diet, but Ewingella may also be present at high titres in some kinds of food. Both microorganisms have been infrequently isolated from clinical specimens. Here the biology, natural habitats, clinical significance and antibiotic susceptibility patterns of Ewingella and Rahnella will be addressed. Novel results about their resistance genes will be presented, and the evolution of these genes and the potential for their transfer to other bacteria will be discussed.
Biology, clinical significance and antibiotic resistances of Rahnella and Ewingella
In 1976, a new group of Enterobacteriaceae was defined during a numerical taxonomy study and provisionally named 'group H2' (Gavini et al., 1976). Based on DNA relatedness studies this group was later proposed as a new species, Rahnella aquatilis (Izard et al., 1979). In the following years strains belonging to this novel genus were infrequently isolated from water and clinical specimens, and Rahnella was thought to be a rare microorganism (Farmer et al., 1985) until it was found to be frequent in plant and soil specimens. Ewingella was likewise recognised as a separate group of the Enterobacteriaceae in a phenotypical study, which was subsequently confirmed by DNA-DNA hybridisation experiments (Grimont et al., 1983). Based on current reports Ewingella is believed to be a rare member of the Enterobacteriaceae (Brenner & Farmer, 2005), but some studies indicate that it might be common in some ecological niches. Investigations of clinical isolates revealed that Rahnella and Ewingella are resistant to several antibiotics, mainly β-lactams. The susceptibility patterns suggested the presence of an extended-spectrum Ambler class A β-lactamase (ESBL) in Rahnella (Stock et al., 2000), which was confirmed by cloning and sequencing of the resistance gene (Bellais et al., 2001). The susceptibility pattern and detection of the enzyme by SDS-PAGE/nitrocefin staining suggested an Ambler class C β-lactamase (AmpC) for Ewingella (Stock et al., 2003). Here we report for the first time a DNA sequence-based phylogenetic analysis confirming that the Ewingella β-lactamase belongs to the AmpC class.
Biology, habitat and possible applications of Rahnella and Ewingella
The genus Rahnella comprises three genomospecies, Rahnella aquatilis (= genomospecies 1), Rahnella genomospecies 2 and Rahnella genomospecies 3 (Brenner et al., 1998), while the genus Ewingella consists of only one species: Ewingella americana. Based on phenotypical tests, two biogroups of Ewingella americana have been defined, which show differences in L-rhamnose and D-xylose fermentation (Grimont et al., 1983). Strains belonging to Rahnella and Ewingella have no special nutritional requirements and can use a number of carbon sources. They are able to grow in the temperature range from close to 0°C to approximately 40°C, although many strains show a reduced biochemical activity at elevated temperatures (Brenner & Farmer, 2005; Brenner et al., 1998; Davis & Eyles, 1992; Jensen et al., 2001; McNeil et al., 1987).
www.intechopen.com
Ewingella has also been isolated from vegetables (Hamilton-Miller & Shah, 2001) and vacuum-packaged meat (Brightwell et al., 2007), but seems to be significantly less frequent than Rahnella in such samples. In contrast, Ewingella is very common on mushrooms including button mushroom, shiitake and oyster mushroom (Reyes et al., 2004). Importantly, Ewingella is the causative agent of a browning disorder of button mushroom called 'internal stipe necrosis' (Inglis & Peberdy, 1996), which causes significant economic loss. In addition, Ewingella has been isolated from molluscs (Müller et al., 1995). Clinical specimens testing positive for Ewingella were mainly blood and swabs from the respiratory tract and wounds.
Rahnella and Ewingella have some interesting properties for agronomic and industrial applications. Both seem to promote plant growth, and Rahnella may be useful as an antagonist for controlling plant pathogens including Erwinia amylovora, causing fire blight of pear and apple trees (Laux et al., 2002), and Xanthomonas campestris, the causative agent of black rot (El-Hendawy et al., 2005). In addition, Rahnella might improve the supply of plants with nutrients like phosphate (Kim et al., 1997), and it is able to fix nitrogen (Heulin et al., 1994). The polysaccharides levan and lactan produced by different strains of Rahnella have interesting properties for industrial processes (Kim et al., 2003; Matsuyama et al., 1999; Pintado et al., 1999; Seo et al., 2002). The high uranium(VI) resistance of Rahnella and its ability to bind this toxic heavy metal are currently under intensive investigation, and its potential for bioremediation is being studied (Beazley et al., 2007; Geissler et al., 2009; Martinez et al., 2007). Because of this increasing interest, a project for sequencing the Rahnella genome was launched and recently completed. The sequence of the environmental strain Rahnella aquatilis Y9602 is available from the GenBank database (www.ncbi.nlm.nih.gov) under accession number NC_015061.
Clinical significance
Rahnella and Ewingella are only occasionally isolated from clinical specimens and the clinical significance of both microorganisms is still under debate. Both are believed to be opportunistic pathogens. The pathogenic potential of Rahnella seems to be relatively low while a few fatal outcomes of infections caused by Ewingella have been reported.
Antibiotic Resistant Bacteria -A Continuous Challenge in the New Millennium 80
Clinical significance of Rahnella
Several reports describe the isolation of Rahnella in a clinical context (Table 1). However, in some cases the clinical significance is difficult to assess, particularly because many patients had underlying conditions including haematologic and solid organ malignancy, diabetes and AIDS or had undergone surgery. The age of the patients ranged from 11 months to 78 years, and a male predominance, although statistically insignificant, has been recognised among them (Gaitán & Bronze, 2010). Typical sites of isolation were blood, wounds and urine. Interestingly, a significant number of patients developed symptoms during hospitalisation, suggesting nosocomial infections.
The first description of Rahnella in a clinical context dates back to 1985, where it was isolated from a burn wound (Farmer et al., 1985). In another case Rahnella was isolated from a surgical wound that had persisted for more than eight months and had repeatedly tested negative for bacteria before a purulent exudate appeared. At that time pure cultures of Rahnella could be isolated from the wound exudate (Maraki et al., 1994). Since Rahnella is easy to cultivate and previous efforts to detect bacteria in the wound were negative, it seems most likely that the wound was infected shortly before the exudate appeared, for instance during the daily wound cleansing procedure. In a further case Rahnella was isolated from a diabetes mellitus-associated foot wound. Although the infection responded well to treatment with ampicillin-sulbactam, the toe and the second digit of the foot had to be amputated because of severe necrosis. This course of disease is among the most severe described for an infection with Rahnella. However, the ulceration of the wound had begun two months before any medical treatment was started, and a co-infection with Candida sp. was diagnosed.
While, in a clinical context, Rahnella was first isolated from a wound swab, its most frequent site of isolation was blood. Rahnella bacteraemia was associated with fever and, in two cases, with septic shock (Chang et al., 1999; Gaitán & Bronze, 2010). Most patients developed Rahnella bacteraemia during hospitalisation (9 of 15 cases), and venous catheters, surgery and drug abuse seem to be risk factors for infection with this bacterium (Funke & Rosner, 1995; Gaitán & Bronze, 2010; Hoppe et al., 1993; Oh & Tay, 1995). In two epidemiologically related cases a parenteral nutrition fluid was identified as the most probable source of Rahnella (Caroff et al., 1998). Both cases appeared in the same hospital within three days, and the bacterial strains isolated from the blood of both patients showed identical biochemical profiles and antibiograms and shared the same macrorestriction and ribotyping profiles. Other patients who had received the same batch of the parenteral nutrition fluid also experienced episodes of shivers, but blood cultures were not taken, impeding further analysis (Caroff et al., 1998). In one very unusual case a contaminated intravenous infusion fluid that a patient had self-administered could be identified as the source of Rahnella (Chang et al., 1999). Thus in a number of cases Rahnella cells were directly introduced into the blood circulation. Under certain circumstances Rahnella may also be able to spread from the urinary tract to the blood system. Blood cultures of a febrile 76-year-old man complaining of nausea and vomiting grew Rahnella. The patient had a history of benign prostatic hypertrophy, and the analysis of his urine revealed "many" bacteria. Because of these results and the underlying conditions, pyelonephritis was suggested as a possible source of the patient's bacteraemia (Tash, 2005).
Since the bacteria isolated from blood and urine of this patient were not compared by biochemical and molecular methods, a causal link between the urinary tract infection and bacteraemia remains speculative. In this respect it is important to note that Rahnella was isolated from urine in some other cases, but no signs of bacteraemia were reported (Alballaa et al., 1992; Domann et al., 2003; O'Hara et al., 1998).

Table 1. Infections caused by Rahnella. All cases we could find in the literature are included. b The isolates were obtained in the 1990s. (Table body not reproduced here.)
Rahnella was also isolated from the faeces of two children with acute diarrhoea. In both cases typical enteropathogenic bacteria, parasites and viruses could not be detected. However, the detection of Rahnella in the faeces of patients with diarrhoea is not a sufficient reason to conclude that this microorganism is the true cause of the infectious process (Reina & Lopez, 1996). It seems indeed unlikely that Rahnella is an enteropathogen, since this organism is frequently present in food, particularly vegetables that are often eaten raw, while the isolation of Rahnella from the faeces of patients suffering from acute gastroenteritis seems to be a rare exception.
Infections with Rahnella responded very well to treatment with antibiotics and most patients recovered rapidly, even though many of them were immunocompromised. Some patients recovered even without antibiotic treatment (Caroff et al., 1998; Reina & Lopez, 1996). Importantly, no deaths were reported as the outcome of an infection with Rahnella. These data, and the fact that Rahnella is a frequent microorganism routinely present in the human diet, suggest that it has only a slight pathogenic capacity and that its ability to infect humans may be highly dependent on their immunological status.
Currently few data about the pathogenic capacities of the three genomospecies of Rahnella are available. The routinely used phenotypic tests allow identification of Rahnella only at the genus level. Thus the genomospecies of the isolates in the cases summarised in Table 1 are unknown. A study using DNA-DNA hybridisation revealed that three clinical isolates belonged to Rahnella aquatilis (= genomospecies 1) and three were identified as Rahnella genomospecies 2 (Brenner et al., 1998), indicating that both genomospecies may act as opportunistic pathogens. However, a study including more strains is needed to assess potential differences in the pathogenic potential of the Rahnella genomospecies.
Clinical significance of Ewingella americana
Ewingella americana has been isolated from a variety of clinical specimens, particularly blood and wound swabs and less frequently sputum (Brenner & Farmer, 2005). Typical underlying conditions were surgeries, injuries from accidents, drug abuse and renal failure (Table 2). Some patients had diabetes, received immunosuppressive therapy, were HIV positive or suffered from other chronic infections. However, in contrast to infections with Rahnella, a significant number of patients were fully immunocompetent.
Most patients had undergone surgery prior to the development of bacteraemia, suggesting nosocomial infections. Pien and Bruce (1986) described a nosocomial outbreak of Ewingella bacteraemia. Six cases of Ewingella bacteraemia appeared in an intensive care unit of a hospital within six weeks. All infected patients had high fever or leukocytosis and had undergone either cardiovascular or peripheral vascular surgery. A careful environmental culturing study identified a contaminated ice bath used to cool syringes for cardiac output determinations as the most likely source of the bacteria. Ewingella americana was cultured from the bath, and its removal from the intensive care unit terminated the outbreak (Pien & Bruce, 1986). In another hospital Ewingella americana was detected in blood drawn from 20 patients (Gardner et al., 1985). None of the patients had symptoms typical of Ewingella americana sepsis. An environmental investigation revealed that the bacteria were present in a citric buffer anticoagulant used to fill coagulation tubes. Review of blood drawing procedures showed that the non-sterile coagulation tubes were frequently filled first, allowing contamination of the subsequently filled culture tubes (McNeil et al., 1985). At least some of the patients received inappropriate, unnecessary antimicrobial therapy, incurring the risk of adverse drug reactions and the selection of drug-resistant bacteria (McNeil et al., 1987).
A fatal case of Waterhouse-Friderichsen syndrome was associated with an Ewingella infection in a previously healthy 74-year-old woman (Tsokos, 2003). She experienced dragging pain in her left leg. Since the physical examination was unremarkable except for restricted mobility caused by the painful leg, and her temperature was normal, only an analgesic was administered and bed rest ordered. On the next morning she was found dead in her bed. An autopsy revealed intraparenchymal haemorrhages in both adrenal glands, the heart showed granulocytic infiltration, clots were present in the larger arterial vessels, and her brain and lungs were oedematous. Ewingella americana could be isolated from heart and spleen blood obtained during autopsy. Consistent with suspected sepsis, a highly elevated procalcitonin level was measured. Death was attributed to acute adrenal insufficiency due to Waterhouse-Friderichsen syndrome caused by Ewingella americana (Tsokos, 2003). In a second case the death of a 30-year-old man was associated with pneumonia caused by Ewingella americana (Bukhari et al., 2008). In this case the patient was admitted to hospital deeply comatose with multiple severe injuries caused by a road traffic accident. His brain showed oedema, intracerebral haemorrhage extending from the basal ganglia to the right thalamus and subarachnoid haemorrhage, along with a fracture of the frontal bone. The upper part of his right lung showed contusion. Ewingella americana was identified in his tracheal aspirate but not in any other sample from the patient. The isolated strain exhibited multiple antibiotic resistances, but it was not reported whether the patient received any antibiotic treatment. On the eighth day after admission he progressed to multiple organ failure and died. It was hypothesised that the cause of death may have been pneumonia associated with brain damage (Bukhari et al., 2008).
However, because of the underlying conditions it is difficult to judge whether the infection with Ewingella was indeed the cause of death. Only two other cases of respiratory infection caused by Ewingella have been reported. In both cases the patients recovered quickly after treatment with antibiotics. However, it is important to note that in one of these cases the isolated strain was multidrug resistant (Pound et al., 2007).
In two cases Ewingella was associated with eye infection (Da Costa et al., 2000; Heizmann & Michel, 1991). Swabs of the conjunctivae grew the microorganism. Symptoms were keratoconjunctivitis, adhesive eyelids, itching and impaired secretion of tears. In both cases the infection responded well to antibiotic treatment and the symptoms were relieved within a few days. One report also describes the isolation of Ewingella from the faeces of a patient with diarrhoea. However, as in the cases of isolation of Rahnella from faeces, the clinical significance of this finding is unclear. Since Ewingella may be present on some kinds of food, the isolated bacteria may originate from ingested food and be unrelated to the diarrhoea. Studies on the frequency of Ewingella in the human diet and additional case reports are necessary to assess the enteropathogenic potential of this microorganism.
Taken together these reports suggest that Ewingella has a higher pathogenic capacity than Rahnella. Several cases of infection in immunocompetent patients were reported. Ewingella may also cause infections with fatal outcome. Furthermore, while all Rahnella strains isolated so far are susceptible to most antibiotics, two multiple drug resistant isolates of Ewingella have been reported. The origin of these resistances, their molecular basis and capacity to spread to other genera are intriguing questions to be addressed in the future.
Identification of Rahnella and Ewingella
Reliable identification of strains is crucial for determining appropriate treatment of infections, for hygiene monitoring in medical centres and industry, and for basic research studies investigating the biology and ecology of microorganisms. In the past Rahnella strains were often identified as Enterobacter agglomerans, which may also explain why Rahnella was long thought to be a rare genus while it is now considered a relatively frequent bacterium.
Rahnella and Ewingella can be isolated using media not inhibitory for Enterobacteriaceae such as MacConkey agar or bromothymol blue lactose agar. Levine EMB agar is especially suitable for Rahnella, which forms dark colonies on this medium (Rozhon et al., 2010). Ewingella was successfully isolated from mushrooms using VRBG agar (Reyes et al., 2004) or LB agar plates. The latter were anaerobically incubated to suppress growth of Pseudomonas (Inglis & Peberdy, 1996). Since a single phenotypic test allowing identification of Rahnella or Ewingella is lacking, a complete set of biochemical tests is necessary for identification. Rahnella is often described as phenylalanine deaminase positive, which is a very rare characteristic among the Enterobacteriaceae, and as motile at 25°C but not at 37°C. However, it must be emphasised that Rahnella shows only a very weak positive reaction for phenylalanine deaminase and some isolates react negatively. Similarly, some strains are also immotile at 25°C. Thus the results of these two tests should be interpreted with care. It is important to note that the three Rahnella genomospecies cannot be differentiated by biochemical tests (Brenner et al., 1998). Nevertheless, in many reports strains are claimed to be identified as 'Rahnella aquatilis' although only phenotypic tests were performed. Such classifications should be evaluated very critically. The three Rahnella genomospecies were originally identified by DNA-DNA hybridisation experiments (Brenner et al., 1998). With the rapid development of molecular techniques in the last decades, DNA sequencing of housekeeping genes is now the method of choice for identification of Rahnella at the genomospecies level and for confirmation of the identification of Ewingella. For sequencing of the (partial) 16S rRNA gene the primer pair 16S-3/16S-5 can be employed (sequences: 5´-ATATTGCACAATGGGCGC-3´ and 5´-GCCATTGTAGCACGTGTGTAG-3´, respectively; amplicon: 881 bp) (Rozhon et al., 2011).
For verification a part of the groEL gene can be sequenced using the primer pair groEL-fwd/groEL-rev (sequences: 5´-ATGGCAGCTAAAGACGTAAAATT-3´ and 5´-TTACGACGRTCGCCRAAGC-3´, respectively; amplicon: 857 bp) (Rozhon et al., 2011). In addition, a part of the dnaJ gene can be sequenced using the primer pair dnaJ-fwd/dnaJ-rev (sequences: 5´-CAGTATGGTCATGCAGCCTTTGAACA-3´ and 5´-TCAAAGAACTTTTTCACGCCGTC-3´, respectively; amplicon: 917 bp). Neighbour-joining trees constructed with such sequences are shown in Figure 1. The GenBank database contains numerous Rahnella and Ewingella 16S rRNA and several groEL and dnaJ gene sequences. Since little is known about the identification of most of these strains, only sequences of strains deposited to strain collections should be used for analysis of the obtained data.
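Basic sanity checks on primers such as those listed above (length, GC content, a rough melting-temperature estimate) are straightforward to script. The sketch below uses the 16S and groEL primer sequences quoted in the text; the helper functions and the Wallace-rule Tm estimate are generic illustrations, not part of the cited protocol, and degenerate bases (eg, R) are simply ignored in the estimates:

```python
# Sketch: simple quality metrics for the identification primers above.
# Primer sequences are from the text (Rozhon et al., 2011); the metrics
# are rough illustrative estimates, not validated design parameters.

def gc_content(seq):
    """Fraction of unambiguous G/C bases in a primer sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def wallace_tm(seq):
    """Rough melting temperature by the Wallace rule: 2(A+T) + 4(G+C)."""
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

primers = {
    "16S-3":     "ATATTGCACAATGGGCGC",
    "16S-5":     "GCCATTGTAGCACGTGTGTAG",
    "groEL-fwd": "ATGGCAGCTAAAGACGTAAAATT",
    "groEL-rev": "TTACGACGRTCGCCRAAGC",  # R = A/G degenerate base, ignored above
}

for name, seq in primers.items():
    print(f"{name}: {len(seq)} nt, GC {gc_content(seq):.0%}, Tm ~{wallace_tm(seq)} C")
```

For real primer design or validation, dedicated tools with nearest-neighbour Tm models should of course be preferred over the Wallace rule.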
Susceptibility patterns
The susceptibility patterns of more than 180 Rahnella strains have been described in the literature (Table 4). Many of these strains were isolated from clinical specimens, but more than 75 originate from environmental samples (most of them were obtained in the study of Ruimy et al. (2010b) and in this study). Rahnella was found to be resistant to narrow-spectrum penicillins, aminopenicillins and carboxypenicillins, and most strains showed a low-level resistance to ureidopenicillins with MICs below 16 mg/l (Stock et al., 2000). Resistance was also observed for 1st- and 2nd-generation cephalosporins, while most strains were sensitive or at least intermediate for 3rd-generation and all strains were sensitive to 4th-generation cephalosporins and carbapenems. Addition of β-lactamase inhibitors including clavulanic acid, sulbactam and tazobactam decreased the MICs of all β-lactams tested. This pattern suggests the presence of a clavulanic acid-sensitive extended-spectrum Ambler class A β-lactamase (Ambler, 1980) resembling the chromosomally encoded class A β-lactamases of Escherichia hermanii and Klebsiella spp. (Wiedemann, 1999) and of Serratia fonticola (Peduzzi et al., 1997).

Table 4. Antibiotic susceptibility of Rahnella strains compiled from the literature. c Only resistance information was published. (Table body, with columns for antibiotic, class, references and strain counts, not reproduced here.)

In contrast to Rahnella, Escherichia hermanii and the Klebsiella isolates were sensitive to 1st- and 2nd-generation cephalosporins, while the Serratia fonticola β-lactamase showed activity even against 3rd-generation cephalosporins. The unique susceptibility pattern of Rahnella indicates an enzyme distant from the other Ambler class A β-lactamases.
Most Ewingella strains are also resistant to several β-lactams, mainly 1st- and 2nd-generation cephalosporins, while they were sensitive to 3rd- and 4th-generation cephalosporins. In contrast to Rahnella, only a low- or medium-level resistance to penicillins could be observed. The distribution of the MICs of these antibiotics showed a peak at the concentration range clinically defined as 'intermediate', resulting in strains that were sensitive, intermediate or resistant (Stock et al., 2003). This overlap is likely the reason that the phenotypes of ampicillin and amoxicillin resistance appear inconsistent in the literature (see Table 4). The β-lactamase of Ewingella is insensitive to inhibitors, which is typical for class C β-lactamases.
Apart from β-lactams, the most remarkable resistance of Rahnella and Ewingella was to fosfomycin. The MICs of most strains exceeded 64 mg/l and often reached 512 mg/l (Stock et al., 2000; Stock et al., 2003). One highly resistant Rahnella isolate with a MIC exceeding 1600 mg/l was also reported. Other resistances shared by most strains included only those to which other species of the Enterobacteriaceae are also intrinsically resistant, for instance macrolides, lincosamides and glycopeptides.
Remarkably, two multidrug-resistant strains of Ewingella have been reported. Based on an antibiogram, a successful treatment with cefotetan and trimethoprim/sulfamethoxazole was initiated in one case (Pound et al., 2007), while no information about antibiotic therapy was reported in the second case (Bukhari et al., 2008). Further reports of strains with unusual susceptibility patterns are rare, and usually only one or two additional resistances were observed (Tables 4 and 5). Thus, treatment of infections is usually straightforward. In several cases trimethoprim/sulfamethoxazole, ciprofloxacin, gentamicin and 3rd generation cephalosporins were used successfully. For Rahnella, combinations of penicillins with β-lactamase inhibitors may also be an option, while this is inappropriate for Ewingella infections.
Antibiotic resistance genes and their evolution
Cloning and sequencing of the Rahnella β-lactamase gene (bla RAHN-1) confirmed that it belongs to Ambler class A (Bellais et al., 2001). The bla RAHN-1 gene comprises 888 bp, and its translated amino acid sequence shows 75%, 71% and 67% identity to the chromosomally encoded β-lactamases of Serratia fonticola, Kluyvera cryocrescens and Citrobacter sedlakii, respectively, and approximately 70% identity to plasmid-encoded CTX-M type ESBLs found in isolates of Klebsiella pneumoniae, Escherichia coli, Acinetobacter baumannii and other species (Figure 2B). Currently, the sequences of the complete bla RAHN loci of four different strains are available. They show a similar pattern: bla RAHN and its surrounding genes have the same transcriptional orientation. An upstream transcriptional regulator that might regulate bla RAHN expression is lacking (Figure 2A). The expression of many chromosomally encoded class A β-lactamases, including those of Citrobacter diversus (Jones & Bennett, 1995) and Proteus vulgaris (Ishiguro & Sugimoto, 1996), is regulated by LysR-type transcription factors, but some examples lacking such a control system, for instance bla KLUC-1 of Kluyvera cryocrescens (Decousser et al., 2001), are also known. A recent phylogenetic study using partial β-lactamase gene sequences of Rahnella strains isolated from different vegetables and fruits revealed two clusters (Ruimy et al., 2010b). A similar dichotomy was also observed for a phylogenetic tree based on partial 16S rRNA and rpoB sequences (Ruimy et al., 2010a). The originally described bla RAHN-1 gene (Bellais et al., 2001) clustered with the sequences obtained from Rahnella genomospecies 2. The variant found in Rahnella aquatilis was named bla RAHN-2 (Ruimy et al., 2010b). Here we provide data confirming the results of these studies: we sequenced the (partial) bla gene of a number of reference strains and environmental isolates.
The obtained phylogenetic tree (Figure 2D) is in agreement with those obtained for the 16S rRNA, groEL and dnaJ genes (Figure 1). These data clearly suggest that bla RAHN was present in the ancestor before the divergence into genomospecies. Previously, the isolation of Rahnella strains from 12,000-year-old American mastodon remains was reported. We used the partial 16S rRNA gene sequences of these isolates and of recent reference strains to construct a phylogenetic tree (Figure 2E). The four prehistoric strains cluster clearly with genomospecies 2. This indicates that the divergence into genomospecies occurred significantly more than 12,000 years ago. Thus bla RAHN seems to have been present in Rahnella for a long time and therefore represents a natural resistance of this microorganism.
However, we were unable to obtain any PCR product for strains belonging to Rahnella genomospecies 3, although these strains were intermediate or resistant to amoxicillin and cephalothin. Thus, Rahnella genomospecies 3 may either possess a β-lactamase gene unrelated to bla RAHN-1 and bla RAHN-2, or the primer binding sites may be different. Since the β-lactam susceptibility patterns of the three Rahnella genomospecies are very similar, the latter explanation seems more plausible.
Based on the susceptibility pattern, an Ambler class C β-lactamase was suggested for Ewingella americana (Stock et al., 2003). Using different primer combinations, we could amplify and sequence the (partial) ampC gene of the strains WMR82 and WMR121. The amino acid sequence shows 72% identity to AmpC of Serratia proteamaculans and approximately 67% and 59% identity to AmpC of other Serratia species and of the Providencia cluster, respectively (Figure 2C). It is interesting to note that the AmpC sequences of the two Ewingella isolates share only 96.3% sequence identity. In contrast, the plasmid-encoded mobile β-lactamases found in some Klebsiella pneumoniae and Escherichia coli isolates exceed 98% identity (Figure 2C). It is believed that they originate from the chromosomally encoded ampC gene of Hafnia alvei (Girlich et al., 2000). This result, and the observation that the vast majority of Ewingella americana strains have a similar susceptibility pattern, suggest natural rather than acquired β-lactam resistance for this microorganism. While the molecular basis of the β-lactam resistance is well understood, the genotype underlying the fosfomycin resistance remains elusive. The high level of fosfomycin resistance observed in several strains, and the report of a successful transfer of the fosfomycin resistance to Serratia marcescens, rather suggest the presence of a specific fosfomycin:glutathione-S-transferase than mutations in GlpT, a transporter necessary for entry of fosfomycin into the cell.
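Percent identity figures like those above are computed over aligned sequences. The following is a minimal sketch of such a pairwise computation; the two short sequences are invented toy examples, not real AmpC fragments, and the function assumes the alignment has already been produced by an external tool.

```python
# Minimal sketch: percent identity over a pre-computed pairwise alignment.
# Columns gapped in both sequences are ignored; a gap against a residue
# counts as a mismatch (one common convention among several).

def percent_identity(a: str, b: str) -> float:
    """Identity (in %) over aligned positions of two equal-length strings."""
    if len(a) != len(b):
        raise ValueError("sequences must be aligned to equal length")
    pairs = [(x, y) for x, y in zip(a, b) if not (x == "-" and y == "-")]
    matches = sum(x == y and x != "-" for x, y in pairs)
    return 100.0 * matches / len(pairs)

print(round(percent_identity("MKTAYIAKQR", "MKTGYIAK-R"), 1))  # → 80.0
```

Note that reported identity values depend on such conventions (gap handling, local vs. global alignment), which is one reason published percentages for the same pair of enzymes can differ slightly.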
The plasmid complement of Rahnella
Originally, bla RAHN-1 was thought to be chromosomally encoded, since transfer experiments to Escherichia coli failed (Bellais et al., 2001). The recently completed Rahnella genome sequencing project showed unambiguously that the β-lactamase gene of strain Y9602 is located on a 617 kb megaplasmid, pRAHAQ01. The bla RAHN-2 locus and the surrounding genes of pRAHAQ01 share striking homology with three previously reported bla RAHN-1 and bla RAHN-2 sequences (Bellais et al., 2001; Ruimy et al., 2010b), indicating that these may also be plasmid borne. To investigate this in more detail, we analysed the sequence of pRAHAQ01 for putative plasmid replication genes and found only one candidate: Rahaq_4731, or repB. RepB shares 82% amino acid sequence identity with the replication protein of pEA29, a large plasmid of the plant pathogen Erwinia amylovora (McGhee & Jones, 2000). PCR analysis using primers for a conserved part of the repB gene gave a positive result for all strains tested (Figure 3). Moreover, a previous study described the presence of 400 kb to 700 kb megaplasmids in Rahnella soil isolates (Evguenieva-Hackenberg & Selenska-Pobell, 1995). This substantiates that bla RAHN may commonly be plasmid encoded. pRAHAQ01 and a second large plasmid found in strain Y9602 seem to be immobile, since no known transfer system could be found on their backbones. Furthermore, no evidence could be found that bla RAHN is located on a transposon or an integron.
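A PCR screen like the repB assay above can be mimicked in silico with a simple primer-binding check. The sketch below is a toy model under strong simplifying assumptions (exact matching only, a single binding site per primer, no mismatch tolerance or melting-temperature considerations); the primer and template sequences are invented and bear no relation to the real repB primers.

```python
# Toy in-silico PCR check: does a primer pair flank a region of the template
# in opposite orientations, and what amplicon size would it yield?

COMP = str.maketrans("ACGT", "TGCA")

def revcomp(seq: str) -> str:
    """Reverse complement of a DNA sequence (uppercase ACGT only)."""
    return seq.translate(COMP)[::-1]

def product_size(template: str, fwd: str, rev: str):
    """Predicted amplicon length on `template`, or None if either primer
    has no exact binding site in the required orientation."""
    start = template.find(fwd)            # forward primer binds the plus strand
    end = template.find(revcomp(rev))     # reverse primer binds the minus strand
    if start == -1 or end == -1 or end + len(rev) <= start:
        return None
    return end + len(rev) - start

template = "GGATCC" + "ACGTACGTACGT" * 5 + "GAATTC"
print(product_size(template, "GGATCCACGTAC", "GAATTCACGT"))  # → 72
print(product_size(template, "TTTTTTTTTT", "GAATTCACGT"))    # → None
```

The second call illustrates the genomospecies 3 situation discussed earlier: an absent (here, mismatched) primer binding site yields no product even though the gene region may still be present in some diverged form.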
A number of Rahnella strains also possess small plasmids. The majority of them were found to belong to the ColE1 family, but some ColE2 and rolling-circle plasmids were also isolated. Interestingly, the Rahnella ColE1 plasmids formed a distinct cluster within the ColE1 family and lacked any mobilisation system, suggesting that they rarely spread by horizontal gene transfer. The ColE2 and rolling-circle plasmids possessed mobilisation systems but, like the ColE1 plasmids, were cryptic and did not encode any resistance gene (Rozhon et al., 2010).
Taken together, these results suggest that the Rahnella β-lactamase, although plasmid encoded, is hardly ever mobilised to other microorganisms. Indeed, any evidence for its spread to human pathogens is currently lacking (Ruimy et al., 2010b). Similarly, the ampC gene of Ewingella has so far remained restricted to its natural host, but further experiments are necessary to rate its potential for mobilisation. Such studies would be important because previous reports provide evidence that Ewingella americana may be present in clinical environments (McNeil et al., 1987; Pien & Bruce, 1986), and the appearance of multidrug-resistant Ewingella americana strains (Bukhari et al., 2008; Pound et al., 2007) indicates that this microorganism may exchange genetic information with human pathogens.
Conclusion
Rahnella is commonly associated with plants, and Ewingella has been found at high titers in cultured mushrooms. Thus, these two Enterobacteriaceae may be frequent in some types of food. Both may appear as infrequent human opportunistic pathogens. Infections are easy to treat if the specific antibiotic resistance patterns of these bacteria are considered. Rahnella and Ewingella are naturally resistant to several β-lactams, mediated by an Ambler class A and an Ambler class C β-lactamase, respectively. The β-lactam resistance gene of Rahnella, bla RAHN, is located on the large non-mobile plasmid pRAHAQ01. This plasmid belongs to the pEA29 family, which is commonly found in plant-associated bacteria. Rahnella acquired bla RAHN presumably in prehistoric times, before the divergence into genomospecies. Since then, bla RAHN has co-evolved with its host and diverged into bla RAHN-1 and bla RAHN-2, found in Rahnella genomospecies 2 and in Rahnella aquatilis, respectively. The variant present in Rahnella genomospecies 3 remains to be identified. Although bla RAHN is located on a plasmid, it is not per se mobile, and so far no hint of its mobilisation to other species has been found. However, since several examples of chromosomal resistance genes that were transferred into pathogens have been documented, it cannot be excluded that bla RAHN may also spread to other bacteria in the future. Based on the susceptibility pattern, it was previously hypothesised that the β-lactamase of Ewingella americana is an Ambler class C enzyme. Here we have provided compelling data confirming this assumption. However, further studies are necessary to assess whether the Ewingella ampC gene is chromosome or plasmid borne, and its potential for transfer needs to be investigated. Rahnella and Ewingella are also naturally resistant to fosfomycin.
The molecular basis of this resistance remains elusive. Other resistances were rarely reported for Rahnella, while recently two multidrug resistant strains of Ewingella were described. These characteristics should be considered for treatment of infections and for potential applications of Rahnella and Ewingella.
Acknowledgment
We would like to thank Harald Preßlmayer for translation of French, Spanish and Italian manuscripts. This work was supported by the Austrian Science Fund.
Beyond Physical Threats: Cyber-attacks on Critical Infrastructure as a Challenge of Changing Security Environment – Overview of Cyber-security legislation and implementation in SEE Countries
States, organizations and individuals are becoming targets of both individual and state-sponsored cyber-attacks by those who recognize the impact of disrupting security systems and its effect on people and governments. A wide range of critical infrastructure sectors rely on industrial control systems for monitoring processes and controlling physical devices, and for that reason the physically connected devices that support industrial processes are becoming more vulnerable. Not all critical infrastructure operators in all sectors are adequately prepared to manage protection (and raise resilience) effectively across both cyber and physical environments. Additionally, there are several challenges in the implementation of protection measures, such as the lack of collaboration between the private and public sectors and low levels of awareness of the existence of key national legislation. From a supranational perspective, in relation to this paper's topic, the European Union took its first concrete step in defense against cyber threats in 2016 with the "Directive on security of network and information systems" (NIS Directive), prescribing that Member States adopt more rigid cyber-security standards. The aim of the Directive is to improve deterrence and increase the EU's defenses and reactions to cyber attacks by expanding cyber security capacity, increasing collaboration at the EU level and introducing measures to prevent risk and handle cyber incidents. Yet not all Member States share the same capacities for achieving the highest level of cyber-security. They need to work continuously on enhancing their capability of defense against cyber threats, which pose an increased risk to state institutions' information and communication systems but also to critical infrastructure objects.
In Southeast Europe there are a few additional challenges: some countries have not even designated their critical infrastructures, which are perceived only through a physical prism; non-EU countries are not obligated to follow the requirements of the European Union and its legislation; and there are interdependencies and transboundary cross-sector effects that need to be taken into consideration. Critical infrastructure protection is the primary area of action, and for some SEE countries (like the Republic of Croatia) the implementation of cyber security provisions merely complements comprehensive activities focused on physical protection. This paper analyzes several aspects of how SEE countries cope with new security challenges and how well prepared they are for cyber-attacks and threats: 1. which security mechanisms they use; 2. the existing legislation (acts, strategies, plans of action, etc.) related to cyber threats, in correlation with strategic critical infrastructure protection documents. The analysis takes two perspectives: that of EU Member States and that of non-EU member states. The aim of the research is to obtain an overall picture of efforts in the region regarding cyber-security, as a possibility for improvement through cooperation, organizational measures, etc., and to provide some recommendations to reduce the gap in the level of cybersecurity development relative to other regions of the EU.
The Global Risks Report 2017 of the World Economic Forum rates cyber risks right after terrorism as the dominant social threat of the twenty-first century. Cyber-security and cyber-space protection are becoming increasingly complex by the day, as a direct consequence of the development of technology, globalization, and the emergence of new challenges such as asymmetric threats and other forms of new security threats. Although the use of information and communication technology has a positive impact on the development of the functional capabilities of numerous systems, increasingly interconnected devices and information flows raise the vulnerability of objects and other linked critical infrastructures, primarily through exposure to cyber threats and information and communication infrastructure failures. Systems and infrastructures become very fragile and more prone to risk, which can cause dysfunction but also result in major technological collapse (Mikac, Cesarec and Larkin, 2018: 181). According to research, attacks on critical information infrastructures mostly affect the financial, information and communications technology and energy sectors (Tofan et al., 2016:4), which is directly linked to the concept of interdependence that makes infrastructure most vulnerable, where, for example, "the outage of a hydro or thermal power plant will not only adversely affect the energy sector but also the information, telecommunications, economic, financial and the whole range of services, but the same is equal in the other way" (Matika, 2009: 51).
Goods, products and services in physical facilities are increasingly being replaced by virtual ones, which, although an asset for community development and a precondition for global collaboration and connectivity, also creates an additional threat of cyber attacks and shifts the focus of national security issues to cyber security. Technology binds, enables work and drives development for (critical) infrastructure in all sectors; therefore it is necessary to give attention to infrastructure protection in the cyber dimension as well. It is important to emphasize that the security system includes not only physical protection but also the protection of data and information systems (i.e. electronic services connected to a certain critical infrastructure), the full implementation of adequate information security policies, and the protection of the cyber space in which different types of data originate and are transmitted. Critical information infrastructure, therefore, is an electronic flow of information, and in this sense cyberspace itself is a critical information infrastructure, which implies the need for a close connection between the concepts of critical infrastructure protection and cyberspace (Perešin and Klaić, 2012: 336).
The Global Cybersecurity Index[1] 2017 presented a modeling approach based on five strategic pillars of cybersecurity: legal, technical, organizational, capacity building and cooperation. It also emphasizes that cybersecurity is not only IT security; it also includes organizational, personnel and physical security measures. Yet what we are witnessing today is that business processes often overlook physical security while treating cyber security as the main threat. Still, what is virtual takes place through the physical (cameras, sensors, cables), and the two are deeply intertwined. However, although contemporary threats to information systems can be classified into the characteristic groups of failures, incidents and attacks, a departure from the traditional understanding is that we must clearly distinguish two important categories of threats to information systems: unstructured threats (hackers, individuals) and structured threats (foreign states, terrorists and criminal organizations) (Klaić and Perešin, 2012:2). Referring to the strategic component, especially in the context of security policies, which are in fact the basis of social action, the role of critical information infrastructure and its impact on CIP policy is extremely significant, which has become evident in the increasing interplay between these two domains. In the early stages of establishing the regulatory framework in the European Union in the field of critical infrastructures, following a shift in the legislative focus from the threat of terrorism, Directive 2008/114/EC on the identification and designation of European critical infrastructures and the assessment of the need to improve their protection was adopted, emphasizing two sectors, energy and transport, but also stating that it "...should be reviewed with a view to assessing its impact and the need to include other sectors within its scope, inter alia, the information and communication technology ('ICT') sector" (Council of the European Union, 2008:1).
Due to the increasing importance and advancement of technology, the need to further develop the legislative area related to cyberspace has been recognized. One of the most important documents is certainly the NIS Directive (Directive (EU) 2016/1148 of the European Parliament and of the Council of 6 July 2016 concerning measures for a high common level of security of network and information systems across the Union), adopted in 2016, but the EU has been dealing with cyber security issues comprehensively since 2004, starting with the founding of ENISA (European Union Agency for Network and Information Security) as a specialized EU agency. In 2009, there was also a Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions on Critical Information Infrastructure Protection, "Protecting Europe from large-scale cyber-attacks and disruptions: enhancing preparedness, security and resilience" (COM (2009)149), which focuses on prevention and awareness and defines a plan of immediate action to strengthen security and trust in the information society. It was followed by a Joint Communication to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, "Cybersecurity Strategy of the European Union: An Open, Safe and Secure Cyberspace", which also emphasizes the need to intensify on-going efforts to strengthen Critical Information Infrastructure Protection. These were the initial steps towards creating EU cybersecurity policy; based on them, and on the need to have a common level of security of network and information systems in all Member States, the NIS Directive was drafted and entered into force in August 2016. The deadline for national transposition by the EU Member States was 9 May 2018.
Today, the NIS Directive represents the main legislation of the Cybersecurity Strategy of the European Union and is extremely significant for network and information systems and services which play a vital role in society. The NIS Directive was adopted to connect the key areas, actors and processes, in order to increase the level of protection and provide minimum common standards in this field.
Putting the NIS Directive and Directive 2008/114/EC in mutual context, the NIS Directive arose from the need to complement the existing normative CIP framework, because of the lack of adequate critical infrastructure protection in the information and communication technology sector. Although it is important that the NIS Directive puts emphasis on the information and technology sector (to raise the level of security in all sectors dependent on IT), an additional challenge arises in that critical infrastructure operators are also becoming Operators of Essential Services, which results in overlapping or duplicated obligations (in the allocation of resources and the additional involvement of staff and experts to increase resilience and the level of protection). It is important for this research to emphasize that Directive 2008/114/EC is more focused on assets, while the NIS Directive is more focused on services. This reflects the aforementioned relationship between the physical and the "virtual" and indicates the challenge of interconnecting these two components. In order not to make the analysis of the challenges of the changing security environment and the impact on Southeast European countries too extensive, the focus of this research will be on four countries with different specificities -two EU Member States and two non-EU countries. In the first group are the Republic of Croatia -the last Member State to acquire full EU membership, in 2013 (although it does not yet have the same capacity as other Member States), which had to adapt to the new requirements of Directive 2008/114/EC and the NIS Directive -and Romania, a long-standing member of the EU (since 2007), with a presumption of success in implementing the provisions of the mentioned Directives.
The selected non-EU countries that are part of this analysis are Montenegro, which has had candidate status since 2006, and North Macedonia, since 2005; both must align with the requirements placed on all countries wishing to become part of a community that insists on a high level of security. Alongside those requirements, there is often a lack of awareness of the possibilities and differences that countries have in fulfilling such conditions -primarily because in the vast majority of SEE countries the all-hazard approach is based specifically on the physical domain of critical infrastructures. Yet the cybersecurity domain cannot be neglected, given its high impact on the security of the networks, systems and data that allow critical infrastructures to deliver essential services.
EU states and non-EU states -understanding the differences in the approaches to CIP and CIIP security policy
The introduction of security measures and standards, both physical and informational, through specific policies in legal entities in different sectors of society should form the basis of a national regulatory framework for information security. Although sectoral approaches differ somewhat, the common threats that arise in their environment and the need to manage risk impose the need for a comprehensive approach to critical infrastructure protection.
We can define information infrastructure in general as "a combination of computer and communication systems that serve as the basic infrastructure for public bodies, industry and the economy. Critical infrastructures such as the transportation and distribution of electricity are inevitably dependent on telecommunications, public telephone networks, the Internet, terrestrial and satellite wireless networks and associated computer resources for information, communication and control management" (Brnetić et al., 2013: 6). Infrastructure objects are also interlinked in cyberspace, through systems such as Supervisory Control and Data Acquisition (SCADA) systems.
In the context of EU security policy, the NIS Directive defines a network and information system as: (a) an electronic communications network; (b) any device or group of interconnected or related devices, one or more of which, pursuant to a program, perform automatic processing of digital data; or (c) digital data stored, processed, retrieved or transmitted by elements covered under points (a) and (b) for the purposes of their operation, use, protection and maintenance (The European Parliament and the Council of the European Union, 2016:13). The definition in Directive 2008/114/EC says that critical infrastructure is an "asset, system or part thereof located in Member States which is essential for the maintenance of vital societal functions, health, safety, security, economic or social well-being of people, and the disruption or destruction of which would have a significant impact in a Member State as a result of the failure to maintain those functions". The fundamental link between these two definitions is the provision of services essential for the maintenance of critical societal and economic activities. For example, energy technologies that predate today's extremely advanced technologies are rapidly becoming more connected to (and dependent on) modern digital technologies and networks. Digitalization makes the energy system better, through new means such as advanced innovative energy services, yet it also creates significant risk, making the energy sector more exposed to cyber security incidents.
Due to such inevitable changes, the European Commission is developing measures and mechanisms for its Member States to meet the challenges of today. The basis of these efforts is the establishment of a comprehensive legislative framework based on three documents, among them the cybersecurity package "Resilience, Deterrence and Defence: Building strong cybersecurity for the EU" (JOIN(2017) 450 final) from September 2017, which also includes the Cybersecurity Act, strengthening the EU Agency for cybersecurity (ENISA) and establishing an EU-wide cybersecurity certification framework for digital products, services and processes. The aforementioned Act provides for a comprehensive set of measures that build on previous actions and foster mutually reinforcing specific objectives: increasing capabilities and preparedness of Member States and businesses; improving cooperation and coordination across Member States and EU institutions, agencies and bodies; increasing EU-level capabilities to complement the action of Member States, in particular in the case of cross-border cyber crises; increasing awareness of citizens and businesses on cybersecurity issues; increasing the overall transparency of cybersecurity assurance of ICT products and services to strengthen trust in the digital single market and in digital innovation; and avoiding fragmentation of certification schemes in the EU and related security requirements and evaluation criteria across Member States and sectors. Thus, the EU Member States have the tools and policies required to address cybersecurity, but it still remains a national priority and responsibility. National cybersecurity strategies are the main documents for setting strategic principles, guidelines and objectives to mitigate cyber security risk. Member States that already had cyber security strategies have begun to consider revising and modifying their national strategies to incorporate the provisions of the NIS Directive into their strategic objectives.
However, this is a small number of Member States: twelve of them had developed cyber strategies by the year 2012, when ENISA began the process of supporting the EU Member States and EFTA countries to develop, implement and evaluate their National Cyber Security Strategies (ENISA, 2018). Member States, and potential Member States, have not received such support in the development of their national strategic documents regarding critical infrastructure protection and the implementation of Directive 2008/114/EC. This is a good practice that should be transposed to that area as well. Nationally, with the aim of establishing effective early-warning mechanisms for threats, various forms of Computer Emergency Response Team (CERT) organizations were founded as points for the exchange and analysis of threat information. Information is exchanged not only in relation to cyber threats but also for each defined sector of critical infrastructure, which further demonstrates their interconnection.
However, the vulnerabilities in critical infrastructure do not stop at EU Member States' borders. A particular challenge for the Commission is encouraging candidate countries to adopt the same standards as Member States, for example in areas such as cyber-related legislation or the protection of critical infrastructure (European Court of Auditors, 2019:44). Additional efforts are being made, but it also needs to be taken into consideration that many of those countries have outdated systems and technology that can be ineffective in avoiding possible attacks and achieving the expected level of resilience. As well, there is a lack of adequate measures and no coordination of critical infrastructure protection efforts (as many non-EU countries, some of them in SEE, do not have a national CIP normative framework). Therefore, vital systems, objects and networks are exposed to various threats and in need of a comprehensive approach to developing the CIP field. Bearing in mind the fact that you cannot protect something you do not analyze, evaluate and optimize, it is of utmost importance for those countries to identify their critical infrastructure at the national level (a process that has never been carried out). The critical infrastructure field is evolving and refocusing on cyber critical infrastructure, which demands an even higher level of protection, so non-EU countries also need to consider updating their national strategies on the protection of critical infrastructure, in line with European and other inter- and supra-national recommendations.
Legislative Frameworks - before and after the NIS Directive
A sound concept of the national regulatory framework for information security is the basis for cyberspace regulation in the global environment (Klaić and Perešin, 2011). Accordingly, numerous countries have considered how to adapt their legislation in order to prepare for the emerging challenges. Different approaches were developed until horizontal legislation was considered at the EU level to protect network and information systems across the Union on the basis of a comprehensive and uniform approach. This article shows, in several parts, how and why the NIS Directive was developed, its relevant provisions, and the legislation that existed before it. The NIS Directive is today the most relevant document for all Member States, as well as for countries in the pre-accession stages that want to steer their national efforts towards achieving an adequate level of protection in their environment. Cyber threats, like other threats to critical infrastructure, are inevitable for every country in the world, regardless of its level of development. In order to compare what the Directive has changed in national legislative frameworks, the NIS Directive in general is first presented (its importance and obligations), followed by an overview of the national efforts of the analyzed countries to achieve protection in the context of cyber (and infrastructure) security (their mechanisms and strategies).
The NIS Directive focuses on the protection of critical information infrastructures, or national essential services, namely through setting baseline security measures and implementing cyber incident notification. In addition, it stipulates the obligation to implement other technical and organizational measures for risk management, as well as measures to prevent and minimize the effect of incidents on the security of network and information systems. Following those requirements, the NIS Directive requires the EU Member States to adopt and implement a national strategy on the security of network and information systems (known as the national NIS strategy). This national strategy must address a list of issues, including a risk assessment plan, a governance framework to achieve the objectives of the national strategy, the identification of measures relating to preparedness, response and recovery, and others. The main objective of the Directive, however, is to provide a common level of security of network and information systems in all Member States (which was lacking before), bearing in mind that, due to interconnectivity, security incidents could have significant consequences for the whole community. The NIS Directive also introduces an obligation for operators to notify incidents that may have a significant effect on the continuity of a specific service. There are two types (groups) of actors to which the Directive applies: Operators of Essential Services and Digital Service Providers. The Operators of Essential Services are those who provide key services to society or the national economy in seven sectors: Energy (electricity, oil, gas); Transport (air, rail, water, road); Banking; Financial Market Infrastructures; Health; Drinking Water Supply and Distribution; and Digital Infrastructure (internet traffic exchange, domain name services, and national top-level domain control).
On the other side, the Digital Service Providers are legal persons that provide services in three sectors: Online Marketplaces, Cloud Computing Services and Online Search Engines. In this perspective lies the fundamental difference between operators of essential services and digital service providers: operators of essential services are affiliated with physical infrastructure, while digital service providers operate in a "wider space", having a cross-border (or even borderless) character.
The NIS Directive is part of a broad EU digital initiative which promotes awareness of the need to develop the digital economy (in relation to the ongoing process of creating an EU digital single market), enhances security awareness of cyberspace, and reflects on a number of segments of modern society, including the development of public-private partnerships and electronic services in public administration. Thereby, the NIS Directive creates an appropriate framework for the prevention and protection of society against cyber threats by establishing a common approach of all Member States, as they individually ensure harmonized vertical sectoral approaches in terms of the NIS Directive, while the new EU Personal Data Protection Regulation (GDPR) provides a similar horizontal approach through all segments of society as a whole (Government of the Republic of Croatia, 2018: 2).
The Republic of Croatia
Referring to the document of the Government of the Republic of Croatia mentioned in the introductory part of this chapter (which assesses the current state of play and presents the basic issues that need to be regulated by law), made shortly before the adoption of the Act on the Cyber Security of the Key Service Operators and Digital Services Providers that implemented the NIS Directive, it is evident that the importance of the European legislative framework was understood, with the full intention of implementation.
Drawing a parallel with the Croatian Cyber Security Strategy, adopted in 2015, the Strategy already meets the requirements set by the NIS Directive in relation to strategic national frameworks for achieving goals and requirements in cyberspace as a virtual dimension of society. The Croatian National Cyber Security Strategy, created in recognition of the importance of national cyber security, states that "critical communications and information infrastructures are those communications and information systems that operate or are critical to the functioning of the critical infrastructure, regardless of the critical infrastructure sector" (Government of Republic of Croatia, 2015). This anticipates the NIS Directive's provisions, which address the risks to network and information systems that support key services in designated sectors and, by definition, cover a broad, general scope of all categories of possible incidents (failures, accidents and attacks) that can have a negative effect on the security of the network and information systems used in providing key services or digital services. What is significant, and facilitates the implementation of regulations at the national level, is the existence of a detailed and structured Action Plan for the implementation of the National Cyber Security Strategy, as well as the establishment of strategic and operational interdepartmental national bodies to manage the implementation of the Strategy and address all relevant national cyber security issues. With the proposal of the Act on the Cyber Security of the Key Service Operators and Digital Services Providers, the Strategy was expanded with additional requirements, aligned with the requirements arising from the transposition of the NIS Directive in the Republic of Croatia as an EU Member State.
In addition, the National Cyber Security Strategy and the Action Plan for its implementation have highlighted critical infrastructure and its protection concept more strongly than any national strategy, assessment or plan to date. Critical infrastructure was primarily perceived through critical communications and information infrastructures, defined as "communication and information systems whose malfunctioning would significantly disrupt the operation of one or more identified national critical infrastructures". In the Strategy, considerable space is devoted to critical communication and information infrastructure coupled with cyber crisis management (Mikac, Cesarec and Larkin, 2018: 122). The Strategy also emphasizes the importance of the Critical Infrastructures Act and the necessity of fulfilling its provisions. It outlines five objectives that can be equally transferred to all sectors and are part of the context of the basic needs for implementing critical protection system procedures, including: 1. Establishing criteria for identifying critical communication and information infrastructure; 2. Identifying binding security measures applied by the owners/managers of identified critical communications and information infrastructures; 3. Strengthening prevention and protection through risk management; 4. Strengthening public-private partnerships and technical coordination in the processing of computer security incidents (Government of the Republic of Croatia, 2015: 14-16).
In accordance with the obligation to identify critical infrastructure, and with the procedures that were necessary but lacking in the implementation of the Critical Infrastructure Act, guidelines and prescribed criteria and thresholds for assessing the importance of the negative impact of an incident on critical communication and information infrastructure were adopted; these were ultimately transferred to other sectors of critical infrastructure not designated by the Act on the Cyber Security of the Key Service Operators and Digital Services Providers. For the first time, cross-sectoral criteria for the needs of national CI identification were adopted and successfully used. This once again illustrates the interplay of these two normative documents in the Republic of Croatia.
Considering the period before the NIS Directive, Klaić and Perešin (2011: 690) describe a hierarchy of information security regulations in the public sector on several levels. The first three levels constitute the implementation framework (implementation policies): laws, regulations, internal acts and other documents prescribed by the Office of the National Security Council and information security advisers in the competent bodies, followed by internal implementing acts in government bodies and by the regulations of the Information Systems Security Bureau as the National CERT. The next three levels towards the top of the pyramid form the legislative framework, that is, information security policies. These include the ordinances of the Office of the National Security Council on security checks, physical security, etc., followed by the Law on the Security and Intelligence System of the Republic of Croatia, the Law on Security Checks, the Law on Data Confidentiality and the Decree of the Government of the Republic of Croatia. At the top of the pyramid, as the document setting strategic goals, was the National Information Security Program (2005), consisting of 10 chapters defining information security, information security requirements from the aspect of international relations, the state of information security in the Republic of Croatia, the segmentation of competences in relation to data and information structure in the Republic of Croatia, security policy, and the education and development of a security culture. Today it has been replaced by the Cyber Security Strategy of the Republic of Croatia as the umbrella document.
The main body responsible for cyber security in the Republic of Croatia is the National Cyber Security Council, established in 2017 to achieve the Strategy's objectives and implement the Action Plan's measures as adequately as possible; it represents a platform for establishing and managing horizontal cyber security initiatives, both in the public sector and inter-sectorally. The Council's purpose is also to coordinate more effectively the prevention of and response to cyber security threats, in the context of a complementary approach to preventing and resolving security incidents, and thus to coordinate the development of national capabilities in cyberspace. The work of the Coordination is managed by the competent body, the Ministry of the Interior, and directed by the Office of the National Security Council. The National Cyber Security Council is required to submit an annual report on the operational and technical coordination of cyber security in the Republic of Croatia.
From the aspect of the general critical infrastructure protection system in the Republic of Croatia, there were some challenges in achieving its functionality, from the perspective of inter-institutional cooperation and the complexity of the identification process; these are being overcome by adapting the national framework and by positioning the CIP competent body at a higher level of authority (from a State Administration Body to the Ministry level). Also, the strategic direction of the Republic of Croatia, through the implementation of guidelines set out by the European Programme for Critical Infrastructure Protection policy and the EU Cybersecurity policy, provides the prerequisites for achieving a successful critical infrastructure protection system.
Romania
Romania is facing various threats to critical infrastructure, mostly from cyberspace. This is due to an increasing interdependence between cyber infrastructure and infrastructure such as that belonging to the banking, transport, energy and national defense sectors. The global nature of cyberspace is likely to increase the risks affecting citizens, businesses and the government alike (Government of Romania, 2013: 4).
From the legislative framework perspective (where the national strategy is the umbrella document), the most relevant document is the Cybersecurity Strategy adopted in 2013, which sets out the principles for understanding, preventing and counteracting cybersecurity threats, vulnerabilities and risks. The main objectives of the Strategy are to adapt the regulatory and institutional framework to the threat dynamics of cyberspace and to establish and implement security profiles and minimum requirements for national cyber infrastructures, including the proper functioning of critical infrastructures. The Strategy also highlights increased risks to citizens, businesses and the government, as cyber infrastructures face technical threats/failures, human threats and natural threats; puts the resilience of cyber infrastructure in focus; promotes and develops cooperation between the public and private sectors at the national and international level in the field of cyber security; sets preconditions for developing a security culture by raising awareness of vulnerabilities, risks and threats in cyberspace and the need to protect information systems; and also mentions the need to actively participate in initiatives of international organizations to which Romania belongs, as well as the establishment of international confidence-building measures concerning activities in cyberspace. According to researchers, in 2013 Romania was one of a minority of countries that defined all cyber-related notions in its national cyber security strategy, understanding cyber security as "normality resulting from applying a set of proactive and reactive measures that ensure confidentiality, integrity, availability, authenticity and nonrepudiation of electronic information, and the public and private resources and services in cyberspace" (Luiijf et al., 2013: 6). In addition to the national strategy, there is also a normative document developed for the purpose of transposing the NIS Directive, adopted in January 2019: Law no.
362/2018 concerning measures for a high common level of security of network and information systems. The National Defense Strategy of Romania (2015-2019) also emphasizes the relevance of the cyber security of critical infrastructures, as the national security objectives include consolidating the security and protection of critical infrastructures, including the cyber security sector. The Strategy also recognizes the need to adapt critical infrastructures in relation to the occurrence of cyber attacks (The Presidential Administration of the Republic of Romania, 2015). It is relevant that the necessity of CIP is recognized in a wide range of national strategies, mostly because of the multisectoral approach that needs to be applied in order to have an adequate system for the protection and resilience of (cyber) critical infrastructure; in that way it can be achieved more easily.
At the organizational level, the first step was taken in 2008 by the Romanian Intelligence Service as the Cyber-Intelligence National Authority (CYBERINT), which created the CYBERINT National Center as a platform for collaboration between institutions within the National Defense System and as the interface with similar structures in NATO (Romanian Intelligence Service, Cyberintelligence, n.d.). The role of the Center is to prevent, analyze, identify and respond to incidents affecting cyber infrastructure that provides public utility functionality, to develop and disseminate public policies to prevent and counteract cybercrime incidents (the Early Alert System and Real-Time Information on Cyber-Incidents), and to provide advice to public authorities responsible for the identification and protection of critical infrastructure (Barbu, 2019: 52). At the strategic/operational level, the Romanian Intelligence Service is the body responsible for the protection of state information and any network utilized by government entities in possession of state secrets. The Cyber Security Strategy of Romania establishes two additional entities, which act in conjunction to cover cybersecurity-specific network and information security in Romania: the National Cyber Security System (SNSC), a body composed of representatives from public institutions and tasked with building and maintaining a range of cybersecurity measures; and the Operative Council for Cyber Security, which oversees the SNSC in its duties and responds in the event of critical cybersecurity incidents, and is composed of representatives from Romanian government ministries and Romanian intelligence services (BSA, 2015). In comparison with the Republic of Croatia, this is a similar approach to establishing competent bodies for the implementation of national cyber security.
Regarding the critical infrastructure protection system in general, Romania has transposed the spirit of Directive 2008/114/EC through Government Emergency Ordinance no. 98/2010 on the identification, designation and protection of critical infrastructures, which regulates all national critical infrastructure sectors. It has organised processes, built a system of critical infrastructure protection, and established functional forms of support to public institutions and to owners or operators of critical infrastructure in their tasks, and this works in practice (Lazari and Simoncini, 2014). In addition to the aforementioned policies and measures in the field of critical infrastructure protection, the Romanian Government has provided the basis for developing an adequate security environment with the aim of achieving the following strategic goals: 1) ensuring unified procedures for the identification, designation and protection of critical infrastructure by aligning national and European critical infrastructure; 2) operationalization of the national early warning system through the integration of all networks and existing information and organizational capacities; 3) accurate evaluation of critical infrastructure vulnerability levels and identification of measures needed for preventive action and risk reduction; 4) development of cooperation at the national, regional and international level in the field of critical infrastructure (Udeanu, 2015: 133). Additionally, in order to improve the transposition of Directive 2008/114/EC and to ensure better correspondence, Law no. 636/2018 was adopted in November 2018, with a focus on strengthening the role of the national and European critical infrastructure owner/operator/administrator and giving new attributions and responsibilities to the relevant public authorities (Maravela, Popescu and Roman, 2018).
It can be seen that Romania is adapting its framework according to the recognized gaps, from the perspective of both cyber and physical critical infrastructure threats.

Montenegro

Montenegro, an EU candidate state since June 2012, has set its strategic orientation for critical infrastructure protection in the Montenegro National Security Strategy, adopted in 2018, which prioritizes the development of an efficient CIP system and the strengthening of resilience. This initial step to organize national efforts in this field is not the first legislative document to mention CIP: it is also mentioned in various national laws, among them the Law on Cyber Security adopted in 2016 (which defines security risk protection measures in information and communication systems, the legal entities responsible for the management and use of information and communication systems, and the authorities competent for the implementation of protection measures, coordination and monitoring of the application of the main security regulations). There are also recent strategic documents related to CIIP, such as the Strategy on Cyber Security (2018-2021), which aims to strengthen capacities for IT critical infrastructure protection and the security of infrastructures in general. It identifies eight IT critical infrastructure sectors and provides a definition of critical information and technology infrastructure as "information systems whose disruption or destruction could jeopardize life, health, safety of citizens and state functioning or from whose functioning depends public activities" (Government of the Republic of Montenegro, 2017: 14).
Additionally, it includes provisions on: modern risks, threats and challenges; a retrospective (from the first Cyber Security Strategy until today); the national organizational structure; national cyber defense, including cyber capabilities, critical IT infrastructure, inter-institutional cooperation, data protection, education, public-private partnership, and regional and international cooperation; and monitoring. As mentioned, the first Montenegro Cyber Security Strategy (2013-2017) had as its main objectives: 1. Defining the institutional and organizational structure in the field of cyber security in the country; 2. Protection of critical information structures; 3. Strengthening the capacities of state law enforcement authorities; 4. Incident response; 5. Defining and strengthening the role of the Ministry of Defense and the military in cyberspace; 6. Public-private partnership; 7. Raising public awareness and protection on the Internet. This put in focus the majority of challenges and fields of regulation that are also taken into consideration at the EU level as a whole (Government of the Republic of Montenegro, 2013).
According to available research, the establishment of a National Council for Cybersecurity/Information Security as the competent body was planned by the first Cyber Security Strategy in 2013, yet it has not been achieved. Once operational, the Council is supposed to be the key institution for cybersecurity issues. The Council will also be in charge of creating procedures for the regular exchange of information between state authorities and key institutions from the private sector, i.e. internet providers, agents for the .me domain, the banking sector, electric power companies and companies that host e-services in Montenegro (Minović et al., 2016: 20). From the perspective of cyber security, the direction and coordination of the work of the bodies constituting the intelligence and security sector is carried out by the National Security Council, and the operational coordination and harmonization of the activities of those bodies is performed by the Bureau for Operational Coordination (Government of the Republic of Montenegro, 2018: 23).
The establishment of competent bodies is operationally important so that the implementation of processes can be monitored; its absence can be perceived as one of the "weak points" of cyber security in Montenegro.
Considering the general preconditions of a national CIP framework, namely adequate legislation, the most relevant document for CIP in Montenegro was adopted in December 2019: the Law on Determining and Protecting Critical Infrastructure, which provides a definition of critical infrastructure, CI sectors, criteria for identification, obligations of stakeholders and all other issues relevant to the regulation of the critical infrastructure system. It also regulates the area of European critical infrastructure, since the provisions of that chapter will apply upon the accession of Montenegro to the European Union. Since the law is newly adopted, we can conclude that the critical infrastructure system is still under development in Montenegro, and the applicability of the presented framework could not be analyzed: procedures for CIP still need to evolve.
The Republic of North Macedonia
North Macedonia has had candidate status since 2005, and its tendency to implement all EU standards in the security field is very visible through its efforts to establish a national security framework. The focus of critical infrastructure protection from the national perspective is on the energy sector, information technologies, water systems and air traffic (Mitrevska, Mileski and Mikac, 2019: 143), each of them regulated by its own laws, which provide a wide range of measures. In the general concept of CIP, North Macedonia does not have a formal framework, but it has a basis in strategic and normative documents in the field of defense and security, such as the National Cyber Security Strategy of the Republic of North Macedonia (2018-2022), the Law on Internal Affairs, the Crisis Management Law, the Protection and Rescue Law and the Law on Private Security (Mitrevska, Mileski and Mikac, 2019: 146).
The National Cyber Security Strategy can be perceived as an initial process and a sign of willingness to establish a CIP system. The Strategy mentions critical infrastructure as prone to cyber incidents and emphasizes these threats as among the most serious in terms of national security. It also considers critical communications and information infrastructure in terms of cyber crisis management: the need to strengthen national capacities for cyber security prevention and protection and to implement activities to raise national cyber security awareness. The Strategy defines cyber-physical threats to critical infrastructure, such as an increased number of cyber attacks, including industrial cyber espionage, cyber vandalism and vulnerability identification in the energy sector, transport systems and other parts of the critical information infrastructure. In terms of competent authorities to monitor the implementation of cyber security, and through that the protection against cyber-physical threats to critical infrastructure, the establishment of such a body was one of the priority activities of the National Cyber Security Strategy. The National ICT Council was established in February 2018 to prepare and monitor the implementation of the National ICT Strategy, and at the end of 2018 the Government of the Republic of North Macedonia made a strategic decision to establish the National ICT and Cyber Security Council, extending the responsibilities, membership and authority of the existing National ICT Council. The National ICT and Cyber Security Council consists of the relevant ministers, thereby ensuring compliance of strategic-level decisions across state institutions (European Commission, 2019: 15).
From the perspective of CI in other sectors, there are no such strategically oriented documents, yet some legislation, such as the previously mentioned Law on Internal Affairs (which regulates the obligation of the police to protect important objects that are specific, i.e. part of critical infrastructure) and the Law on Private Security (which prescribes which legal entities are obliged to employ private security for activities that can jeopardize people, the environment, objects and facilities of particular cultural and historical importance, and in other cases when it is in the interest of security), can be perceived as nationally established forms of critical infrastructure that are not defined in terms of Directive 2008/114/EC (which is adapted and/or transposed by Member States) but are nevertheless identified and recognized as objects of national importance. Despite that, it is evident that there is no comprehensive regulatory framework for the management of such facilities, and no legislative document that would deal solely (and specifically) with the critical infrastructure protection system. Therefore, a formal framework needs to be adopted in order to build a critical infrastructure protection system as a whole.
Recommendations for the future (cooperation in critical infrastructure protection and dealing with gaps in achieving cyber security)
Critical infrastructure protection, both physical and information-communication, is a complex and challenging task. That is one of the many reasons why the public sector (governments, legislators, etc.) cannot effectively work on raising the level of resilience and protection without cooperation with representatives of the private sector (who are majority owners/operators of critical infrastructure in most countries), NGOs, the scientific community and experts in specific areas of information and national security. Cooperation between the public and private sectors must be especially emphasized: due to the competences that the private sector has in critical infrastructure management, it must face the challenges of achieving critical infrastructure protection (e.g. the implementation of security measures requiring the investment of additional resources). In this, the public sector must support it, whether through deductions or other benefits achieved through public-private partnerships as one of the fundamental pillars of cyber security policy.
Consequently, it is important to highlight cyber security public policies as one of the main tools for achieving cyber security. The foundation of national information security lies in the development of protection policies, strategies and action plans for incidents compromising data and/or the functionality of infrastructures. Achieving cybersecurity is a complex task that requires the multi-level involvement of mechanisms that should also be included in public policy at the governmental level. Beyond the national level, it is equally important for stronger resilience to adopt coherent public policies at the EU level on coordinated cross-sectoral action and trans-sectoral cooperation mechanisms which can ensure security in the whole community. It is also important to establish forms of cooperation with EU (as well as non-EU) members, such as bilateral and multilateral agreements, memorandums of understanding, and commitments between the competent authority and international strategic partners in the public, private and academic sectors. An example of such cooperation is a formal agreement on a common stance, such as the "Joint Statement, Visegrad (V4)-Austria, Croatia, Slovenia", in which cyber security is identified as one of the issues on which to take action (Ministry of Foreign Affairs and Trade of Hungary, 2017) through the cooperation of SEE countries and other Member States.
The next set of recommendations relates to the development of joint regional cybersecurity capabilities, which can foster the sharing of information about threats to cybersecurity (including early warning systems), the development of tools and techniques, and the exchange of experts and best practices, enabling a better and faster reaction in case of a cybersecurity incident that could affect the region. On that note, joint workshops, trainings and exercises, not only in the SEE region but also at the European level, can be very useful for testing national mechanisms and seeing how they function before a real cyber incident occurs. For that purpose, large-scale and sophisticated attacks can be simulated, as well as failure modes for recognized vulnerabilities. As examples of such exercises we can take those conducted by the European Network and Information Security Agency (ENISA): in 2018, "Cyber SOPEx" was held with the aim of improving cooperation between national Computer Security Incident Response Teams, with a focus on raising awareness of information sharing, understanding the roles and responsibilities within the team, and using the tools needed to successfully handle incidents; and "Cyber Europe 2018", organized by ENISA in collaboration with cyber security bodies and agencies across Europe, with 900 European cyber security experts from 30 countries facing the scenario of an intense cyber security incident at an airport as critical infrastructure.
Education is the next segment of the recommendations. The previously mentioned agency, ENISA, runs activities to facilitate education and general awareness that promote NIS skills and support the Commission in enhancing the competence of professionals in this area. It also provides guidelines such as "Cybersecurity Culture Guidelines: Behavioral Aspects of Cybersecurity" (2019) and overviews and reports (such as "Status of privacy and NIS course curricula in EU Member States" (2015)), which can be used to transfer best practice.
An additional opportunity for Member States, but also for non-EU countries, lies in EU funds and project implementation (such as collaborative research and innovation projects), which can be used to enhance security and resilience for national purposes as well as for the region and the wider EU community. As part of the implementation of the NIS Directive, it is planned to use EU funds, the most frequently referenced being the Connecting Europe Facility (CEF). Through the CEF cybersecurity calls, the EU seeks to support the Member States in putting the NIS Directive's legal provisions into practice. Between 2016 and 2017 the European Commission awarded €18 million in funding to 19 EU Member States, mainly to CSIRTs (Computer Security Incident Response Teams, which provide support services for handling cyber-security threats and incidents for national stakeholders in the public sector, operators of essential services, critical infrastructure entities and digital service providers). Since 2018, following the transposition of the NIS Directive into national legislation, legal entities (sector operators) have been able to apply for and use the CEF fund through the competent sector bodies, which is additionally significant for further capacity development.
There is also a call for proposals under the Horizon 2020 Programme, "Prevention, detection, response and mitigation of combined physical and cyber threats to critical infrastructure in Europe", which is of particular interest to SEE countries since, as the analysis in this research has shown, they are mostly still focused on strengthening the physical protection of critical infrastructures, with a tendency to also consider cyber security. SEE countries already participate in projects under this call: SATIE, Security of Air Transport Infrastructure of Europe (Croatia), and InfraStress, Improving resilience of sensitive industrial plants & infrastructures exposed to cyber-physical threats (Slovenia), the latter especially interesting because of its open testbed stress-testing system as a concrete activity under project implementation.
As this chapter shows, a wide range of possibilities is open to all countries willing to invest time and effort in building concrete cooperation in CIP and CIIP, all with the aim of overcoming the gaps in security culture between more developed and slightly less developed countries through the exchange of knowledge and best practices fostered by the listed recommendations.
Conclusion
National critical infrastructure protection cannot be achieved without adequate protection of the cyberspace through which all data related to the operation of critical infrastructure flows, whether through exchange or storage. This dependence on information and communication technology requires that cyber-security measures be prescribed and regulated by national legislation, enabling the systems, networks and objects of critical infrastructure to detect, prevent and effectively respond to security threats in a timely manner.
An essential element is also cross-sectoral compliance, which requires well-coordinated management and security mechanisms and a separation of roles between data owners, infrastructure owners and users, so that obligations can be prescribed and a systematic approach achieved. A comprehensive perspective is important, because segmented solutions can disturb the balance of processes and lead to overlapping authorities with respect to cyberspace and physical protection, a likely challenge in SEE countries, which generally treat the two areas separately. In that respect, our national (Croatian) example is instructive: the challenge is to effectively coordinate the implementation of the Critical Infrastructure Act and the Act on the Cyber Security of Key Service Operators and Digital Services Providers, since responsibilities may overlap, resources may be wasted and implementation may be delayed owing to a lack of clarity about which level of protection applies to which area. Moreover, stakeholders who are given equally comprehensive obligations under both laws may be reluctant, completing one set of obligations and not the other, even though the two are very similar. It is interesting to see that non-EU countries have "skipped a step" and regulated the cyber-security area, which mentions information and communication critical infrastructure, before regulating the critical infrastructure area as prescribed by Directive 2008/114/EC.
One perspective on this is that these countries already protect and have identified infrastructures of national importance (without specifically naming them "critical infrastructure"), as we can see in the analyzed countries (Montenegro and North Macedonia), but they lack pre-existing protection mechanisms in cyberspace, a contemporary challenge with major negative effects if a security incident occurs.
Regarding the protection of critical infrastructure from cyber threats, the analysis shows that it is a complicated matter that must be included in national preparedness planning as well as in the recovery planning of individual infrastructures of national importance. Despite identifying potential threats and taking security measures, the level of resilience and security may not be fully satisfactory, as threats are increasingly modified (for example, hybrid threats) and become an additional challenge, often exceeding national capacities and requiring international cooperation. The European Union, the supranational community analyzed in this paper (although the NATO Alliance, for example, develops its own mechanisms), takes cyber-security issues extremely seriously, placing them among the top priorities of modern security. Following European Commission guidance, Member States have significantly stepped up activities and organizational arrangements to deal with cyber threats, reinforcing existing mechanisms and legislative frameworks or creating new ones where they did not previously exist. Countries that are not EU Member States have followed their steps, recognizing the need to protect critical infrastructure and all the data it holds, especially since cyberspace has no boundaries and cannot be monitored comprehensively, and therefore calls for segmented protective actions. It is thus nationally important to build cyber-defence capabilities through education and training, various exercises and workshops, the development of information-sharing mechanisms and the synergy of various professional organizations at national and international levels.
The analysis in this paper shows that the region is making efforts to achieve a higher level of security in the national systems most important for the functionality of the community, i.e. critical infrastructure; these efforts, however, are bounded by the capacities and capabilities of the states. Without adequate protection against environmental impacts that affect the physical components of infrastructure, and without sufficient awareness (usually at the strategic level), it is difficult to foster collective risk awareness in virtual space. Implementation challenges often stem from a lack of knowledge of how to implement the processes prescribed by legislation, so sharing knowledge and experience is a good opportunity to bridge the gap in cyber-security development for the benefit of the entire community, the region and the global environment as a whole.
A Review of Different Approaches for Detecting Emotion from Text
Emotion detection and analysis is one of the challenging and emerging issues in the field of natural language processing (NLP). Detecting an individual's emotional state from textual data is an active area of study, along with identifying emotions from facial and audio records. The study of emotions can benefit many applications in various fields, including neuroscience, data mining, psychology, human-computer interaction, e-learning, information filtering systems and cognitive science. The rich source of text available in social media, blogs, customer reviews and news articles can be a useful resource for exploring various insights in text mining, including emotions. The purpose of this study is to provide a survey of existing approaches, models, datasets, lexicons, metrics and their limitations in the detection of emotions from text, useful for researchers carrying out emotion detection activities.
Introduction
Emotion is an interdisciplinary field involving psychology, computer science and other disciplines. In psychology, emotions are described as psychological states variously connected with thoughts, feelings, behavioural responses, and a degree of pleasure or displeasure [1]. In computer science, emotions can be identified from audio records, video records and text documents. Analysing emotions from text documents is challenging because textual expressions do not always use emotion-related words directly; emotion often emerges from understanding the meaning of concepts and the interaction of concepts mentioned in the text.
Emotion expression is a crucial form of communication in interpersonal relationships [2]. Emotions can be expressed as positive, negative or neutral [3]. In general, positive emotions include happiness, excitement, joy and pride, and negative emotions include sadness, disgust, fear and depression. Emotions are expressed in various forms to communicate, and a rich source of textual information comes from social networking sites such as YouTube, Twitter and Facebook, where people spend much of their time posting and expressing their emotions [4]. Textual data available on blogs can help identify the intensity of an individual's emotion. For example, "Really happy with this purchase" [5] expresses a customer's positive emotion about the purchase of a product. The term "Really" intensifies the emotion expressed by the customer; in this case it implies a more positive emotion. Another customer review of the same product [5] expressed a negative emotion about the purchase: "Really disappointed. Alexa has to be plug-in to wall socket all the time. My fault is for not checking".
Here the customer intensifies the negative comment about the purchase. Considering the intensity of emotion in text helps to predict an individual's emotions. It also helps to assess a person's emotional state, which can assist friends and family in taking preventive measures against accidents or self-harm. The contribution of this paper is to survey the state-of-the-art approaches used in the detection of emotion from text.
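The effect of an intensifier like "Really" on a polarity score can be sketched in a few lines. The word lists and weights below are illustrative assumptions for demonstration, not values from any published lexicon:

```python
# Minimal sketch of intensity-aware polarity scoring. The word lists and
# boost weights are illustrative assumptions, not a published lexicon.

POLARITY = {"happy": 1.0, "joy": 1.0, "disappointed": -1.0, "sad": -1.0}
INTENSIFIERS = {"really": 1.5, "very": 1.3}

def polarity_score(text):
    """Score a sentence; an intensifier boosts the next polar word."""
    tokens = text.lower().replace(".", " ").replace(",", " ").split()
    score, boost = 0.0, 1.0
    for tok in tokens:
        if tok in INTENSIFIERS:
            boost = INTENSIFIERS[tok]       # remember the boost
        elif tok in POLARITY:
            score += boost * POLARITY[tok]  # apply it to the polar word
            boost = 1.0
        else:
            boost = 1.0                     # boost only carries one step
    return score
```

With this toy scorer, "Really happy with this purchase" scores higher than "happy" alone, mirroring the intensity distinction described above.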
In section 2, relevant emotion models are discussed. In section 3, existing resources such as corpora and lexicons are discussed. In section 4, the computational approaches used in the literature are discussed. The evaluation metrics used in the literature are discussed in section 5. A summary of the existing approaches is given in section 6. Finally, the conclusion is presented in section 7.
Emotion Models
Emotions are recognized by humans, and this has influenced the way emotions are viewed in scientific terms. Researchers in psychological science believe that individuals have internal mechanisms for a limited collection of responses (usually happiness, sadness, anger, disgust, and fear) that can be assessed in a simple and objective manner once activated [6]. Three major approaches are used in psychology research to represent emotions [7]:

1.1 Categorical approach: This approach places emotions into categories or distinct classes that are basic and universally recognized [7]. The emotions are independent and also depend on how an experiencer perceives the situation; Paul Ekman [8] categorized them into six basic fundamental emotions: happiness, fear, sadness, surprise, disgust and anger.
The model by Robert Plutchik [9] proposed eight fundamental emotions: joy, trust (acceptance), fear, surprise, sadness, disgust, anger and anticipation.

1.2 Dimensional approach: This approach considers emotional states to be bound to each other rather than independent. Emotions are therefore represented in a dimensional space [7] (unidimensional or multi-dimensional) describing how they are connected based on the event and their degree of occurrence (low to high). This article explores mainly multi-dimensional models for emotional representation.
• Russell's circumplex model [11] represents emotions in a two-dimensional model. As emotions are not independent, they are distinguished along Arousal (activation and deactivation) and Valence (pleasantness and unpleasantness), where Arousal refers to how excited or apathetic an emotion is and Valence refers to how positive or negative an emotion is [7].
• In the two-dimensional family, Plutchik's wheel of emotions [9] represents emotions as a wheel, as seen in figure 2.2. The wheel places emotions in concentric circles: the inner core emotions are stronger variants of the eight basic emotions, the eight basic emotions occupy the middle ring, and combinations of the primary emotions lie in the outermost areas. The wheel shows how emotions are related depending on their location on it.
• The model in [12] represents emotions in a three-dimensional space comprising pleasantness (or positiveness), arousal (or responsiveness) and potency (or dominance). As in the 2D representation, emotions are distinguished by Arousal (activation and deactivation) and Valence (pleasantness and unpleasantness); the third dimension, Dominance (power [2]), refers to the degree to which the experiencer feels in control of the emotion. Figure 2.3 represents the 3D emotion space.
• Parrot [13] organized emotions into a three-level hierarchy of primary, secondary and tertiary emotions, with joy, anger, fear, love, sadness and surprise as the primary set.
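The dimensional models above can be made concrete by treating emotions as points in valence-arousal-dominance (VAD) space and classifying a new point by nearest neighbour. The coordinates below are rough illustrative assumptions on a [-1, 1] scale, not values from a published norm list such as ANEW:

```python
import math

# Illustrative (valence, arousal, dominance) coordinates; the numbers
# are assumptions for demonstration only.
VAD = {
    "joy":   ( 0.8,  0.5,  0.4),
    "anger": (-0.6,  0.7,  0.3),
    "fear":  (-0.7,  0.6, -0.5),
    "sad":   (-0.7, -0.4, -0.3),
}

def nearest_emotion(valence, arousal, dominance):
    """Map a point in VAD space to the closest labelled emotion."""
    def dist(label):
        return math.dist(VAD[label], (valence, arousal, dominance))
    return min(VAD, key=dist)
```

This illustrates why the dimensional view handles mixed or graded states: any point in the space is meaningful, whereas a categorical model must force it into one of a few discrete classes.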
1.3 Appraisal based approach:
The dimensional model can be seen as an extension of this approach. It contains componential emotion models based on appraisal theory, which describes how different emotions can arise from the same event in different participants and at different times [14]. Emotions change through significant components such as cognition, expression, physiology, motivation, motor reactions, and feelings [7]. In the categorical approach, emotional states are restricted to a limited number of distinct types, and it can be difficult to resolve a complex emotional situation or mixed emotions. In such cases the dimensional approach can be used, although not all basic emotions fit into it. Hence the componential model [7] offers an advantage, accommodating the variability of emotional states due to appraisal patterns.
Resources
In this section we discuss the corpora and lexicons used in research on detecting emotions from text.
Corpora
A corpus is a collection of linguistic data used to detect emotions from text. Table 2 describes the various corpora available in the literature.
Lexicons
The lexicons used to detect emotions from text that are available in the literature are discussed in Table 3. Two examples:
• AFINN: words manually rated for valence with an integer between minus five (-5) and plus five (+5); available at https://github.com/fnielsen/afinn.
• Sentiment140 Lexicon [36]: automatically generated from tweets that contain emoticons.
Computational approaches:
The different approaches proposed in the literature for identifying emotions from text are discussed in the following sections.
Keyword based approach:
This approach exploits keyword features combined with emotion labels using a lexicon such as WordNet-Affect or SentiWordNet; linguistic rules are applied and sentence structures are exploited. Text preprocessing is first performed on the given dataset, including stopword removal, tokenization and lemmatization. Keyword spotting and emotion intensity are then evaluated, together with negation checks. Finally, an emotion label is determined for each sentence. C. C. Liu et al. [14] note that this approach relies on a set of emotion-bearing keywords: a sentence without such a keyword is taken to contain no emotion. For example, "Hurray! Today, I passed my exam with distinction" and "Today, I passed my exam with distinction" could indicate the same emotion (joy), but if "hurray" is the only keyword used to detect this emotion, the sentence without "hurray" would remain undetected. They introduced an architecture aimed at providing a systematic understanding of textual input in diverse contexts with better flexibility, based on semantic analysis, the extraction of semantic information, the design of an ontology based on emotion models, and the adoption of new keywords through case-based reasoning. The authors of [45] used Twitter data to create generalized and customized user reviews based on user behaviour on Twitter, applying keyword-based preprocessing to extract emotions and sentiments from the Twitter data. Shivhare et al. [46] developed an emotion detector system based on an emotion ontology that achieved an accuracy of more than 75%.
Rahman et al. [47] proposed a methodology for sentence-level emotion detection and created 25 emotion classes. It is based on keyword analysis, emoticons, keyword negation, short words, a set of proverbs, etc., and achieved an accuracy of 80%.
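The core of the keyword-based approach, keyword spotting with a negation check, can be sketched as follows. The keyword lists here are small illustrative assumptions; a real system would draw on a lexicon such as WordNet-Affect:

```python
# Sketch of keyword spotting with a simple negation check. The keyword
# sets are illustrative assumptions, not a published lexicon.

EMOTION_KEYWORDS = {
    "joy":     {"happy", "hurray", "glad", "delighted"},
    "sadness": {"sad", "disappointed", "unhappy"},
    "fear":    {"afraid", "scared", "terrified"},
}
NEGATIONS = {"not", "no", "never"}

def detect_emotion(sentence):
    """Return the emotion of the first spotted keyword, or None."""
    tokens = sentence.lower().replace("!", " ").replace(".", " ").split()
    for i, tok in enumerate(tokens):
        for emotion, keywords in EMOTION_KEYWORDS.items():
            if tok in keywords:
                # A negation in the two preceding tokens suppresses the hit.
                if any(t in NEGATIONS for t in tokens[max(0, i - 2):i]):
                    return None
                return emotion
    return None  # no keyword found: no emotion detected
```

This reproduces the limitation discussed above: "Hurray! Today, I passed my exam" is detected as joy, while the same sentence without "hurray" returns no emotion at all.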
Corpus based approach:
Corpus-based emotion detection approaches use supervised learning to induce resources such as word-emotion lexicons from a text corpus that is labelled or weakly labelled with a predefined collection of emotions drawn from emotion theories such as Ekman's or Parrott's. Unsupervised learning is also applied, with resources such as Wikipedia used to model the syntactic and semantic patterns of text for emotion detection. Much of this work focuses on lexicons, motivated by the considerable amount of research in sentiment analysis.
Anil et al. [48] demonstrate how a generative unigram mixture model (UMM), which jointly models the emotionality and neutrality of terms, can learn a word-emotion interaction lexicon from labelled (blogs, news headlines) and weakly labelled (tweets) emotion text. Emotion language models (topics) generated by the UMM have significantly lower perplexity than those from state-of-the-art generative models such as supervised Latent Dirichlet Allocation (sLDA).
Flor et al. [49] used a multilingual dataset of tweets consisting of 8409 Spanish and 7303 English labelled tweets. They reported linguistic statistics and applied machine learning to detect emotions. Using 10-fold validation, they obtained an accuracy of 64% for Spanish and 55% for English. Rachman et al. [50] developed CBE (Corpus-Based Emotions) using the widely used emotion corpora WordNet-Affect Emotions (WNA) and the Affective Norms for English Words (ANEW). They showed that CBE improves emotion detection performance: the F-measure obtained with WNA and ANEW was 50%, and with CBE it was 61%.
Anil et al. [51] proposed a Unigram Mixture Model (UMM) based on a domain-specific emotion lexicon. It outperforms features derived from supervised Latent Dirichlet Allocation (sLDA) and Pointwise Mutual Information (PMI), as well as the combination of n-gram, lexicon and POS features. The F-score measures for n-gram, PMI+UMM for total emotion intensity, and the hybrid approach on the SemEval and ISEAR datasets with 10- and 5-fold cross-validation are 38.23%, 39.48%, 6.24% and 52.18%, respectively.
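A common building block of corpus-based lexicon induction, mentioned above as PMI, is the pointwise mutual information between a word and an emotion label in a labelled corpus. A minimal sketch, using a tiny made-up corpus (real work would use corpora like those in Table 2):

```python
import math
from collections import Counter

# Sketch of PMI-based word-emotion lexicon induction from a labelled
# corpus. The three-sentence corpus below is an illustrative assumption.

def pmi_lexicon(labelled_sentences):
    """labelled_sentences: list of (text, emotion) pairs.
    Returns {(word, emotion): PMI} over observed pairs."""
    word_counts, label_counts, pair_counts = Counter(), Counter(), Counter()
    n = 0
    for text, label in labelled_sentences:
        for word in set(text.lower().split()):   # presence, not frequency
            word_counts[word] += 1
            label_counts[label] += 1
            pair_counts[(word, label)] += 1
            n += 1
    return {
        (w, l): math.log2((c / n) / ((word_counts[w] / n) * (label_counts[l] / n)))
        for (w, l), c in pair_counts.items()
    }

corpus = [
    ("what a happy day", "joy"),
    ("so happy and glad", "joy"),
    ("this is a sad day", "sadness"),
]
lex = pmi_lexicon(corpus)
```

Words that co-occur with a label more often than chance get positive PMI ("happy" with joy), while neutral words shared across labels ("day") score near or below zero, which is exactly the emotionality/neutrality distinction the UMM work above models more rigorously.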
4.3 Rule based approach
To manipulate knowledge in order to view information in an advantageous way, the rule-based approach is used. It begins with text preprocessing, including stop-word elimination, POS tagging, tokenization, etc. Rules for emotion are then derived using concepts from statistics, linguistics and computation, and the best rules are selected. Finally, the rules are applied to emotion datasets to determine the emotion labels.
An alternative approach to improving the sentiment classification of user reviews in online communities is proposed by Asghar et al. [53]: lexicon-enhanced sentiment analysis based on rule-based classification, integrating emoticons, modifiers and domain-specific terms, in addition to general-purpose sentiment terms, to evaluate feedback posted in online reviews.
Dibyendu et al. [54] proposed a sentence-level emotion detection technique applying semantic rules, including negation words, and obtained an F1 score of 66.18%. Srinivas Badugu and Matla Suhasini [55] developed a rule-based approach that detects emotions from tweets, classifies them into different emotion categories, and achieved an accuracy of 85%.
4.4 Machine learning approach
Emotion detection from text is a classification problem involving models from the disciplines of natural language processing (NLP) and machine learning (ML). Machine learning is categorized into unsupervised and supervised learning. Naive Bayes (NB), Support Vector Machines (SVM), conditional random fields, etc., are among the most common traditional supervised machine learning methods.
Hasan et al. [56] detected emotions in text stream data using both online and offline messages. Support Vector Machine, Naive Bayes and Decision Tree (DT) classifiers were used and achieved 90% accuracy. For YouTube comments, Tripto and Ali [34] applied machine learning models and obtained a classification accuracy of 59.2%, and 65.97% and 54.24% accuracy for multiclass sentiment labels.
Merav et al. [57] proposed a model for children with communication problems to help them learn how to react to social situations. They used a dataset consisting of non-insulting sentences (1241) and insulting sentences (1255). Applying ML algorithms, they obtained 80% recall and more than 75% precision with the SVM method, and precision and recall above 75% with the Tree Bagger and Multilayer Neural Network methods.
Suhasini and Badugu [58] implemented machine learning approaches for detecting emotions in Twitter messages. They showed that the Naive Bayes algorithm was more efficient than K-Nearest Neighbour (KNN), obtaining accuracies of 72.60% and 55.50% respectively. Fakhri [59] developed an emotion recognition and prediction system for detecting emotions in text, using supervised learning algorithms such as Multinomial NB, DT, SVM and KNN on the ISEAR dataset; the highest accuracy, 64.08%, was obtained with Multinomial Naive Bayes.
R. Jayakrishna et al. [60] used a machine learning approach on a Malayalam novel; using SVM, they classified sentences in the novel and obtained precisions of 0.94 for happy, 0.92 for sad, 0.93 for fear, 0.90 for anger and 0.90 for surprise. Sonia and Kavitha [61] proposed an algorithm to identify the intensity of emotions in a Twitter dataset, considering four types of emotion in tweets: happy, sad, angry and terror.
Wikarsa and Thahir [62] implemented the Naive Bayes machine learning algorithm on a dataset of 105 tweets, applied 10-fold cross-validation, and obtained an accuracy of 83%. Forugh and Hooman [63] employed a vector similarity measure (VSM), a keyword-based approach and STASIS to detect different categories of emotions in text and obtained a precision of 0.53.
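To make the supervised approach concrete, here is a minimal multinomial Naive Bayes classifier for emotion labels, hand-rolled with the standard library for illustration (the training sentences are made up; published work typically uses libraries such as scikit-learn on the corpora of Table 2):

```python
import math
from collections import Counter, defaultdict

# Minimal multinomial Naive Bayes with Laplace smoothing, as a sketch of
# the supervised ML approach. Training data below is illustrative only.

class NaiveBayes:
    def fit(self, texts, labels):
        self.word_counts = defaultdict(Counter)  # per-label word counts
        self.label_counts = Counter(labels)
        self.vocab = set()
        for text, label in zip(texts, labels):
            for word in text.lower().split():
                self.word_counts[label][word] += 1
                self.vocab.add(word)
        return self

    def predict(self, text):
        def log_prob(label):
            lp = math.log(self.label_counts[label] / sum(self.label_counts.values()))
            total = sum(self.word_counts[label].values())
            for word in text.lower().split():
                # Add-one smoothing avoids zero probability for unseen words.
                lp += math.log((self.word_counts[label][word] + 1)
                               / (total + len(self.vocab)))
            return lp
        return max(self.label_counts, key=log_prob)

clf = NaiveBayes().fit(
    ["i am so happy today", "what a joyful surprise",
     "i feel sad and alone", "this is a gloomy sad day"],
    ["joy", "joy", "sadness", "sadness"],
)
```

The classifier scores each label by its log prior plus the smoothed log likelihood of the words, which is exactly the NB formulation the accuracy figures above are based on.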
Deep learning approach
Deep learning (DL) is a subset of ML in which programs learn through a hierarchy of concepts, where each concept is described in terms of its relation to simpler concepts. This methodology helps a program to learn complex ideas by building them on simpler ones [64].
Many research papers use the long short-term memory (LSTM) model. An LSTM is a special type of recurrent neural network (RNN) with long-term dependency management capabilities; it overcomes the vanishing and exploding gradient problems prevalent in RNNs.
Baziotis et al. [65] worked on the SemEval-2018 Task 1 competition using a deep learning model: a two-layer bidirectional long short-term memory (Bi-LSTM) network with a multilayer self-attention mechanism. They preprocessed the text with the ekphrasis tool [66]. Because of the limited amount of training data, they used transfer learning, pre-training the Bi-LSTMs on the SemEval-2017 Task 4A dataset [67]. A collection of 550 million English tweets was used for text preprocessing, word2vec embedding training [68], and affective word embeddings for calculating the required word statistics. The experimental results revealed that transfer learning did not outperform the random initialization model. Zishan Ahmad et al. [69] designed a DL classifier for Hindi-text emotion detection and showed that information gathered from resource-rich languages can be extended to other language domains using transfer learning and cross-lingual embeddings; they obtained an F1 score of 0.53.
Seo-Hui Park et al. [70] developed an emotion detection model using a CNN with 144,701 tweets and also used ROC story data. The joy emotion was found with the highest accuracy of 73.3%, while anger had the lowest accuracy of 36.7% and the lowest Kappa score of 0.216.
Xiao Zhang et al. [71] introduced a factor graph model to detect multiple emotions in online social networks and proposed a multilabel learning algorithm exploiting contextual information, obtaining an F1 score of 62.7. For the other methods, the F1 scores were 57.0 for Binary Relevance (BR), 57.7 for the Back-Propagation Neural Network, 54.1 for the Probabilistic Classifier Chain, 55.7 for Label Combination, and 56.0 for multi-label K-Nearest Neighbor.
Malte and Ratadiya [72] developed a BERT (Bidirectional Transformer) model, tested it on both Hindi and English text, and attained F1 scores of 0.4521 and 0.5520 respectively. Waleed Ragheb et al. [73] developed a model using the SemEval-2019 Task 3 dataset.
Their proposed model uses deep transfer learning, a self-attention mechanism and turn-based conversational modeling to classify emotions, obtaining an F1 score of 0.7582. Ma et al. [74] used a Bi-LSTM to distinguish the emotions happy, sad and angry in text with emoji assertions, noticing that their method's performance exceeded the baseline models for happy and angry but not for sad. Bi-LSTMs are capable of extracting contextual knowledge from documents; they obtained a micro F1 score of 0.7557.
Huang et al. [75] used two evaluation datasets given to participants: Test1 containing 2755 dialogues and Test2 containing 5509 dialogues. Using a bidirectional LSTM approach they achieved a micro F1 score of 79.59, with the best performance obtained on the class "Sad" and the worst on "Happy".
4.5 Hybrid approach
The hybrid approach combines different approaches into a unified model. It has a higher likelihood of surpassing the individual approaches, leveraging their strengths while concealing their corresponding limitations.
Riahi and Safari [79] proposed an approach for emotion detection in implicit texts, implementing a combinational framework based on three subsystems. Each analyses the input from a different perspective and produces an emotion label as output. The first subsystem is a machine learning algorithm; the second is a mathematical method based on a vector space model (VSM); and the third is a keyword-based sub-model, with an information fusion component aggregating the final output of the main system. The test text is annotated only if all three subsystems agree on the same emotion type; otherwise it is left unannotated. The efficiency of the proposed method is 9.13% higher than the machine learning subsystem, 16.6% better than the VSM, and 23% better than the keyword-based method.
Ramalingam et al. [80] developed a hybrid model combining keyword-based and learning-based methods and obtained highly accurate emotion detection from text. Angelina et al. [81] used a Twitter dataset with the NRC emotion lexicon and SVM for multiclass classification; the implementation obtained an accuracy of 84.92% on WEKA and 88.01% on Spark.
Hamed Khanpour and Cornelia Caragea [82] note that developing systems for the health domain, with or without a lexicon, is often expensive. Their proposed system combines CNN and LSTM models to capture the hidden semantics in online health messages, and a ConvLexLSTM variant obtains high performance. One dataset, denoted B-DS, contains 1066 sentences from the Cancer Survivors' Network (CSN); another, on lung cancer discussions and denoted L-DS, contains 1041 sentences; joy and sad sentences were found more often than other emotions. The highest F1 scores of ConvLexLSTM are 93.2 and 89.8 for joy, and 92.3 and 89.4 for sad. Perikos and Hatzilygeroudis [83] developed a model to classify emotions and found its performance satisfactory: Naive Bayes, maximum entropy, a knowledge-based tool and an ensemble classifier obtained accuracies of 77%, 85%, 80% and 87% respectively.
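The fusion step of a hybrid system can be sketched as a small voting function: each sub-system emits a label (or abstains), and the text is annotated only under strict agreement, similar in spirit to the combinational framework of [79]. The sub-system outputs in the example are hypothetical:

```python
from collections import Counter

# Sketch of an information-fusion component for a hybrid system.
# predictions: labels from the sub-systems, None meaning "abstain".

def fuse(predictions, require_unanimous=True):
    votes = [p for p in predictions if p is not None]
    if not votes:
        return None
    if require_unanimous:
        # Strict agreement: annotate only if all voters agree.
        return votes[0] if len(set(votes)) == 1 else None
    # Relaxed variant: simple majority vote.
    label, count = Counter(votes).most_common(1)[0]
    return label if count > len(votes) / 2 else None
```

Under strict agreement the system abstains on conflicting sub-system outputs, trading coverage for precision; the majority-vote variant annotates more texts at the cost of occasional disagreement-driven errors.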
Evaluation Metrics:
Evaluation metrics are used to measure how well models fit. The most common metrics are the Kappa coefficient, multi-label accuracy (Jaccard accuracy), F-score, precision and recall, accuracy, Pearson correlation, 10-fold cross-validation, and chi-square.
Kappa Coefficient: [70]
It is a statistical measure of inter-annotator reliability or agreement. The Kappa coefficient is used to assess qualitative documents and determine the agreement between two annotators. Equation (1) is used to calculate kappa:

kappa = (p0 - pe) / (1 - pe) (1)

where p0 is the annotators' relative observed agreement and pe is the hypothetical probability of chance agreement; p0 and pe are calculated from the observed data as the probability of each annotator randomly assigning each category. Kappa ranges from -1 to 1, with 1 indicating perfect agreement.
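Equation (1) can be computed directly from two annotators' label sequences; a minimal sketch (the label lists in the test are made up):

```python
# Cohen's kappa following equation (1): kappa = (p0 - pe) / (1 - pe).

def cohens_kappa(labels_a, labels_b):
    n = len(labels_a)
    # p0: relative observed agreement between the two annotators.
    p0 = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # pe: chance agreement from each annotator's category frequencies.
    categories = set(labels_a) | set(labels_b)
    pe = sum((labels_a.count(c) / n) * (labels_b.count(c) / n)
             for c in categories)
    return (p0 - pe) / (1 - pe)
```

Perfect agreement yields 1.0, while agreement no better than chance yields 0.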
Jaccard
Accuracy: [86] It defined as the size of the intersection divided by the size of the union of two label sets and is used to compare set of predicted labels for a sample to the corresponding set of labels in original. It ranges from 0 to 1. The following equation (2) is used to calculate Jaccard Accuracy is:
Precision, Recall, F-Score, Accuracy:
The Precision (P) defined as the number of true positives (Tp) over the number of true positives plus the number of false positives (Fp). [52], [57], [60], [63] .The following equation (3) is used to calculate the Precision: The Recall is defines as the number of true positives over the number of true positives plus the number of false negatives [52], [57]. The following equation (4) is used to calculate Recall is: where R represents Recall, Tp is True positive and Fn is False negative.
F-Score measure is used to provide a score that balances both the concerns of precision and recall in one number. And macro F1 score is used to measure when multiple classes are declared. MacroF1 score has best value =1 and worst value as 0. [22], [50], [51], [71], [72], [73], [77], [69], [76], [82]. The following equation (5) is used to calculate the F1 score is: Where P denotes as Precision and R denoted as Recall.
Pearson Correlation [85]
It is the statistics that measure the statistical relationship or is the best method for an association, between the two variables that are continuous because it is based on the covariance of the two variables, and then it is divided by the product of their standard deviations. The following equation (8) is used to calculate Pearson Correlation is: Eq. 8 Where r is a correlation coefficient, xi is values of the x-variable in a sample, is mean of the values of the x-variable, yi is values of the y-variable in a sample and is mean of the values of the y-variable 5.5 10-fold cross validation: [49], [51], [62] The cross-validation technique is used to partitioning the original sample into a training set to train the model and a test set to validate it and to evaluate predictive models in machine learning. This procedure is named as k fold cross validation where the original data splits into k subsample. If a specific value is chosen for k, it can be used in the model reference instead of k, such as k=10 becomes a 10-fold cross-validation. [84] To test the independence of two cases, a chi-square test is used in statistics. We can get observed count O and predicted count E given the data of two variables. Chi-Square tests how the predicted number E and the measured number O deviate from each other.
Chi Square
The following equation (9) is used to calculate ChiSquare is: Eq. (9) Table 4 discuss about the summary of the existing approaches, contribution and limitations. In this paper, a comparison of various approaches for detecting an individual's emotional state from textual data has been undertaken. The three major approaches of the emotion modeling in the psychology research such as Categorical approach, Dimensional approach and Appraisal based approaches was discussed. Further, different computational approaches proposed for emotion detection from text such as Keyword based approach, Rule based approach, Machine learning-based approaches and Hybrid approaches was discussed. It further explores existing state-of-the-art with focus on their approaches applied, evaluation measures, datasets used, signification contributions and limitations useful for budding researchers. | 2021-05-07T00:03:59.958Z | 2021-03-01T00:00:00.000 | {
"year": 2021,
"sha1": "60e203f33a13b3f395b2367a1daf87875b7adf25",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/1110/1/012009",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "822905de8455738529b7a1bc261143b0a0f60309",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Physics"
]
} |
216408857 | pes2o/s2orc | v3-fos-license | Highly Dispersed Pt Nanoparticles on N-Doped Ordered Mesoporous Carbon as Effective Catalysts for Selective Hydrogenation of Nitroarenes
Highly-dispersed Pt nanoparticles supported on nitrogen-modified CMK-3 mesoporous carbon (Pt/N-CMK-3) were first fabricated by a two-step impregnation route. The influences of N content on the catalyst porous structure, Pt nanoparticle size, surface properties, and interaction between Pt species and the support were investigated in detail using N2 sorption, X-ray diffraction (XRD), transmission electron microscopy (TEM), and X-ray photoelectron spectra (XPS). The N species acted as anchoring sites for the stabilization of Pt particles. Benefiting from the formation of ultrafine metal nanoparticles, the Pt/N-CMK-3 exhibited excellent catalytic activity and selectivity for the selective hydrogenation of nitro aromatics to the corresponding anilines with hydrogen. The Pt/N-CMK-3 catalyst could be reused eight times and keep its catalytic performance.
Introduction
Substituted aromatic amines are crucial industrial intermediates for the production of various fine chemicals, such as dyestuffs, agrochemicals, pharmaceuticals, and polymers, and most of them are synthesized by catalytic reduction of corresponding nitro aromatics [1][2][3]. Particularly, sheterogeneous catalytic reduction over supported metal catalysts as an environment-friendly and efficient protocol attracts much interest [4,5]. The selective reduction of nitro aromatics over supported metal catalysts was widely adopted with different hydrogen sources, such as hydrazine hydrate, sodium borohydride, gas hydrogen, formic acid, ammonia borane, and so on [6-9]. Among them, H 2 , a low cost, non-toxic, and the cleanest hydrogen donor, is recognized as the most ideal reducing agent for the hydrogenation of nitro aromatic compounds in industrial production.
Heterogeneous noble metal-based catalysts such as Ru, Rh, Pt, and Pd have been reported to be efficient for the hydrogenation of nitroarenes [10,11]. Downsizing the noble metal particles to a few nanometers can dramatically improve their catalytic activity, due to the increasing surface-to-atom ratio [12,13]. However, the supported small metal nanoparticles often suffer from serious aggregation because of the high surface energy [14][15][16]. In addition, it remains challenging to keep other reducible groups, especially the halogen groups (F, Cl, Br, and I), intact at high conversion rates when using noble metal catalysts with H 2 as the hydrogen source [17][18][19]. Generally, the catalytic performance is closely linked to the metal particle sizes, the structure or surface properties of the support, and the interaction effect between the metal and support. The activity and selectivity of the catalyst for
Catalyst Characterization
N 2 adsorption-desorption isotherms and corresponding pore size distribution profiles of the CMK-3, Pt/CMK-3, and Pt/N-CMK-3-x samples were recorded. As depicted in Figure 1a, all of the Pt/N-CMK-3-x samples exhibited typical IV isotherms with distinct hysteresis at relative higher pressure (P/P 0 > 0.4), ascribing to the characteristics of mesoporous structure, similar to that of the CMK-3 support. This result demonstrated that the mesoporous structure of CMK-3 was still kept after being incorporated with Pt, C, and N components. The similar pore size distributions of the samples in Figure 1b also proved this point, and the pore sizes were primarily in the range of 3-4 nm. The detailed textural parameters and physical properties of the samples are shown in Table 1. With the increased amount of 2-methylimidazole, the specific surface area, pore volume, and pore size of the Pt/N-CMK-3-x showed a remarkable decline, implying the successful incorporation of C and N from the pyrolysis of 2-methylimidazole. The final contents of Pt and N in the samples were determined by inductively coupled plasma atomic emission spectroscopy (ICP-AES) and CHN elemental analyzer ( Table 1). As can be seen, the Pt contents were all located at 1.90 ± 0.05 wt%, while the N contents showed obvious increase from 2.1 to 3.5 wt% with increasing the amount of 2-methylimidazole. However, when the amount of 2-methylimidazole was continuously increased, the Pt/N-CMK-3-3 showed only a small increase in N contents about 0.2 wt%. Maybe the excess 2-methylimidazole did not incorporate in the CMK-3 frameworks, due to N loss during the synthesis at 800 • C. determined by inductively coupled plasma atomic emission spectroscopy (ICP-AES) and CHN elemental analyzer (Table 1). As can be seen, the Pt contents were all located at 1.90 ± 0.05 wt%, while the N contents showed obvious increase from 2.1 to 3.5 wt% with increasing the amount of 2-methylimidazole. 
However, when the amount of 2-methylimidazole was continuously increased, the Pt/N-CMK-3-3 showed only a small increase in N contents about 0.2 wt%. Maybe the excess 2-methylimidazole did not incorporate in the CMK-3 frameworks, due to N loss during the synthesis at 800 °C. The XRD patterns of the CMK-3 and Pt/N-CMK-3 materials are displayed in Figure 2. All samples showed broadened diffraction peaks at 2θ = 43 • , indicating the presence of graphitic carbon. For the Pt/CMK-3 catalyst, an obvious diffraction peak at 2θ = 39.8 • was assigned to (111) lattice planes of metal Pt (PDF 70-2431). When the N species were incorporated in the matrix, the diffraction peak for metal Pt became weak and wide. As the N content was further increased, the diffraction peak of Pt (111) in Pt/N-CMK-3-2 and Pt/N-CMK-3-3 was disappeared completely, indicating that the Pt species were highly dispersed in the N-CMK-3 matrix. These results revealed that the incorporation of N in the carbon matrix benefited the dispersion of Pt nanoparticles on the surface, due to the complexing and stabilizing effect. The XRD patterns of the CMK-3 and Pt/N-CMK-3 materials are displayed in Figure 2. All samples showed broadened diffraction peaks at 2θ = 43 o , indicating the presence of graphitic carbon. For the Pt/CMK-3 catalyst, an obvious diffraction peak at 2θ = 39.8 o was assigned to (111) lattice planes of metal Pt (PDF 70-2431). When the N species were incorporated in the matrix, the diffraction peak for metal Pt became weak and wide. As the N content was further increased, the diffraction peak of Pt (111) in Pt/N-CMK-3-2 and Pt/N-CMK-3-3 was disappeared completely, indicating that the Pt species were highly dispersed in the N-CMK-3 matrix. These results revealed that the incorporation of N in the carbon matrix benefited the dispersion of Pt nanoparticles on the surface, due to the complexing and stabilizing effect. 
All the samples displayed obvious order pore channels, similar to pure CMK-3 (not shown). For Pt/CMK-3, plenty of darker metal nanoparticles with average particle sizes of 5.5 nm were distributed on the support. As the N species were introduced, the average size of Pt nanoparticles was remarkably decreased to 2.9 nm in Pt/N-CMK-3-1, which was attributed to the significant role of N in anchoring metal particles. It can be seen from Table 1 that the Pt particle sizes of Pt/N-CMK-3 and Pt/N-CMK-3-1 counted by TEM were approximately equal to the mean sizes of Pt crystallites by XRD. When the N contents were continuously increased, the mean sizes of the Pt particles reached a minimum value of about 1.2 nm in Pt/N-CMK-3-2 catalyst. These results, in agreement with the XRD result in Figure 2, confirmed that the N content had significant effects on All the samples displayed obvious order pore channels, similar to pure CMK-3 (not shown). For Pt/CMK-3, plenty of darker metal nanoparticles with average particle sizes of 5.5 nm were distributed on the support. As the N species were introduced, the average size of Pt nanoparticles was remarkably decreased to 2.9 nm in Pt/N-CMK-3-1, which was attributed to the significant role of N in anchoring metal particles. It can be seen from Table 1 that the Pt particle sizes of Pt/N-CMK-3 and Pt/N-CMK-3-1 counted by TEM were approximately equal to the mean sizes of Pt crystallites by XRD. When the N contents were continuously increased, the mean sizes of the Pt particles reached a minimum value of about 1.2 nm in Pt/N-CMK-3-2 catalyst. These results, in agreement with the XRD result in Figure 2, confirmed that the N content had significant effects on the sizes of metal Pt crystallites or particles formed, which will be analyzed in detail in subsequent XPS analysis. 
In addition, the high-angle annular dark-field scanning TEM (HAADF-STEM) and the elemental mapping images demonstrated that Pt and N species were uniformly dispersed in Pt/N-CMK-3-2 ( Figure 3e).
Catalysts 2020, 10, x FOR PEER REVIEW 4 of 13 the sizes of metal Pt crystallites or particles formed, which will be analyzed in detail in subsequent XPS analysis. In addition, the high-angle annular dark-field scanning TEM (HAADF-STEM) and the elemental mapping images demonstrated that Pt and N species were uniformly dispersed in Pt/N-CMK-3-2 ( Figure 3e). The valence states of Pt and N types of the prepared materials were characterized by XPS spectra. As depicted in Figure 4a, the Pt 4f XPS spectra of Pt/CMK-3 showed symmetric doublet peaks at the binding energy of 71.7 and 75.2 eV, respectively, which were attributed to Pt 0 . However, this binding energy value was higher than the binding energy of bulk Pt (4f7/2 = 71.2 eV) [28][29][30], indicating the interaction between Pt particles and the CMK-3. When the CMK-3 support was treated with 2-methylimidazole and the N species were introduced into the matrix, the binding energies of Pt 0 4f were shifted to higher values with increasing N content, implying the existence of an interaction effect between Pt and N species due to a charge-transfer. The N species in the Pt/N-CMK-3-2 was further identified by the N 1s XPS spectrum in Figure 4b. Three peaks at 397.5, 399.4, and 400.7 eV were assigned to pyridinic-type, pyrrolic-type, and graphitic-type N [31][32][33][34], respectively. It has been reported that the pyrrolic and pyridinic N sites act as anchoring sites for the stabilization of Pt particles and suppressing their agglomeration [35,36]. The relative peak area percentage of each type of N in Pt/N-CMK-3-x catalysts is listed in Table 2. The surface N compositions were close to those determined by the CHN elemental analyzer in Table 1. When the N content was increased, the relative peak intensity for graphitic N was strengthened, and in the meanwhile, the relative peak intensity for pyrrolic N was found to be decreased. However, the peak intensities for the pyridinic N showed no obvious change. 
Therefore, the total content of pyrrolic and pyridinic N for the Pt/N-CMK-3-2 showed a maximum value, due to the combination of the two factors of the increases in N contents and the decrease in pyrrolic N on the catalyst surfaces. The valence states of Pt and N types of the prepared materials were characterized by XPS spectra. As depicted in Figure 4a, the Pt 4f XPS spectra of Pt/CMK-3 showed symmetric doublet peaks at the binding energy of 71.7 and 75.2 eV, respectively, which were attributed to Pt 0 . However, this binding energy value was higher than the binding energy of bulk Pt (4f 7/2 = 71.2 eV) [28][29][30], indicating the interaction between Pt particles and the CMK-3. When the CMK-3 support was treated with 2-methylimidazole and the N species were introduced into the matrix, the binding energies of Pt 0 4f were shifted to higher values with increasing N content, implying the existence of an interaction effect between Pt and N species due to a charge-transfer. The N species in the Pt/N-CMK-3-2 was further identified by the N 1s XPS spectrum in Figure 4b. Three peaks at 397.5, 399.4, and 400.7 eV were assigned to pyridinic-type, pyrrolic-type, and graphitic-type N [31][32][33][34], respectively. It has been reported that the pyrrolic and pyridinic N sites act as anchoring sites for the stabilization of Pt particles and suppressing their agglomeration [35,36]. The relative peak area percentage of each type of N in Pt/N-CMK-3-x catalysts is listed in Table 2. The surface N compositions were close to those determined by the CHN elemental analyzer in Table 1. When the N content was increased, the relative peak intensity for graphitic N was strengthened, and in the meanwhile, the relative peak intensity for pyrrolic N was found to be decreased. However, the peak intensities for the pyridinic N showed no obvious change. 
Therefore, the total content of pyrrolic and pyridinic N for the Pt/N-CMK-3-2 showed a maximum value, due to the combination of the two factors of the increases in N contents and the decrease in pyrrolic N on the catalyst surfaces.
Catalytic Reaction
Nitrobenzene was first conducted as a model compound over the Pt/N-CMK-3-2 to optimize the reaction conditions, and the results are listed in Table 3. Firstly, the hydrogenation of nitrobenzene with H2 was investigated in different solvents. All the solvents like ethylacetate, toluene, ethyl ether, methanol, and ethanol showed excellent catalytic activity, but ethanol
Catalytic Reaction
Nitrobenzene was first conducted as a model compound over the Pt/N-CMK-3-2 to optimize the reaction conditions, and the results are listed in Table 3. Firstly, the hydrogenation of nitrobenzene with H 2 was investigated in different solvents. All the solvents like ethylacetate, toluene, ethyl ether, methanol, and ethanol showed excellent catalytic activity, but ethanol (nitrobenzene/ethanol = 4:1 (mmol mL −1 )) gave the highest nitrobenzene conversion (entries 1-7). Secondly, the results showed that the nitrobenzene conversion steadily increased with increasing H 2 pressure from 0.5 to 2 MPa, indicated that the H 2 pressure influenced the dissolution of H 2 in the solvent. The reaction rate did not change obviously when increasing H 2 pressure from 2 to 6 MPa (entry 5 and entries 10,11), implying the absence of a hydrogen transport limitation at high pressure. Finally, the nitrobenzene conversion continually increased with the temperature without loss of selectivity (100%) (entries [12][13][14]. Table 3. Catalytic performance of Pt/N-CMK-3-2 catalysts for hydrogenation of nitrobenzene a . Catalysts 2020, 10, x FOR PEER REVIEW 6 of 13 (nitrobenzene/ethanol = 4:1 (mmol mL −1 )) gave the highest nitrobenzene conversion (entries 1-7). Secondly, the results showed that the nitrobenzene conversion steadily increased with increasing H2 pressure from 0.5 to 2 MPa, indicated that the H2 pressure influenced the dissolution of H2 in the solvent. The reaction rate did not change obviously when increasing H2 pressure from 2 to 6 MPa (entry 5 and entries 10,11), implying the absence of a hydrogen transport limitation at high pressure. Finally, the nitrobenzene conversion continually increased with the temperature without loss of selectivity (100%) (entries 12-14). We compared the initial conversions of nitrobenzene as a model compound over Pt/N-CMK-3-x catalysts with H2 in ethanol for 10 min at 40 °C, and the results are summarized in Table 4. 
As can be seen, the CMK-3 and N-CMK-3-2 materials provided no activity (entries 1 and 2). When the CMK-3 was treated with 2-methylimidazole, the Pt/N-CMK-3-1 showed a nitrobenzene conversion of 8.0%, which was much higher than that of Pt/CMK-3 (3.9%). The Pt/N-CMK-3-2 with the minimum particle size showed the highest nitrobenzene conversion. The turnover frequencies (TOFs) for the Pt/N-CMK-3-2 exhibited high values of 18,819 h -1 . However, when the N content was further increased, the initial conversions of nitrobenzene of Pt/N-CMK-3-3 showed a decline. These variations matched well with the results of TEM. These results demonstrated that the activity of the Pt/N-CMK-3-x for the selective hydrogenation of nitrobenzene was significantly influenced by the N content inducing changes in the size of the Pt nanoparticles. We compared the initial conversions of nitrobenzene as a model compound over Pt/N-CMK-3-x catalysts with H 2 in ethanol for 10 min at 40 • C, and the results are summarized in Table 4. As can be seen, the CMK-3 and N-CMK-3-2 materials provided no activity (entries 1 and 2). When the CMK-3 was treated with 2-methylimidazole, the Pt/N-CMK-3-1 showed a nitrobenzene conversion of 8.0%, which was much higher than that of Pt/CMK-3 (3.9%). The Pt/N-CMK-3-2 with the minimum particle size showed the highest nitrobenzene conversion. The turnover frequencies (TOFs) for the Pt/N-CMK-3-2 exhibited high values of 18,819 h −1 . However, when the N content was further increased, the initial conversions of nitrobenzene of Pt/N-CMK-3-3 showed a decline. These variations matched well with the results of TEM. These results demonstrated that the activity of the Pt/N-CMK-3-x for the selective hydrogenation of nitrobenzene was significantly influenced by the N content inducing changes in the size of the Pt nanoparticles.
The scope of Pt/N-CMK-3-2 in hydrogenation of nitroarenes, a series of nitro compounds with diverse substituent groups, were tested under the optimized reaction conditions, and the results are summarized in Table 5. To our great delight, the Pt/N-CMK-3-2 exhibited high activity and selectivity for the hydrogenation of substituted nitroarenes. Apart from nitrobenzene ( Table 5, entry 1), the substituted nitrobenzenes having nonreducible groups like -CH 3 , -NH 2 , and CH 3 O-were also furnished with excellent yield (>99%) (entries 2-7). It has been reported that supported noble catalysts, such as Pt, Pd, Rh, etc., display poor chemoselectivity to the hydrogenation of the nitro group when halogen groups exist in the same molecule [37]. Herein, no obvious dehalogenation product was observed in the selective hydrogenation of the halogen-substituted nitroarenes (entries [8][9][10][11][12][13]. Moreover, other reducible groups such as -COOCH 2 CH 3 , -COOH, -CN, and -CHO on the nitrobenzene were also well tolerated to give the corresponding amines in high selectivity (entries [14][15][16][17]. Also, for heterocyclic nitroarenes containing N element, full conversion and high selectivity of >99.0% was achieved (entries [19][20][21][22]. In contrast, the hydrogenation of p-chloronitrobenzene over Pt/CMK-3 showed not only low catalytic activity but also poor selectivity (entry 23). The high chemoselectivity of the Pt/N-CMK-3-2 catalysts for the hydrogenation of nitroarenes to anilines was likely due to the higher reactivity of nitro group than other functional groups. The stability and reusability of the Pt/N-CMK-3-2 catalyst were further investigated by the hydrogenation of nitrobenzene at 40 • C. As presented in Figure 5, the Pt/N-CMK-3-2 exhibited a nitrobenzene conversion of 80.1% in the first cycle. After completion of the reaction, the catalyst was separated by filtration, washed with ethanol three times, and dried overnight at 60 • C. 
Then the recovered catalyst was directly used for the next run without any reactivation or purification. The nitrobenzene conversion remained at 78.7% for the eight runs, and the aniline selectivity was kept at 100%. After each cycling reaction, the Pt contents in the product solution were determined by ICP-AES. It was found that the solution hardly contained the Pt element (<1 ppm). As can be seen in Table 1, the total Pt content in the spent Pt/N-CMK-3-2 was 1.86 wt%, which was very close to that of before the reaction. It was demonstrated that the Pt/N-CMK-3-2 possessed good recyclability and has great potential for practical applications in the selective hydrogenation of nitroarenes in the future.
Catalyst Preparation
Ordered mesoporous silica SBA-15 was obtained using P123 as a structure directing agent and TEOS as the silica source under acidic conditions according to the document [38]. CMK-3 was 1 likely due to the higher reactivity of nitro group than other functional groups. The stability and reusability of the Pt/N-CMK-3-2 catalyst were further investigated by the hydrogenation of nitrobenzene at 40 °C. As presented in Figure 5, the Pt/N-CMK-3-2 exhibited a nitrobenzene conversion of 80.1% in the first cycle. After completion of the reaction, the catalyst was separated by filtration, washed with ethanol three times, and dried overnight at 60 °C. Then the recovered catalyst was directly used for the next run without any reactivation or purification. The nitrobenzene conversion remained at 78.7% for the eight runs, and the aniline selectivity was kept at 100%. After each cycling reaction, the Pt contents in the product solution were determined by ICP-AES. It was found that the solution hardly contained the Pt element (<1 ppm). As can be seen in Table 1, the total Pt content in the spent Pt/N-CMK-3-2 was 1.86 wt%, which was very close to that of before the reaction. It was demonstrated that the Pt/N-CMK-3-2 possessed good recyclability and has great potential for practical applications in the selective hydrogenation of nitroarenes in the future. 1.5 >99.9 99.9 2 likely due to the higher reactivity of nitro group than other functional groups. The stability and reusability of the Pt/N-CMK-3-2 catalyst were further investigated by the hydrogenation of nitrobenzene at 40 °C. As presented in Figure 5, the Pt/N-CMK-3-2 exhibited a nitrobenzene conversion of 80.1% in the first cycle. After completion of the reaction, the catalyst was separated by filtration, washed with ethanol three times, and dried overnight at 60 °C. Then the recovered catalyst was directly used for the next run without any reactivation or purification. 
The nitrobenzene conversion remained at 78.7% for the eight runs, and the aniline selectivity was kept at 100%. After each cycling reaction, the Pt contents in the product solution were determined by ICP-AES. It was found that the solution hardly contained the Pt element (<1 ppm). As can be seen in Table 1, the total Pt content in the spent Pt/N-CMK-3-2 was 1.86 wt%, which was very close to that of before the reaction. It was demonstrated that the Pt/N-CMK-3-2 possessed good recyclability and has great potential for practical applications in the selective hydrogenation of nitroarenes in the future. 2.0 >99.9 99.7 3 likely due to the higher reactivity of nitro group than other functional groups. The stability and reusability of the Pt/N-CMK-3-2 catalyst were further investigated by the hydrogenation of nitrobenzene at 40 °C. As presented in Figure 5, the Pt/N-CMK-3-2 exhibited a nitrobenzene conversion of 80.1% in the first cycle. After completion of the reaction, the catalyst was separated by filtration, washed with ethanol three times, and dried overnight at 60 °C. Then the recovered catalyst was directly used for the next run without any reactivation or purification. The nitrobenzene conversion remained at 78.7% for the eight runs, and the aniline selectivity was kept at 100%. After each cycling reaction, the Pt contents in the product solution were determined by ICP-AES. It was found that the solution hardly contained the Pt element (<1 ppm). As can be seen in Table 1, the total Pt content in the spent Pt/N-CMK-3-2 was 1.86 wt%, which was very close to that of before the reaction. It was demonstrated that the Pt/N-CMK-3-2 possessed good recyclability and has great potential for practical applications in the selective hydrogenation of nitroarenes in the future. The stability and reusability of the Pt/N-CMK-3-2 catalyst were further investigated by the hydrogenation of nitrobenzene at 40 °C. 
As presented in Figure 5, the Pt/N-CMK-3-2 exhibited a nitrobenzene conversion of 80.1% in the first cycle. After completion of the reaction, the catalyst was separated by filtration, washed with ethanol three times, and dried overnight at 60 °C. Then the recovered catalyst was directly used for the next run without any reactivation or purification. The nitrobenzene conversion remained at 78.7% for the eight runs, and the aniline selectivity was kept at 100%. After each cycling reaction, the Pt contents in the product solution were determined by ICP-AES. It was found that the solution hardly contained the Pt element (<1 ppm). As can be seen in Table 1, the total Pt content in the spent Pt/N-CMK-3-2 was 1.86 wt%, which was very close to that of before the reaction. It was demonstrated that the Pt/N-CMK-3-2 possessed good recyclability and has great potential for practical applications in the selective hydrogenation of nitroarenes in the future. The stability and reusability of the Pt/N-CMK-3-2 catalyst were further investigated by the hydrogenation of nitrobenzene at 40 °C. As presented in Figure 5, the Pt/N-CMK-3-2 exhibited a nitrobenzene conversion of 80.1% in the first cycle. After completion of the reaction, the catalyst was separated by filtration, washed with ethanol three times, and dried overnight at 60 °C. Then the recovered catalyst was directly used for the next run without any reactivation or purification. The nitrobenzene conversion remained at 78.7% for the eight runs, and the aniline selectivity was kept at 100%. After each cycling reaction, the Pt contents in the product solution were determined by ICP-AES. It was found that the solution hardly contained the Pt element (<1 ppm). As can be seen in Table 1, the total Pt content in the spent Pt/N-CMK-3-2 was 1.86 wt%, which was very close to that of before the reaction. 
It was demonstrated that the Pt/N-CMK-3-2 possessed good recyclability and has great potential for practical applications in the selective hydrogenation of nitroarenes in the future. The stability and reusability of the Pt/N-CMK-3-2 catalyst were further investigated by the hydrogenation of nitrobenzene at 40 °C. As presented in Figure 5, the Pt/N-CMK-3-2 exhibited a nitrobenzene conversion of 80.1% in the first cycle. After completion of the reaction, the catalyst was separated by filtration, washed with ethanol three times, and dried overnight at 60 °C. Then the recovered catalyst was directly used for the next run without any reactivation or purification. The nitrobenzene conversion remained at 78.7% for the eight runs, and the aniline selectivity was kept at 100%. After each cycling reaction, the Pt contents in the product solution were determined by ICP-AES. It was found that the solution hardly contained the Pt element (<1 ppm). As can be seen in Table 1, the total Pt content in the spent Pt/N-CMK-3-2 was 1.86 wt%, which was very close to that of before the reaction. It was demonstrated that the Pt/N-CMK-3-2 possessed good recyclability and has great potential for practical applications in the selective hydrogenation of nitroarenes in the future.
Catalyst Preparation
Ordered mesoporous silica SBA-15 was obtained using P123 as a structure directing agent and TEOS as the silica source under acidic conditions according to the document [38]. CMK-3 was prepared using SBA-15 as the template and sucrose as the carbon source, and then was carbonized at 900 • C for 6 h under nitrogen, as described by Ryoo et al. [39]. The silica template was removed using 5 wt% hydrofluoric acid aqueous solutions at room temperature. The CMK-3 product was obtained after filtering, washing, and drying. The N-CMK-3 was prepared by an impregnation method with the following steps: 6 g of CMK-3 and a certain amount of 2-methylimidazole were dissolved in 60 mL deionized water at room temperature; the amount of 2-methylimidazole was 1, 2, and 3 g in the synthesis of N-CMK-3-1, N-CMK-3-2, and N-CMK-3-3, respectively; then, the mixture was stirred at 60 • C and the water was vaporized slowly; and finally, the obtained solid was calcined in N 2 atmosphere at 800 • C for 6 h with a heating rate of 2 • C min −1 .
Pt/N-CMK-3-x (x represents the amount of 2-methylimidazole with 1, 2, and 3 g) catalyst with 2 wt% of Pt was synthesized by ultrasound-assisted traditional wetness impregnation method. In brief, 6 g of N-CMK-3-x powder and 6.2 mL H 2 PtCl 6 ·6H 2 O aqueous solution (0.1 mol L −1 ) were mixed with deionized water (60 mL) to form a homogeneous suspension under ultrasound conditions. Then, the mixture was stirred at 40 • C until water was evaporated. Finally, the obtained solid was calcined in a flow of 30 vol.% H 2 in N 2 at 200 • C for 3 h with a ramp rate of 2 • C min −1 . For comparison, Pt/CMK-3 was prepared by the identical route.
Catalyst Characterization
Nitrogen adsorption analysis was carried out at liquid nitrogen temperature (−196 °C) using an ASAP2020 analyzer (Norcross, GA, USA). Prior to measurement, samples were degassed at 200 °C for 10 h. The specific surface areas of the samples were calculated by the Brunauer-Emmett-Teller (BET) method. The pore volume was calculated at a relative pressure P/P0 of 0.991. The pore size distribution was calculated using the Barrett-Joyner-Halenda (BJH) formula. The Pt loading of the catalysts was determined by ICP-AES (Waltham, MA, USA). The N content in the catalysts was measured with a PerkinElmer 2400 CHN elemental analyzer (Waltham, MA, USA). XRD patterns were recorded on a Rigaku D/MAX-2200 (Billerica, MA, USA) apparatus with a Cu Kα source (40 kV, 40 mA) at room temperature over the 2θ range of 10-90°. Transmission electron microscopy (TEM) and HAADF-STEM micrographs were obtained on a JEM-2010F (JEOL, Beijing, China) equipped with an energy-dispersive X-ray spectrometer operating at 200 kV. XPS spectra of the catalysts were recorded with an ESCALAB 250xi spectrometer (Thermo Fisher Scientific, Waltham, MA, USA) equipped with an Al Kα radiation source (hν = 1486.6 eV). Binding energies of all elements were calibrated to the C 1s peak at 284.6 eV.
Catalytic Reaction and Product Analyses
The chemoselective hydrogenation of nitroarenes was carried out in a 100 mL stainless-steel autoclave with a stirring controller. In a typical experiment, the autoclave was charged with 80 mmol of the nitroarene, 40 mg of catalyst, and 20 mL of ethanol as a green solvent. Before the reaction, the reactor was flushed three times with 0.5 MPa of hydrogen to remove air, then sealed tightly and pressurized to 2 MPa H2. The stirring speed was kept at 800 rpm, and the hydrogenation was carried out at 40 °C for a given time. After the reactor had cooled to room temperature, the remaining H2 was carefully released. One hundred microliters of the mixture were isolated by filtration for further analysis.
Each reaction was repeated at least three times; the carbon balance exceeded 98%, and the error in the nitroarene conversion was within 5%. The qualitative and quantitative analyses of the products were performed by gas chromatography-mass spectrometry (GC-MS, Shimadzu GCMS-QP 2010 Plus, Shanghai, China) and GC (Varian CP-3800, Palo Alto, CA, USA) with n-decane as the internal standard.
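Quantification against an internal standard is typically done from relative peak areas. The sketch below shows a generic internal-standard calculation; the relative response factor and the peak areas are purely illustrative assumptions (only the 80 mmol nitroarene charge comes from the procedure above).

```python
def moles_from_gc(area, area_istd, n_istd, rrf=1.0):
    """Moles of an analyte from its GC peak area relative to the internal standard.
    rrf is the relative response factor (assumed calibrated; 1.0 here for illustration)."""
    return (area / area_istd) * n_istd / rrf

n_istd = 2.0e-3     # mol of n-decane added (hypothetical)
n0_nitro = 80.0e-3  # mol of nitrobenzene charged (from the procedure above)

# Hypothetical peak areas for unreacted nitrobenzene and the internal standard:
n_nitro = moles_from_gc(area=8.0e5, area_istd=1.0e5, n_istd=n_istd)
conversion = 100.0 * (1.0 - n_nitro / n0_nitro)
print(f"conversion: {conversion:.1f} %")  # 80.0 % with these illustrative areas
```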
For the recycling study, the hydrogenation was performed under the same reaction conditions as above. After each run the catalyst was filtered, washed several times with ethanol, and dried at 60 °C; the recovered catalyst was then used directly for the next run without any reactivation or purification. Owing to catalyst losses during filtration, washing, and drying, the amount of catalyst changed from cycle to cycle; however, the Pt/N-CMK-3-2/nitrobenzene/ethanol ratio was always kept the same as in the first cycle.
Conclusions
In summary, highly dispersed Pt nanoparticles supported on nitrogen-modified CMK-3 mesoporous carbon were successfully synthesized by a facile two-step impregnation route. TEM results revealed that ultrafine Pt nanoparticles were uniformly dispersed on the N-doped mesoporous carbon. The prepared Pt/N-CMK-3-2 exhibited much higher catalytic activity for the hydrogenation of various nitroarenes than the Pt/CMK-3 prepared without the incorporation of N species. The N species in the carbon matrix facilitated high metal dispersion and prevented agglomeration of the Pt nanoparticles, owing to the interaction between Pt and N atoms, which resulted in the high catalytic activity. In addition, Pt/N-CMK-3-2 completely transformed various substituted nitroarenes to the corresponding aromatic amines with excellent selectivity, even in the case of halogenated nitrobenzenes. The Pt/N-CMK-3-2 catalyst was highly stable and could be reused for the selective hydrogenation of nitrobenzene without obvious loss of catalytic performance.
Aharonov-Bohm effect and plasma oscillations in superconducting tubes and rings
Low frequency plasma oscillations in superconducting tubes are considered. The emergence of two different dimensionality regimes of plasma oscillations in tubes, exhibiting a crossover from one-dimensional to two-dimensional behavior, depending on whether $k R\ll 1$ or $k R\gg 1$, where $k$ is the plasmon wave vector and $R$ is the radius of the tube, is discussed. The Aharonov-Bohm effect pertaining to plasma oscillations in superconducting tubes and rings, resulting in an oscillatory behavior of the plasmon frequency as a function of the magnetic flux, with a flux quantum period $hc/2e$ (analog of the Little-Parks effect), is studied. The amplitude of the oscillations is proportional to $(\xi/R)^2$, where $\xi$ is the superconducting coherence length.
I. INTRODUCTION
Collective excitations of the charge density, so-called plasma oscillations, in small low-dimensional superconducting structures have been a topic of great interest for a long time. There are two types of collective excitations in superconductors. One type is the so-called Carlson-Goldman mode [1]. In this mode the superconducting current oscillations are balanced by the current of the normal electrons, and the charge densities produced by the superconducting and normal electrons mutually compensate. This mode occurs only at temperatures very close to the critical temperature T_c [2,3,4]. The other type of collective excitations are plasma oscillations similar to those in normal metals. Unlike plasma oscillations in normal metals, those in superconductors cannot exist in bulk samples because the typical frequencies of bulk plasma oscillations (~10^16 Hz) are far above the superconducting gap ∆. However, in small systems like superconducting wires, thin films, and tubes, the dispersion relation of plasmons has a sound-like (acoustic) character. In what follows we consider only acoustic-type plasma excitations. We are interested in plasma oscillations that do not break the Cooper pairs, i.e. oscillations with frequencies ω < 2∆/h.
The existence of such (acoustic) plasmons in superconductors was predicted rather early by Kulik [5], who considered plasma excitations for two geometries: a thin infinite solid wire (essentially one-dimensional, 1D) and a thin infinite film (essentially two-dimensional, 2D). It was found that the dispersion relation for the wire is a linear function of the wave vector k along the wire, whereas for the infinite thin film the frequency of the plasmons is proportional to the square root of the wave vector.
Formally, plasmons in low dimensional superconductors with linear and square root dispersion are similar to those in normal conductors [6,7,8], but unlike the latter they have lower frequencies and decay rates. Such "superconducting plasmons" were theoretically analyzed later [9] and they were subsequently observed experimentally in superconducting films [10,11] and wires [12].
The high sensitivity of non-simply connected superconducting systems (cylinders and rings) to weak magnetic fields is well known, and it is manifested by effects such as flux, or fluxoid, quantization [13], and the Little-Parks effect (oscillations of the critical temperature as a function of the magnetic flux with a period determined by the superconducting flux quantum hc/2e [14,15]). Both of these effects may be considered as manifestations of the Aharonov-Bohm (AB) effect [16] (see e.g. a review in [17]). Consequently, it is of interest to investigate possible manifestations of the AB effect in plasma oscillations in superconductors [18].
In this paper we study plasma oscillations in a superconducting tube with an arbitrary radius R and identify two dimensionality regimes (1D and 2D) of the plasmon dispersion relation and the crossover between them, depending on the magnitude of kR (see Ref. [19]). We demonstrate that the AB effect, pertaining to plasmons in superconducting tubes and rings, is expressed as oscillations of the plasmon frequencies as a function of the magnetic flux with a universal period of hc/2e (the flux quantum of a Cooper pair).
The paper is organized as follows. In Sec. II we discuss the formalism used to describe plasma oscillations in superconducting tubes and calculate their dispersion relations. In Sec. III we use these results to study the behavior of plasmons in tubes and rings placed in a magnetic field. We summarize our results in Sec. IV.
II. THE DISPERSION RELATION FOR PLASMA OSCILLATIONS IN A HOLLOW SUPERCONDUCTING CYLINDER
In this section we study the propagation of plasma waves in a hollow superconducting cylinder (tube). Let us take the symmetry axis of the cylinder as the z axis of a cylindrical coordinate system, and let r = (r, θ) be the radius-vector perpendicular to the z axis. For simplicity we assume that the width of the wall of the cylinder, d, is much smaller than its radius R (Fig. 1a). Since the motion of the charge carriers is restricted to the material of the cylinder, the charge and current densities can be written in the form ρ_3D = ρ_2 δ(r − R) and j_3D = j_2 δ(r − R), where δ(r − R) is the Dirac delta function and j_2 and ρ_2 are the two-dimensional (areal) current and charge densities, respectively. For brevity we omit the subscript 2 below. Both the current flows and the uncompensated charges produce electric and magnetic fields around the cylinder. The Fourier components of the field potentials φ and A, and those of the surface charge ρ and the current density j (j = (j_z, j_θ), where j_z is the component of the current along the z axis and j_θ is the circular component), corresponding to the frequency ω, the longitudinal wave vector k, and the circular mode number m, satisfy the Maxwell equations, in which κ = √(k² − (ω/c)²) is the modified wave vector that takes retardation effects into account; here and elsewhere in the paper a superscript "∼" denotes a Fourier-transformed quantity. From the requirement that the potentials A and φ be continuous and finite everywhere, one readily derives expressions for the fields on the surface. The electric fields acting on the superconducting and normal electrons inside the cylinder involve I_m(x) and K_m(x), the modified Bessel functions of the first and second kind, respectively. Our discussion so far has been general. To describe the superconducting regime we adopt a simple two-fluid phenomenological model [20] with a nonlinear superconducting term.
We assume that the system is almost in a stationary state. For the description of the superconducting component we use the time-independent Ginzburg-Landau equation. It was shown rather early by Bardeen [21] that for clean metals (close to T_c) the two-fluid model can be derived directly from the BCS theory. Subsequently it was shown that the two-fluid model can also be successfully used for dirty metals [22]. The two-fluid model assumes that the electrons are locally in thermodynamic equilibrium. In the case of low-energy, small-amplitude, long-wavelength collective excitations, like acoustic plasma waves, any gradients in the velocities or densities are sufficiently small, and the equation of motion of the superconducting electrons takes the same form as for classically behaving particles.
The electrons of the superconducting component of our two-fluid model move without dissipation, whereas the electrons of the normal fluid dissipate energy. The parameter that characterizes dissipation in the normal fluid is τ, the average time between collisions of the normal electrons in the metal. Unless this collision time is very small there is no significant difference between the normal and superconducting electrons, since both contribute to the plasma oscillations. The situation is very different, however, for small collision times, ωτ ≪ 1. In this case the normal carriers hardly participate in the plasma oscillations (see the discussion in Sec. IV).
Let us assume that the thickness d of the walls of the hollow cylinder (tube) is smaller than both the superconducting coherence length ξ and the London penetration depth δ (d ≪ δ, ξ). These assumptions allow us to treat the amplitude of the order parameter ∆ in the cylinder as a constant, and the current density (in the dirty superconductor) can be written, from consideration of the time-independent Ginzburg-Landau equation, in the form of Eq. (7) (see e.g. [23]), where N_s is the concentration of the superconducting electrons, v = (v_z, v_θ) is their velocity, a two-component vector (v_z is the velocity along the tube and v_θ the velocity in the plane perpendicular to the axis of the tube), v_c is the critical velocity, σ_n is the normal conductivity, and E is the external electric field. N_s^eff is the "effective concentration" of the superconducting electrons in the film, which depends on the velocity v. The first term in the above equation for j describes the supercurrent, whereas the second term describes the current due to the normal electrons and accounts for all the dissipation processes in our system.
The relationship between the current and the velocity of the superconducting electrons is nonlinear (see Eq. (7)). When the amplitude of the velocity oscillations in the plasma wave is much smaller than the critical velocity v_c, we can linearize this relationship about the homogeneous solution u, where u = (u_z, u_θ) with u_z = const and u_θ = const. The uniform homogeneous background current through the tube has two components: normal and superconducting. We assume that the superconductor is very dirty and therefore the normal conductance of the tube is very small. Because of the very small normal conductance, the normal component of the constant background current is much smaller than the total background current, and we neglect the voltage drop due to the normal current through the tube. In general, however, if one takes into account the voltage due to the normal current, then for currents very close to the critical one, superconducting states with a uniform time-independent order parameter may become unstable toward small perturbations, and the system may become normal or develop a time-dependent superconducting state (see e.g. [24], [25]). We assume here that we are sufficiently far from the critical current that such transitions do not occur in the system under consideration. Substituting the continuity equation and the equation of motion, written in the Fourier representation, into (5) and (6), we derive the relations between the currents and the velocities of the electrons in the superconductor. After Fourier transformation, the relation between the perturbations of the velocity δv = (δv_z, δv_θ) and of the current density δj = (δj_z, δj_θ) can also be written. Combining equations (10) and (14) we obtain a linear algebraic system of equations, which in matrix notation can be written in terms of the matrices A and B and the identity matrix I.
This system of equations has nontrivial solutions if the determinant of the matrix C = AB − I is zero. The coefficients of the matrices A and B are functions of k, m, and ω, and the condition D(k, m, ω) = det(AB − I) = 0 implicitly gives the desired dispersion relation for the plasma excitations in the cylinder. Since the resulting expression is rather cumbersome we do not reproduce it here, but represent the result graphically in Fig. 2 (for σ_n = 0).
Assuming that the normal conductivity of the material of the superconductor is zero, the general relation for the frequency of the plasmons can be written approximately in the form of Eq. (20), where ω_s = ω_0 √(N_s/N), N is the total concentration of electrons (normal and superconducting), and ω_0 = √(4πe²N/m_e) is the frequency of plasma oscillations in a bulk normal metal. In deriving Eq. (20) we neglected terms of order ω_s²Rd/c² ≪ 1, which are related to relativistic retardation effects and are small for tubes of practical sizes.
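As a rough numerical check of the ~10^16 Hz scale for bulk plasma oscillations quoted in the Introduction, ω_0 can be evaluated for a typical metallic electron density; the density value below is an illustrative assumption, not a figure from the paper.

```python
import math

# Bulk plasma frequency omega_0 = sqrt(4*pi*N*e^2/m_e) (Gaussian units),
# evaluated here in the equivalent SI form sqrt(N e^2 / (eps0 m_e))
# for a typical metallic electron density N ~ 1e29 m^-3 (illustrative).
e = 1.602176634e-19      # C, elementary charge
m_e = 9.1093837015e-31   # kg, electron mass
eps0 = 8.8541878128e-12  # F/m, vacuum permittivity
N = 1.0e29               # m^-3, assumed electron density

omega_0 = math.sqrt(N * e**2 / (eps0 * m_e))
print(f"omega_0 ~ {omega_0:.2e} rad/s")  # ~1.8e16, i.e. the ~1e16 Hz scale
```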
There are two important cases for which the dispersion relation for the plasma oscillations can be written explicitly in a simple form: (i) the limiting case of plasma oscillations in an infinite thin superconducting wire (the 1D case), and (ii) plasma oscillations in a thin superconducting film (the 2D case).
In the first case (i), the radius R should be reduced to the limit in which the cylinder becomes a thin thread without a hole inside. In this limit (R = d) one gets Eq. (21) for the circular mode with m = 0 and u_θ = 0, where we used the asymptotic expressions for the modified Bessel functions at small arguments, x ≪ 1 (I_0(x) → 1 and K_0(x) → −ln(γx/2)), and the constant γ = exp(C) ≈ 1.781 is the exponential of Euler's constant. Such a linear dispersion relation is typical for one-dimensional conductors.
To obtain the dispersion relation for a thin superconducting film (case (ii)) one should take the large-radius limit of the cylinder (kR ≫ 1). In this case we obtain the square-root dispersion relation of Eq. (22) (u_θ = 0). Note that Eqs. (21) and (22) reproduce the expressions derived earlier by Kulik for superconducting thin wires and films (see Eqs. (14) and (17) in Ref. [5], where the thin wire is referred to as a "filament"). Similarly, using the asymptotic expressions for the modified Bessel functions of high order (i.e. m ≫ 1), we obtain for kR → 0 an expression (u_θ = 0) showing that for large m the plasmon frequency ω is proportional to √m, where m is the circular mode number.
An interesting property that emerges from Eq. (20) is that the frequency of the plasma oscillations can be decreased by passing an electric current through the tube. By increasing the current (i.e. increasing the velocities u_z and u_θ in Eq. (20)) and bringing it close to the critical current in the film, j_max, one can lower the frequency of the plasmons to values below the energy gap. This lowering can be achieved for a range of wave vectors large enough to allow observation of a dimensionality crossover from 1D (kR ≪ 1) to 2D (kR ≫ 1) behavior [19]; see Fig. 2, where we display the plasmon frequency as a function of the dimensionless radius kR for several values of the current I through the tube (expressed in terms of η = I/I_c, with I_1 < I_2 < I_3). Note that higher currents through the tube correspond to lower plasmon frequencies. This decrease originates from the fact that, according to Eq. (7), increasing the current along the tube increases the velocities (i.e. u_z) of the superconducting electrons, which in turn effectively decreases the concentration of the superconducting electrons that participate in the plasma oscillations and determine the plasma frequency.
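The dimensionality crossover discussed above can be checked numerically. Up to overall prefactors (which are omitted here, along with the current dependence), the tube dispersion behaves as ω ∝ k√(Rd I₀(kR)K₀(kR)); the sketch below, with arbitrary parameter values, verifies the near-linear 1D limit at kR ≪ 1 and the √k 2D limit at kR ≫ 1.

```python
import numpy as np
from scipy.special import i0e, k0e  # exponentially scaled I0, K0 (stable at large x)

def omega_shape(k, R=1.0, d=0.01):
    """Plasmon frequency up to an overall constant: k * sqrt(R d I0(kR) K0(kR)).

    i0e(x) * k0e(x) equals I0(x) * K0(x) exactly, but avoids the overflow of
    I0 at large argument."""
    x = k * R
    return k * np.sqrt(R * d * i0e(x) * k0e(x))

# 1D regime (kR << 1): nearly linear in k, up to a slowly varying log factor.
r1 = omega_shape(2e-3) / omega_shape(1e-3)   # ~1.9, close to 2
# 2D regime (kR >> 1): I0(x)K0(x) -> 1/(2x), so omega ~ sqrt(k).
r2 = omega_shape(2e3) / omega_shape(1e3)     # ~sqrt(2) ~ 1.414
print(r1, r2)
```

Doubling k nearly doubles ω in the 1D regime but multiplies it by only √2 in the 2D regime, reproducing the limits of Eqs. (21) and (22).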
III. SUPERCONDUCTING TUBES AND RINGS IN AN EXTERNAL MAGNETIC FIELD
In this section we analyze the influence of a weak magnetic field on the propagation of plasma excitations in superconducting microstructures. We consider plasmons in two geometries: tubes and rings.
A. Tubes
Let us consider the situation in which the superconducting microcylinder is placed in a longitudinal magnetic field H (Fig. 1a). We assume that the magnetic field is parallel to the symmetry axis of the cylinder and weak enough that the system remains superconducting. This geometry allows observation of several interesting effects, such as quantization of the magnetic flux through the hole in the cylinder and a periodic dependence of the critical temperature on the flux. One may also inquire about the influence of the magnetic field on the dispersion relation of the plasma oscillations.
Since the cylinder wall is made from a very thin film (we take the width of the wall to be smaller than the London penetration length λ, the coherence length ξ, and the magnetic length l_H = √(Φ_0/H)), the amplitude of the superconducting order parameter |Ψ| is constant across the wall of the cylinder. The magnetic field penetrates into the wall and the flux is not quantized. The quantity that is quantized in this case is the total change of the phase of the order parameter (the so-called 'fluxoid') [23]. Due to quantization of the fluxoid, the average circular velocity u_θ of the electrons in the thin-walled cylinder is a periodic function of the magnetic flux, where Φ = πR²H is the magnetic flux through the cylinder and Φ_0 = hc/2e is the superconducting flux quantum. The notation min_n[...] in the formula denotes that for a given value of the flux Φ one should take the value of n that minimizes the velocity u_θ. One should remember, however, that in order to observe this quantization phenomenon the radius of the cylinder should not be too small, so that even weak fields, which do not destroy superconductivity, can create fluxes Φ of the order of the flux quantum. Note that since the wall thickness d is small, Little-Parks oscillations (πR²H > Φ_0) might be observed in samples with πR²H_c < Φ_0, where H_c is the critical magnetic field of the bulk [26]. Inserting the above expression for the velocity u_θ into Eq. (20) and considering for simplicity only the case m = 0, we obtain Eq. (25). From Eq. (25) we can conclude that the frequencies ω of the plasmons, as well as their velocities, exhibit Aharonov-Bohm (AB) behavior, i.e. a periodic dependence on the magnetic flux with the fundamental period Φ_0 (Fig. 3).
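The sawtooth flux dependence of u_θ can be sketched numerically: the minimization over n amounts to taking the distance from Φ/Φ_0 to the nearest integer. In the sketch below the prefactor ħ/(m_e R) and the radius value are illustrative assumptions (the paper's exact prefactor depends on its conventions).

```python
import numpy as np

HBAR = 1.054571817e-34   # J s
M_E = 9.1093837015e-31   # kg

def u_theta(flux_ratio, R=1.0e-6):
    """Circular velocity ~ (hbar/(m_e R)) * min over integers n of |n - Phi/Phi0|.

    The min over n is the distance from flux_ratio = Phi/Phi0 to the nearest
    integer, which makes u_theta a sawtooth, periodic in the flux with period
    Phi0 and vanishing at integer flux quanta."""
    frac = np.abs(flux_ratio - np.round(flux_ratio))
    return HBAR / (M_E * R) * frac

# Vanishes at Phi = 0 and Phi = Phi0, and is maximal at half-integer flux:
print(u_theta(0.0), u_theta(0.5), u_theta(1.0))
```

This periodic, sawtooth-shaped u_θ is what, through Eq. (20), produces the AB oscillations of the plasmon frequency shown in Fig. 3.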
The amplitude of the oscillations of the frequency for a cylinder with a large radius (R > ξ) is approximately proportional to (ξ/R)², where ξ is the coherence length of the superconductor. To illustrate the above analysis, we show in Fig. 3 the behavior of the plasmon frequencies ω in a superconducting tube as a function of the dimensionless flux Φ/Φ_0. In Fig. 3(a) we display ω for various values of the current, characterized by the parameter η = I/I_c, where I_c is the critical current, at a fixed temperature; in Fig. 3(b) we show ω for several temperatures, keeping the current at the constant value I = 0.1I_c. The numerical results presented in Fig. 3 were obtained from the transcendental equation (25) (where the arguments of the modified Bessel functions depend on κ = √(k_z² − ω²/c²)). Note (see Fig. 3(a)) that the amplitudes of the AB oscillations of the frequencies are larger for higher currents through the tube, while at the same time the absolute values of the frequencies decrease for larger currents. The latter decrease of the plasmon frequency at higher currents was discussed in the context of Fig. 2 at the end of Sec. II. The increase of the oscillation amplitude of the plasmon frequencies at larger currents is due to the nonlinear dependence of the plasmon frequency on the effective concentration of superconducting electrons, N_s^eff, in Eq. (25). As a result of this nonlinearity, at a lower concentration of superconducting electrons (caused by the flow of current, or by higher temperature, either of which effectively lowers the concentration of electrons that may participate in collective plasma oscillations), a given change in the concentration due to magnetically induced circular currents yields a relatively larger variation of the plasma frequency. The decrease of the plasmon frequency with increasing temperature is illustrated in Fig. 3(b).
B. Rings
We consider here a superconducting ring made from a wire of diameter d, with the radius of the ring, R, much larger than d (i.e. R ≫ d, see Fig. 1b). For small currents in the ring we can neglect interactions between different parts of the wire and treat the ring as a straight superconducting wire with periodic boundary conditions. Let us fix a point on the ring as the origin of a local coordinate system, and let x be the coordinate along the wire and r the coordinate perpendicular to it. The relation between the Fourier components of small perturbations of the charge density and the electrostatic potential can be written as in Eq. (26), where ν = 1, 2, 3, ... is now the discrete dimensionless wave number of the plasma oscillations along the ring (related to the quantized wave vector k along the ring by ν = kR), and ν_c ∼ R/d is a cut-off parameter arising from the finite diameter of the wire from which the ring is made (ν ≪ ν_c). Using the relation between the scalar potential and the electric field, E(ν, ω) = iνφ(ν, ω)/R, the equation of motion ωv(ν, ω) = −ieE(ν, ω)/m_e, and the continuity equation νI(ν, ω) = ωρ_1D(ν, ω)R, written in the Fourier representation, we find the expression connecting the perturbation of the carrier velocity with the perturbation of the current, δv(ν, ω) = δI(ν, ω) [eν²/(ω² m_e R²)] ln(ν_c/ν). At this point we again use a linearized form of Eq. (7), where S = πd²/4 is the cross section of the wire making up the ring. Following the same arguments as in our discussion of the cylindrical system in a magnetic field, we can again write the expression for the uniform component of the velocity, and combining this result with Eqs. (26) and (27) we arrive at the dispersion relation for plasma oscillations in the ring (Fig. 4). For small wave numbers ν ≪ ν_c the frequency of the plasma oscillations can be approximated by Eq. (30), where c(ν) is the velocity of the plasmons, which shows AB behavior, i.e. it is a periodic function of the magnetic flux. The oscillation amplitude of the plasmon frequency in large rings (R > ξ) is proportional to (ξ/R)², similar to the case of plasma oscillations in a tube (see Sec. III.A). Characteristic properties of the plasma oscillation frequencies in superconducting rings are illustrated in Figs. 4(a) and 4(b). In Fig. 4(a) we display the plasmon frequencies as a function of the mode number ν for different values of the dimensionless flux Φ/Φ_0; the mode numbers are discrete because of the periodic boundary conditions in the ring. In accordance with Eq. (30), higher modes (larger ν) correspond to higher frequencies [27]. An applied magnetic flux through the ring induces a circular persistent current, which reduces the effective concentration of superconducting electrons participating in the plasma oscillations, with a consequent lowering of the plasmon frequency. Since the induced current is a periodic function of the magnetic flux, the frequency of the plasma oscillations of each mode is also a periodic function of the magnetic flux with period hc/2e (see Fig. 4(b)).
IV. SUMMARY.
In this paper we have studied collective charge-density oscillations (plasmons) in superconducting microtubes and microrings. Using a simple two-fluid model for the superconductor, we derived the dispersion relation for plasmons in a cylindrical tube of radius R, i.e. the plasmon frequency ω as a function of kR. We have demonstrated that, depending on the magnitude of kR, a crossover emerges in which the plasmon dispersion relation changes from a linear dependence on kR (Eq. (21), the 1D limit) to a square-root dependence (Eq. (22), the 2D limit). The behavior in these limiting cases is in agreement with previous theoretical predictions [5,9] and experimental observations [10,11,12].
We have also considered the effects of weak magnetic fields on charge density excitations in superconducting microtubes and microrings, and we have shown that the dispersion relations for the plasmons are oscillatory functions of the magnetic flux with a universal period of hc/2e, and an amplitude of the order of (ξ/R) 2 . Such behavior of the plasmons in superconducting microstructures is a manifestation of the Aharonov-Bohm effect.
In conclusion, we briefly discuss dissipation effects in our systems. Our model does not take into account dissipation due to variations of the order parameter. We considered very long wavelength plasma oscillations characterized by small amplitudes |δ∆| ≪ |∆| and frequencies below the superconducting gap frequency (2∆/h). Under these conditions, dissipation due to variations of the order parameter, ∼ |∂∆/∂t|², is negligibly small.
The expressions given in (20) and (29) for the dispersion relations of plasmons in superconducting tubes and rings are written for the case where the normal conductivity of the superconductor can be neglected. In general, a nonzero normal conductivity σ_n adds to the dispersion relation an imaginary term that expresses energy dissipation and decay of the plasma oscillations. For example, for superconducting tubes one can write the frequency as its σ_n = 0 value plus an imaginary part γ, where for the mode with m = 0 the ratio of the imaginary part of the frequency to the real part is γ/ω ∼ (σ_n/ω_s) k √(Rd I_0(kR)K_0(kR)). For the limiting case of a thin wire (1D), kR ≪ 1, Eq. (33) gives γ/ω ∼ (σ_n/ω_s) k √(Rd ln(1/(kR))). In the case of a thin film (2D), kR ≫ 1, damping effects are described by γ/ω ∼ (σ_n/ω_s)√(kd), in agreement with the results of Ref. [5]. The plasmon damping given by Eq. (33) is small for dirty superconductors (σ_n → 0). Higher normal conductivity results in stronger dissipation and decay of the plasma oscillations. This counterintuitive result can be explained qualitatively as follows. When the collision time is small, the coherent motion of the plasma waves is created by the superconducting electrons, whereas the normal electrons are only partly involved in this motion; the faster they reach equilibrium through collisions, the better they follow the collective motion of the other electrons in the plasma wave, and as a result the system evolves more adiabatically, with less dissipation.

Figure 2 caption: Here ∆ is the energy gap for electron-hole-like excitations inside the superconductor, for which we use the near-T_c estimate ∆ ∼ k_B √(T_c(T_c − T)) [23]. For long-wavelength plasmons (kR < 1, see inset) the dispersion relation is approximately linear, whereas for plasmons with shorter wavelengths, kR > 1, the dispersion relation is similar to that of plasmons in a thin superconducting film (ω ∼ √k). The three curves in the figure correspond to three different currents through the tube, I_1 < I_2 < I_3, with the current I expressed in terms of the critical current, η = I/I_c. For each value of the current, we found from Eq. (7) the corresponding value of the uniform background velocity u_θ and then substituted it into Eq. (20). Since higher velocities of the superconducting electrons correspond to lower effective concentrations N_s^eff, the frequency of the plasma oscillations is lower for larger currents through the tube.
Classifying Approaches to and Philosophies of Elementary-School Technology Education
In 1974, Hoots classified historical philosophies of elementary-school industrial arts (ESIA) into four categories: subject matter, in which children would study technology and its social impacts (e.g., Gilbert, 1966); arts and crafts, consisting of what Hoots (1974) called “concrete manipulative activities” (p. 225); method of teaching, in which industrial arts delivered, reinforced, and enriched the traditional elementary curriculum (e.g., Henak, 1973), and tools, materials, and processes, focusing, as the name implies, on technological materials and processes (e.g., Miller & Boyd, 1970). Although practioners probably employed a combination of some or all of these, the primary debate among theoreticians at the time was whether elementary-school industrial arts should be promoted as a method of teaching or a subject matter. The liveliness of this debate notwithstanding, by the time Hoots’ paper was published, the ESIA movement was in a decline it was not to recover from for nearly twenty years.
As is clear from this lengthy definition, there was a fair amount of debate as to the purposes of industrial arts for elementary-school children. In fact, ESIA is not really defined here-this "definition" could be more aptly considered an "enumeration of contributions" industrial arts was thought to make to the education of children. The gravity of the content-method debate was evident in this position statement, which, for example, noted that industrial arts should (a) "deal with" technological change, although how this was to be accomplished was not immediately specified; (b) provide "opportunities for developing concepts through concrete experiences," although whether these were concepts related to the school curriculum, to industry and technology, or to both, was not explained; and (c) include "knowledge about technology" without specifying a method. In discussing the elementary level, LaPorte (1993) noted that to some extent, "the argument of whether industrial arts should be taught as content or method…continues today" (p. 9).
Purpose of the Study
The profession should be aware of the variety of philosophies and approaches to ESTE accepted by practitioners and theorists. This could allow, and might perhaps instigate, meaningful debate, enabling professionals with diverse conceptions of the field to work together toward common goals. The alternative may be for ESTE to experience the impasse faced by advocates of ESIA two decades ago.
In this study, philosophy of elementary-school technology education was regarded as an individual's belief as to the ideal role of technology education in the elementary school. The term approach to elementary-school technology education referred to an individual's opinion as to the most appropriate manner of implementing ESTE. The distinction is subtle but necessary; an educator's philosophy should influence his or her approach. In this sense, approach may in some cases be the practical manifestation of a philosophy.
The purpose of this study was to identify and classify prevailing philosophies of and approaches to elementary-school technology education. Specifically, the study sought to address two research questions: 1. What classification of philosophies of ESTE is ascertainable from the recent literature? 2. What classification of approaches to ESTE can be identified from existing data on the opinions of leaders in the field?
Classification of Philosophies of ESTE
To perform the classification of philosophies of ESTE required the review and analysis of recent literature. Three literature selection criteria were established. Literature considered pertinent was (1) published since 1985, when the American Industrial Arts Association changed its name to the International Technology Education Association (ITEA); (2) widely disseminated; and (3) that in which authors stated or implied a philosophical position on ESTE which specified (1) a rationale for ESTE or (2) a position on the nature of the ideal ends of ESTE, or both (see Kneller, 1964, pp. 30-31).
Items of literature were initially classified with others that advocated or reflected similar rationales for ESTE. Next, items were classified with others that advocated or supported similar ideal outcomes for ESTE. These categorization schemes yielded similar results-in other words, items of literature supporting similar rationales for ESTE were very likely to support similar outcomes for ESTE.
In the final classification, characteristics were identified which might further differentiate between the categories. These were characteristics found in many, but not all, items under analysis. They were (1) nature of contribution of ESTE to the elementary school; the (2) role and (3) identity of subject matter; and (4) the nature of teaching methods advocated. Finally, examples were selected from the literature which seemed to exemplify the philosophies.
Classification of Approaches to ESTE
The classification of approaches to ESTE was accomplished by an ex post facto cluster analysis of data collected to identify the opinions of leaders in the technology education field regarding approaches to technology education. In a prior study (Foster & Wright, 1996), 131 leaders were asked to identify appropriate approaches to technology education at the elementary, middle-school, and high-school levels. Thus the data used in the present study was collected for the purposes of investigating approaches to technology education at all grade levels-not just ESTE. Nonetheless, it was clear that the elementary data could be extracted and that this existing information would be useful in addressing Research Question 2.
Participants in the original study represented leaders among teachers, supervisors, and teacher educators. Of the 131 respondents, 123 provided opinions relative to ESTE. The data from these subjects was analyzed as part of the study at hand.
Participants were presented with a list of twelve approaches to technology education (see Table 1) and asked to select and rank the three they regarded as most appropriate at the elementary level. Two respondents employed a "fill-in" option also presented on the instrument.
Data analysis. Each participant's first choice of approach to ESTE was assigned a score of "3;" second choices were scored "2;" third choices "1." All items not selected were scored "0;" thus each of the thirteen items (twelve pre-identified approaches and one write-in) was assigned a score by each participant. Because this data was not continuous, the appropriate quantitative classification procedure was cluster analysis, "a multivariate statistical procedure that starts with a data set containing information about a sample of entities and attempts to reorganize these entities into relatively homogeneous groups" (Aldenderfer & Blashfield, 1984, p. 7).
The analysis was performed with SPSS version 6.1.1 for the PowerPC. Given the exploratory nature of this cluster analysis, several variations of each available clustering method (Ward's, between- and within-groups average linkage, furthest neighbor, nearest neighbor, centroid, and median) were run.
The final solution set was obtained via Ward's method. This set, which consisted of solutions ranging from two to five clusters, was the most interpretable. Ward's method produces clusters which are easily distinguished from other clusters and which tend to be tightly packed (Aldenderfer & Blashfield, 1984). Squared Euclidean distance, which is sensitive to both shape and magnitude, was chosen as a measure. When the data was subjected to the same cluster analysis with Euclidean distance substituted for squared Euclidean distance, the same solution set was obtained. Standardized scores (z-scores) were used because the wide variation in item scores was causing high-scoring items not to cluster when raw scores were used.
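The scoring and clustering procedure described above can be sketched in code. The following is an illustrative re-creation using Python and SciPy rather than the original SPSS 6.1.1 analysis; the item subset and the participant rankings are invented for demonstration, and SciPy's Ward linkage (which works from Euclidean distances, squaring them internally) stands in for the SPSS implementation.

```python
# Illustrative re-creation of the study's scoring and cluster analysis.
# NOTE: items and rankings below are fabricated examples, not study data.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

items = ["modular", "socio-cultural", "student-centered", "tech prep"]  # subset of the 13

# Each participant ranks three approaches; 1st choice scores 3, 2nd scores 2,
# 3rd scores 1, and unselected items score 0.
rankings = [
    ("student-centered", "socio-cultural", "modular"),
    ("socio-cultural", "student-centered", "tech prep"),
    ("modular", "tech prep", "student-centered"),
]

scores = np.zeros((len(items), len(rankings)))  # rows = items, cols = participants
for p, picks in enumerate(rankings):
    for rank, item in enumerate(picks):         # rank 0 -> 3 points, 1 -> 2, 2 -> 1
        scores[items.index(item), p] = 3 - rank

# Standardize scores (z-scores), as in the study, then cluster the items
# (the "entities" being reorganized into homogeneous groups) via Ward's method.
z = zscore(scores, axis=0)
tree = linkage(z, method="ward")
labels = fcluster(tree, t=2, criterion="maxclust")  # cut the tree into two clusters
print(dict(zip(items, labels)))
```

In the actual study the matrix would be 13 items by 123 respondents, and solutions of two to five clusters would be compared for interpretability rather than fixing the cut at two.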
Research Question 1: What classification of philosophies of ESTE is ascertainable from the recent literature?
Three philosophies of ESTE were evident from the literature. They were labeled content, process, and method.
Technology education as content. Proponents of the content philosophy see ESTE primarily as providing students with knowledge about technology. To these writers, technology (or alternately, technology education) is a discipline. Not all examples of literature supporting the view of ESTE as content identified the same content structure. One frequently cited structure was DeVore's (1980) three-dimensional matrix representing technological endeavors (communication, transportation and production), technological resources (tools, machines, etc.) and cultural contexts (prehistoric, craft era, mechanization, etc.).
The DeVorian view is clear in Teaching Technology to Children (Minton and Minton, 1987), a book intended for pre-service elementary schoolteachers. ESTE is viewed as having its own content; indeed, technology is defined as "technical knowledge" (p. 4; italics added), divided into DeVore's (1980) content areas of production, communication, and transportation. Peterson (1986) also applied the DeVorian content formula to the elementary program. Kieft (1988) summarized the view that although ESTE was to be integrated with other subjects, it involved certain content of its own-content organized per the popular Jackson's Mill curriculum (Snyder & Hales, n.d.). He described ESTE as taking the form "of units of study with activities to introduce, reinforce, or clarify some of the technology concepts…The content usually focuses on an aspect of transportation, communication, manufacturing, construction, or energy" (p. 29). Thode's (e.g., 1989, 1996) works also represent the content philosophy, although they do not rely upon traditional content structures. "The curriculum must cover the fashionable current technologies as well as basic technologies and the emerging technologies" (Daiber, Litherland, & Thode, 1991, p. 193). To Thode, "technology education is a defined discipline" (1996, p. 7).
Technology education as process. A second philosophy identified in this study regards ESTE as a process or skill which should be taught to children, and which has attendant content related to replicating the process. But the exact identity of the process seems to be in question. Two related but distinct variations of the process philosophy are evident in the literature.
In one variation the process being taught is "design." In this conception students design solutions to problems. ESTE is considered "children's engineering" (Dunn & Larson, 1990, p. 37). As a form of engineering, it eventually becomes concerned with the physical sciences and their laws. Todd and Hutchinson (1991) expressed the ideas involved in the conception of ESTE as "design and technology." To them, design and technology was not a separate subject, or even an integrated one-but a "new paradigm" for education itself.
A second variation of the process philosophy regards problem solving as the process of technology education. Here, problem solving is a broad skill which should be taught to all children. ESTE is viewed as supporting the larger elementary program-not the larger technology program (Forman & Etchison, 1991; Sittig, 1992).
Advocates of the problem solving variation regard the process of ESTE as more important than the content of the problem being solved; advocates of the design variation view the content of the whole school and of the design processes as primary. This distinction is illustrated in Figure 1 below.
Technology education as method. In the final philosophy, method, ESTE "begins with three things in mind. The first and certainly the most important is the child, the second is the elementary school curriculum, and the third is an appropriate technology activity" (Kirkwood, 1992, p. 30). Often, as LaPorte (1993) suggested, the content has an industrial or technological nature. Nevertheless, the content is drawn from the existing elementary curriculum-math, social studies, language, and science-not a technology education curriculum.
As Braukmann (1993) wrote, "enough goals already exist in the areas of reading, communication, math, science, and the social studies to fill a curriculum" (p. 23). Even though it might be important to treat the subject of technology separately, he wrote, "little time is left for it" (p. 23). ESTE, in Braukmann's view, does not exist for its own sake; rather, it should support "existing goals in science, math, and communication skills" (p. 23). Supporters of this philosophy are typically as unapologetic about slighting technology content as champions of technology-as-subject-matter are about having to lecture occasionally to deliver that content.
Elements of the philosophies
Figure 1 is a tabular representation of the final classification of philosophies of ESTE evident from the literature. Six characteristics of each philosophy are specified to facilitate comparisons among the philosophies. In addition, an example from the literature is identified for each philosophy. Brief descriptions of the characteristics follow.
Nature of contribution to elementary-school content. Unlike the other identified philosophies, the method philosophy does not regard its contribution to the content of the elementary curriculum as necessarily unique. In this view, ESTE is a method for delivering the traditional curriculum. It does not offer unique knowledge. The remaining philosophies regard ESTE as an ideally integrated, yet essentially distinguishable, subject in the curriculum.
Rationale. As a result, the rationales advanced by advocates of the content and process philosophies point out ESTE's unique aspects. Both rationales imply that the elementary curriculum would be essentially incomplete without technology education.
Nature of ideal outcome. Both variations of the process philosophy view specific skills as the ideal outcome of an elementary program of technology education; in the content philosophy, knowledge is the primary outcome. In the method philosophy, ESTE is viewed as only one means of helping students acquire the skills and content in question.
Role and identity of subject matter. The literature indicates that design technology has associated and necessary knowledge relating to the process of design, as well as to scientific principles. There seems to be little indication that problem solving, as a conception of ESTE, has unique content directly related to problem solving (although problem solving strategies abound and are occasionally taught to elementary-school students).
Teaching methods. While all of the identified philosophies appear to support hands-on learning, it should be noted that in the method philosophy, ESTE is a method, and as a term is essentially synonymous with "constructive methodology"-what Bonser and Mossman (1923) referred to as making "changes in the forms of materials to increase their values" (p. 5).
Research Question 2: What classification of approaches to ESTE can be identified from the opinions of leaders in the field?
As aforementioned, the solution set consisted of four possible solutions generated via cluster analysis. Since Research Question 1 had already been addressed when these solutions were examined, it was theorized that three basic philosophies of ESTE were evident in the literature. Thus, a three-cluster solution of approaches to ESTE was sought. However, the four-cluster solution (Table 2) was found to be most interpretable. The clusters were output in an arbitrary order. The first cluster, secondary, consisted of the four items on the instrument which most clearly illustrated the view regarding elementary-school technology education as appropriately implemented employing traditionally secondary-school means, such as the applied-science view (exemplified on the instrument by the high-school Principles of Technology curriculum), extra- or non-curricular activities, tech prep, and an emphasis on careers.
The second cluster, progressive, seemed to represent the ideals of the founders of industrial arts-the progressives Bonser and Mossman (e.g., 1923)-and later exemplified by Maley (e.g., 1973, 1979) and others. Items in this cluster included constructive methodology, the socio-cultural approach, and the student-centered approach.
The third cluster was labeled modern. In contrast to the more traditional progressive approach, it was comprised of two items-modular technology education and computer emphasis-which have only recently been advocated in the literature for ESTE. Both items refer to systems of organizing technology education (e.g., Neden, 1990; Hornsby, 1993).
The final cluster, design/science, appears representative of the British design and technology movement (e.g., Dunn & Larson, 1990; Williams & Jinks, 1985) and its variants in the U.S. The items comprising this cluster were design/problem solving, engineering-systems approach, and math/science/technology integration.
Discussion
Approaches and philosophies compared. Although one-to-one correspondence was not identified, there were some strong relationships between certain approaches to and philosophies of ESTE. For example, the design/science approach strongly reflected the process philosophy while having little in common with either of the other philosophies. This approach subsumed math/science/technology integration, design/problem solving, and engineering systems-which as a whole reflect the philosophy, described above, of ESTE as a process.
The progressive approach had a perceptible connection to the method philosophy; witness Kirkwood's (1992) aforementioned statement that the hierarchy of concern in ESTE was (1) the child, (2) the curriculum, and (3) the technology activity. Two of the constituent parts of the progressive approach were student-centeredness and constructive methodology. The third aspect of the approach, a socio-cultural focus, does not appear to be incompatible with the method philosophy, but is not strongly brought out in the literature supporting this philosophy.
Less firm is the relationship between the modern approach and the content philosophy. The modern approach consisted purely of delivery systems-modular and computerized-and from the analysis, no content was implied. Nonetheless, the modular approach in this context itself implies technical content, and further implies that this content is important enough to justify the purchase of modules (see Petrina, 1993). Given Petrina's (e.g., 1993, 1994a) definition of modular technology education, several commercial programs for modular ESTE are available, such as Time-Travelers, a "technology education system designed especially for the elementary level" (Applied Technologies, 1996, p. 1), and the Techno-Train (Bedford Science Supply, n.d.).
The secondary approach to ESTE may have some relation to both the content and the method philosophies, as its constituents include both delivery systems and content areas. While this approach may well reflect a specific philosophy of technology education, it might be suggested that the secondary approach shares no special relationship with any philosophy of ESTE found in this study. This approach is supported by very little of the literature reviewed in addressing Research Question 1.
This brings up an important point. Those who wrote the literature exemplifying philosophies, and those whose responses were analyzed here to identify approaches, were not samples of the same population. This is to be expected-theoreticians make philosophies; practitioners take approaches. So those advocating a secondary approach to ESTE may simply be advocates of secondary technology education.
Relationship between the findings of this study and Hoots' classification system. Hoots' (1974) aforementioned historical philosophies were discerned from the literature of the preceding semicentury (approximately 1923-1973).
This system seems to be an expansion of the more traditional classification of content and method (see Miller, 1979). "The Industrial Arts Issue" (1958) of the California Journal of Elementary Education referred to two groups of educators with different emphases for ESIA. One group emphasized studying the technical aspects of industry, while another emphasized a more liberal study of technology. Both are content-driven views. The former represents Hoots' "tools, materials, and processes" philosophical category; the latter his "subject matter" category. Together these may be asserted to comprise a single "content" philosophy.
Gerbracht and Babcock (1959), whose arguments that "industrial arts is not another 'subject'" and that "industrial arts justifies its existence on the basis of the help it gives the school" (p. 1) identified them with the method philosophy, provided a range of emphases for ESIA. As Hoots (1974) noted, Gerbracht and Babcock epitomized not only the "method" philosophy, but the "arts-and-crafts" as well. Thus these two may be considered as constituents of one larger "method" philosophy.
It is rather straightforward, then, to associate the findings of Research Question 1 with Hoots' historical philosophies. The content philosophy of this study subsumes Hoots' "subject matter" and "tools, materials, and processes;" method includes his "arts & crafts" and "methodology." There is no analog in his system to the process philosophy identified here.
Associating the findings of Research Question 2 with Hoots' categorization was more difficult. This difficulty, however, further demonstrated the lack of parallelism between approaches and philosophies.
To some degree, the progressive approach identified in Research Question 2 was similar to Hoots' "methodology," which, he (1974) notes, argues that ESIA's contribution to the school "is in the psychological and sociological areas of child development and in the area of cognitive learning in other subject matter disciplines by providing realistic and concrete experiences related to those disciplines" (p. 226; italics added). Further, the modern approach to ESTE resembles, to a degree, Hoots' "tools, materials, and processes" philosophical category of ESIA. Hoots' (1974) criticisms of this category echo modern concerns about modular technology education, especially when he discusses the ease with which a teacher can overlook pedagogical concerns and "get into implementation-the actual classroom activities-and end up with a tool- and material-centered [program]" (p. 227).
There seem to be no strong relations between the remaining approaches identified here-design/science and secondary-and Hoots' categories. Figure 2 is an illustration of the relationships among the three categorization systems.
Directions for Further Research
In reviewing the literature to address Research Question 1, several articles were found which described ESTE programs or activities. Some of these were rich descriptions of the learning which can take place in the elementary classroom. Few of these articles simply reflected a single philosophy of ESTE. However, upon further inspection, it became clear that many reflected a single approach to ESTE. Hornsby's (1993) description of an ITEA-award-winning ESTE program in Kentucky makes it clear that program implementors have taken a modern approach; Kirkwood's (1992) approach was progressive. An appropriate extension of this research may be to analyze ESTE program-implementation articles in an effort to challenge or validate the results of the cluster analysis described herein.
In a prior study (Foster & Wright, 1996), it was found that technology-education leaders advocated different approaches for ESTE than they did for secondary programs. Nonetheless, research by Zuga (1989), Petrina (1994b), and others who have categorized or discussed categories of curricular approaches and philosophies of technology education, may shed some light on the findings reported here. Further investigation is needed associating approaches to and philosophies of ESTE with their counterparts in secondary technology education.
Final Thoughts
At the 1996 annual conference of the International Technology Education Association, two attendees, presumably secondary-school teachers, were overheard bemoaning the overabundance of elementary sessions in the conference program. Fewer sessions appropriate to high-school technology education-the profession's longtime bread-and-butter-were being offered, it seemed, to make room for ESTE presentations.
Scholarly productivity in ESIA seems to have dropped off in the mid-1970s when it became clear that the content-method issue wouldn't be easily reconciled. A decade later, with the acceptance of technology education, scholarly focus was being placed firmly on subject matter, not children, so conditions weren't right for a resurgence in interest in ESTE. Since then, the conditions seem to have improved considerably.
One may infer from the comments overheard at the ITEA convention that ESTE may not be welcome for long if it remains solely a topic of discussion at conferences. One also may infer from historical example that as ESTE moves from theory to practice, a variation of the content-method debate will almost certainly emerge.
This would be a dangerous combination: lack of support from rank-and-file technology teachers paired with infighting among ESTE advocates, most of whom are university faculty. Perhaps this can be avoided if supporters of ESTE can reach some degree of genuine philosophical agreement. Clearly a "kitchen sink" compromise such as the aforementioned 1971 definition from the National Conference for Elementary School Industrial Arts will not suffice.
This study identified a variety of approaches to and philosophies of ESTE. Unfortunately, no debate has emerged regarding their relative merits. And until one does, a vast majority of elementary school children are unlikely to experience technology education.
Figure 2. Relationships among the three categorization systems.
Table 2
Four-cluster solution classifying approaches to ESTE | 2014-10-01T00:00:00.000Z | 1997-01-01T00:00:00.000 | {
"year": 1997,
"sha1": "4d967ccebe64065fa0a00b97f4bfce73c79d7a60",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.21061/jte.v8i2.a.2",
"oa_status": "GOLD",
"pdf_src": "CiteSeerX",
"pdf_hash": "4d967ccebe64065fa0a00b97f4bfce73c79d7a60",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Political Science"
]
} |
3809008 | pes2o/s2orc | v3-fos-license | Mothers’ accounts of the impact on emotional wellbeing of organised peer support in pregnancy and early parenthood: a qualitative study
Background The transition to parenthood is a potentially vulnerable time for mothers’ mental health and approximately 9–21% of women experience depression and/or anxiety at this time. Many more experience sub-clinical symptoms of depression and anxiety, as well as stress, low self-esteem and a loss of confidence. Women’s emotional wellbeing is more at risk if they have little social support, a low income, are single parents or have a poor relationship with their partner. Peer support can comprise emotional, affirmational, informational and practical support; evidence of its impact on emotional wellbeing during pregnancy and afterwards is mixed. Methods This was a descriptive qualitative study, informed by phenomenological social psychology, exploring women’s experiences of the impact of organised peer support on their emotional wellbeing during pregnancy and in early parenthood. Semi-structured qualitative interviews were undertaken with women who had received peer support provided by ten projects in different parts of England, including both projects offering ‘mental health’ peer support and others offering more broadly-based peer support. The majority of participants were disadvantaged Black and ethnic minority women, including recent migrants. Interviews were audio-recorded and transcripts were analysed using inductive thematic analysis. Results 47 mothers were interviewed. Two key themes emerged: (1) ‘mothers’ self-identified emotional needs’, containing the subthemes ‘emotional distress’, ‘stressful circumstances’, ‘lack of social support’, and ‘unwilling to be open with professionals’; and (2) ‘how peer support affects mothers’, containing the subthemes ‘social connection’, ‘being heard’, ‘building confidence’, ‘empowerment’, ‘feeling valued’, ‘reducing stress through practical support’ and ‘the significance of “mental health” peer experiences’. 
Women described how peer support contributed to reducing their low mood and anxiety by overcoming feelings of isolation, disempowerment and stress, and increasing feelings of self-esteem, self-efficacy and parenting competence. Conclusion One-to-one peer support during pregnancy and after birth can have a number of interrelated positive impacts on the emotional wellbeing of mothers. Peer support is a promising and valued intervention, and may have particular salience for ethnic minority women, those who are recent migrants and women experiencing multiple disadvantages.
Background
The perinatal period and transition to parenthood is a vulnerable time for mothers' mental health. Approximately 9-13% of women experience depression at some time during pregnancy [1][2][3] and approximately 13-15% experience anxiety during pregnancy [4,5]. Approximately 13-21% of women experience depression at some time in the year after birth [2,6] and approximately 13% experience anxiety in the year after birth [4]. Women are more likely to experience antenatal and postnatal depression and anxiety if they are socially isolated and perceive themselves as having low social support, if they are single parents or have a poor relationship with their partner, if they have low self-esteem, if they are poor, or they are under 18 [7][8][9][10]. In addition to the impact of these mental health problems on the mother's quality of life, there is evidence that the mother's poor mental health both before and after birth can adversely affect her baby's physical [11], psychological [12,13], mental [14], emotional and behavioural [15] development, particularly in socio-economically disadvantaged families [16].
Because lack of social support is a significant risk factor for perinatal depression and anxiety [7][8][9][10], one intervention used to assist mothers with or at risk of perinatal mental health problems is peer support, described by Mead and MacNeil as being in general "defined by the fact that people who have like experiences can better relate and can consequently offer more authentic empathy and validation" [17]. Social support generally, and peer support specifically, are often described as comprising emotional, appraisal (affirmational), informational and sometimes instrumental (practical) support [18,19]; and Leger and Letourneau argue that "peer support offers a fifth dimension of empathetic support" [20].
One peer support intervention for postnatal depression is to bring affected women together in support groups where they can feel 'safe' to talk about their feelings of distress, whereas outside the support group they may become isolated with their difficult emotions because of shame at having 'failed' at an idealised version of motherhood [21]; there is, however, no high quality evidence of lasting impact of peer support groups on symptoms of depression [22]. A second model of peer support for postnatal depression is telephone support from a briefly trained volunteer who has herself recovered from the condition, which has been reported as effective in preventing postnatal depression among women who are at high risk of developing it [23], and potentially in assisting recovery in women who have depression [24]. A third model is one-to-one visits from trained volunteers (who may or may not themselves have experience of mental health problems). The evidence of effectiveness is mixed. A small pilot randomised controlled trial found weekly peer support visits to be effective in reducing symptoms of postnatal depression as measured by the Edinburgh Postnatal Depression Scale [25], and a before and after study found volunteer visits to be associated with reduced anxiety and depression [26], but a cluster randomised study found that one-to-one volunteer visits did not prevent the onset of postnatal depression in women considered 'at risk' [27].
Many pregnant women and new mothers who do not have a diagnosed mental illness experience subthreshold symptoms of depression and anxiety as they adapt to their maternal role [9], or stress, which can in itself adversely affect the developing baby [28]. It is also common for new mothers to experience low self-esteem and feelings of inadequacy when encountering discrepancy between their socially-conditioned expectations of motherhood, and its challenging reality [29,30]. Less is known about the impact of receiving volunteer or peer support on these women's emotional wellbeing more broadly, and the evidence is mixed. One randomised controlled trial showed that monthly postnatal visits from a minimally trained "community mother" could improve mothers' self-esteem [31], while another randomised controlled trial found no impact on maternal mental health at one year [32]. There is as yet little qualitative research on women's subjective experience of receiving perinatal volunteer or peer support outside the context of depression and in the antenatal as well as postnatal period. However, there is some evidence that regular visits from a trained volunteer doula (a woman who supports other women during pregnancy and birth) can help disadvantaged pregnant women and new mothers to feel less isolated, less unhappy, less afraid of birth and more confident [33,34]. There is also some evidence that mothers who receive home visits from volunteers in postnatal programmes feel that having someone to talk to improves their emotional wellbeing [35][36][37][38], reduces stress [39], and makes them feel better about themselves and their parenting [40].
Aiming to fill this gap in the qualitative literature, this paper reports original qualitative research, carried out in England, that explores mainly disadvantaged and migrant women's views about the impact of organised peer support on their emotional wellbeing during pregnancy and after birth, and their understanding of the mechanisms involved. It also considers whether these impacts and mechanisms, as identified by the mothers themselves, differ according to whether the mother and peer supporter have experience of a diagnosed mental health condition.
Study design
Because the purpose of this study was to explore mothers' own perceptions and lived experiences, without generating or superimposing theory, an experiential qualitative descriptive design was chosen [41,42], based on semi-structured interviews, and informed by the theoretical perspective of phenomenological social psychology [41]. This "low-inference" design [42] allows mothers' voices to be heard while acknowledging the role of both participants' understandings and the researchers' interpretations in the production of knowledge [43].
Participant recruitment
A researcher first contacted the co-ordinators of 10 peer support projects providing perinatal peer support in Bradford, Bristol, Burnley, Huddersfield, Halifax, Hull, London and rural North Yorkshire, to gain an understanding of the individual projects and to describe the research aims and process. Two projects specifically targeted 'mental health' and employed women with experience of perinatal mental illness to offer mothers counselling or more general peer support. Eight other projects were broadly-based, training unpaid volunteers, almost all of whom were mothers, to support a range of target groups: mothers living with HIV, mothers with very complex needs, young mothers, South Asian mothers, refugee and asylum seeker mothers, and mothers from a defined geographical area (but with a focus on disadvantaged women); the projects are shown in Table 1. Full details of the volunteer-based projects, some information about training and the key elements of the peer support provided have been reported elsewhere in an earlier paper where the objective was to describe the different models and key features of volunteer peer support currently in use [44]. These included active listening, providing information, signposting to local services and providing practical support.
The project co-ordinators described the research to supported mothers using the study information leaflet and either asked permission for the researcher to contact them, or arranged with those who wished to participate a time for interview. One mother decided not to participate when contacted by the researcher, citing logistical problems of caring for her baby.
Data collection
The researcher met the women who agreed to participate, and obtained written informed consent before carrying out a face-to-face semi-structured interview based on the topic guide. Each interview explored the mother's experiences of using the maternity services; how she heard about and decided to take up the peer support; the nature of the support and what she felt its impact had been; whether she felt there was any difference between receiving support from a volunteer and from a professional; how she felt about the ending of the support; and whether she would recommend any changes to the peer support. The duration of interviews varied (range 16-90 min, median 44 min); the shorter length of a few interviews was due to mothers needing to attend to their young children. Professional interpreting was offered to participants whose first language was not English; however, none took up the offer, although at the mother's request one interview was informally interpreted by a peer supporter. All the interviews were audio-recorded and professionally transcribed in full.
Data analysis
The mothers' interviews were analysed using inductive thematic analysis [45]. Transcripts were first checked against the audio recording, and then read and reread, and codes were identified inductively and recorded using NVIVO software. Codes were refined, combined and disaggregated as data collection continued, and emergent themes identified; initial codes and emergent themes were reconsidered in the light of subsequent interviews using constant comparison [46]. To ensure the validity of the analysis, one researcher (JM) undertook thematic analysis of all the transcripts and the other (MR) analysed a subset. Codes and emerging themes were discussed and agreed. Both researchers were aware of the need to approach the analysis reflexively, putting aside their existing knowledge of the topic so that the analysis remained close to participants' accounts, and acknowledging the potential impact of their own perspectives as White, UK-born women with children.
Results
A total of 47 mothers who had received peer support during pregnancy and after birth took part in semi-structured qualitative interviews between July 2013 and September 2014. Forty-six interviews were carried out face-to-face and one interview was carried out by telephone at the mother's request (oral informed consent was given and recorded in writing). This section describes, first, the participants' characteristics and, second, the results of the thematic analysis. There were two key themes: (1) 'mothers' self-identified emotional needs' and (2) 'how peer support affects mothers'. These key themes and their associated subthemes are shown in Fig. 1.
The participants
The 47 mothers ranged in age from 19 to over 40. Seven were supported by the two 'mental health' projects and 40 were supported by the eight 'broadly-based' projects. Twenty-seven were first-time mothers (including one currently pregnant with her first child), ten had two children, six had three children, two had four children and two had five children. One was a grandmother with legal care of her grandchild. Twenty-six were single parents (without a partner) but only one had actively chosen single parenthood. One mother was living apart from all five of her children and a further five had left older children behind in their home country. Thirty-one were born overseas, coming from Africa, Eastern Europe, South America, South Asia, East Asia, the Caribbean and the Middle East; the most common ethnicity for these migrants was Black African (17 mothers). Thirty did not speak English as their first language. Of the 16 mothers born in the UK, one was Asian, two were Black and one was White. All of the mothers supported by the 'mental health' projects were White British.
(Fig. 1 shows the two key themes and associated subthemes in women's identification of their emotional needs, and the psychological factors involved in the peer support process that impacted on their wellbeing.)
The mothers interviewed had experienced a range of traumatic experiences before pregnancy, including forced migration, having children taken into care, and the death of a child or partner. Ongoing stressful experiences during pregnancy and afterwards included unemployment, poverty, homelessness, chronic ill health, domestic abuse, children with health or behavioural problems, and insecure immigration status.
Mothers' self-identified emotional needs
This key theme reflects mothers' descriptions of their needs and the factors which they saw as affecting those needs. Four subthemes emerged: 'emotional distress', 'stressful circumstances', 'lack of social support', and 'unwilling to be open with professionals'.
Emotional distress
This subtheme relates to mothers' own descriptions of their distress. Three quarters of the mothers identified themselves as suffering from depression, anxiety or panic attacks during pregnancy or after birth (including all the mothers supported by the 'mental health' projects and two thirds of the mothers supported by the 'broadly-based' projects), although only half of these said they had received a formal diagnosis. Five mothers described having suicidal thoughts during pregnancy or afterwards (one in a 'mental health' project and four in 'broadly-based' projects): 'I crashed really badly. To the point where I was feeling suicidal' (M042). Some described specific anxieties focused on miscarriage, birth or the baby's health: 'I felt quite anxious about the birth and being out of control… In my mind was all about, "It can only go wrong"' (M015); 'I was constantly thinking, "Is she still breathing? Is she still breathing? Hasn't she forgot to breathe?"' (M008).
Many mothers described a collapse in their self-confidence when they compared themselves unfavourably to idealised images of 'perfect' motherhood: "I completely lost my confidence that time…Always there is a feeling in the back of my head that "Am I doing it right? Am I a good mother?"' (M002), and for some this included losing social confidence because of fears of being judged by other mothers: 'I became a little bit withdrawn… I didn't want to go into details and open up to complete strangers, 'cause I didn't want them to judge me' (M019). For some mothers experiencing mental illness, feelings of inadequacy had precipitated a loss of self-esteem and the development of a negative maternal identity: 'I felt like I'd lost my identity… I was just convinced that I was the world's crappest mum' (M038); "You feel like you're not doing a good enough job as a parent. And I used to feel like I was failing" (M029).
Stressful circumstances
This subtheme reflects mothers' accounts of the emotional effects of their difficult life experiences. Almost all the mothers referred to ways in which current difficult circumstances had affected their emotional wellbeing during pregnancy or afterwards, saying that their situation made them feel 'stressed', 'sad', 'scared', 'afraid', 'frightened', 'desperate', or 'cursed'. Some also said explicitly that past traumas continued to weigh on their minds: 'Every time you're asleep you dream about [the gang rape], the flashback, it's killing. I don't pray for anybody to be in that situation when they are pregnant' (M033). Several mothers described how they had simply felt overwhelmed by their situation and lacked the resources to cope, feelings consistent with low self-efficacy: 'And it was the whole world is upon me… How am I going to cope?' (M028); 'I am struggling to survive, I am struggling to look after myself and my kids…sometimes I feel old and beyond my ability' (M036). One mother had fantasised about going to sleep until her postnatal depression had passed: "I just didn't want to be there anymore… there was times when I'd look at a bus and think, 'If I could just get run over and go in hospital, even for a few months, I'll wake up and then I'll be okay"' (M038).
Lack of social support
This subtheme considers how all the mothers described their lack of meaningful social connection. Most of the mothers were in situations where they had considerable unmet needs for social support, as defined by Brown (cited by Oakley): "information, nurturance, empathy, encouragement, validating behaviour, constructive genuineness, sharedness and reciprocity, instrumental help, or recognition of competence." [18] Their sense of social isolation had a number of components, explored below.
Many mothers felt physically alone. Thirty one were migrants to the UK, including nine who had arrived seeking asylum or were victims of people trafficking. They commonly had limited social connections in the UK and some had no connections at all: 'I don't even know anybody here, I don't know where to start' (M021). Some mothers had not had time to build a local social network because they had been dispersed under the asylum support system to unfamiliar places or had experienced homelessness and the frequent moves associated with living in temporary accommodation: 'We lived in [one area], and then I had to move to [another area], and then Council again moved me here… We don't have any friends here. Only by ourselves' (M001). Some found it hard to make friends in the transient community on an Army base: 'Everybody seems to be, keep themselves to themselves… I've been housebound basically. Just don't know anyone' (M023), despite their efforts to join local groups: "I've walked into the room and everyone's already got their friend groups and they don't want to talk to you" (M027). Many mothers spoke about how this physical isolation engendered loneliness: 'Lonely…sometimes [my husband] would be the only person I'd see for weeks, that I'd actually talk to' (M026).
Some mothers might have appeared to have social support around them, but said they felt unable to share difficult thoughts and feelings with family and friends, for a range of reasons. Some mothers had a partner who was unsupportive or abusive: 'He's very focused on himself…it wasn't good for my self-esteem at all 'cause he really was cruel to me' (M039). Some said it was because family and friends would respond with criticism or inappropriate advice, or because it would feel like an admission of failure: 'Your friends, your family are telling you, "You must do this, not do that"' (M043); 'I can't tell people I can't cope. In Africa they would say "Then why did you get pregnant?"' (M006). For others it was because their family were too emotionally invested and would deny the validity of the mother's concerns or would be upset by her feelings: 'Sometimes [my husband] says, "No, it's in your mind". But it's not always in my mind…I am always worried. I didn't tell this to my husband or my mum, I thought maybe they would feel more worried' (M002). Some mothers with diagnosed mental health issues concealed their feelings because they found friends and family judgemental about mental illness, or felt ashamed and guilty that they were not enjoying motherhood: 'I pretend to be happy, I do that with my family as well 'cause I haven't told them [about my depression]… They don't react nicely' (M040); '[People say,] "I don't understand how she can be depressed when she's just had a baby, one of the most beautifullest things in the world" …That makes you go even more into your shell and feel more embarrassed and distraught…so ashamed' (M038).
Some mothers were estranged from family, friends or community, because they had transgressed cultural norms: 'We are Muslim and my mum she's like strict, and [for] the woman to be pregnant, she have to be married first…when I was pregnant three months she told me I have to go away' (M010). Some young mothers had found their friends had lost interest in them: 'I lost most of my friends really when I had a baby…People who are my age without kids, they just want to go out all the time drinking and stuff whereas I'm not like that obviously' (M027). Other mothers had deliberately isolated themselves because they felt vulnerable to gossip and criticism: 'We had very dense community and everybody know each other and they talk…and they put their nose in every issues…they make gossip.. And then day by day I stopped with my friends' (M036).
Unwilling to be open with professionals
This subtheme describes how most mothers did not feel able to make use of professional help for their feelings of emotional distress. Many extended the self-censorship which they practised with family and friends, to their interactions with health professionals. For example, midwives were consistently said to be too busy to listen to women's concerns: 'If you are at the hospital, midwife [is] limited. I can't really ease my worry [to] those people' (M046); and to have a professional agenda that did not include meaningfully addressing emotional needs: '[The midwives] were all really nice… but I feel they actually had their own agenda…The checklist -"Blood pressure, is it fine? Are we having the urine test? And let's feel the baby." So they do ask, "Oh how are you feeling?" But that's very much at the bottom of the priorities… they don't have the knowledge to actually deal with it'. (M015) Several mothers with diagnosed mental health problems distrusted mental health and social care professionals, who they felt lacked empathy or genuine interest: 'It's just their job. So just they're not really interested in what I've got to say or they're not really bothered' (M041). They were guarded in their interactions with these professionals who had, they believed, stereotyped responses to people with mental health problems: 'If you ring social services …well I don't dare ask for any help, [their only response is] child protection, child protection, child protection' (M040); and were primarily watching them for signs of failure: '[The perinatal psychiatric team] were there just to observe me… I always felt like they are picking up on what I am not doing right' (M006).
How peer support affects mothers
This key theme reflects women's accounts of the impact of receiving peer support during pregnancy and after birth on their emotional wellbeing. Seven subthemes were identified: 'social connection', 'being heard', 'building confidence', 'empowerment', 'feeling valued', 'reducing stress through practical support' and 'the significance of "mental health" peer experiences'. Figure 1 illustrates how they relate to the mothers' self-identified needs. There were no identifiable differences related to mothers' socio-demographic characteristics in the way that they described these impacts.
Social connection
This subtheme examines how, for the mothers without accessible social networks, visits and calls from their peer supporter were in themselves an important source of morale-boosting social contact: 'If [the peer supporter] wasn't there I would feel like alone, crying every day' (M010). Many of the peer supporters also accompanied mothers to local parent groups where they could meet others, and which they had lacked the confidence to attend alone. In four projects the peer supporters ran their own discussion or activity groups to bring mothers together for mutual support, increasing women's sense of social connection and confidence: "It's made me a lot more confident because I'm getting out more and I'm seeing people more" (M025); "A lot of mothers with depression just feel like they're alone, and when I got [to the group] I didn't feel alone anymore" (M029).
A few women described how accepting the peer support felt like taking a risk because it might impose an additional stressful social obligation, but in all cases they were quickly won over by positive experiences of the peer support relationship: 'I just thought, "It's going to be somebody that's going to come round every other day and do my head in … [But] I stuck with it and… I'm glad I actually got the support because they are actually like really, really friendly' (M020).
There were some examples where a peer support project offered a fairly structured programme of (non-emotional) mentoring, but where the supported women nonetheless described the support primarily in terms of the emotional connection they derived from it: 'I found so lovely and nice…[the peer supporter] come here and we talk together. A bit, I feel I have someone to talk to' (M036). However, one mother expressed the view that having monthly 'mentoring' visits was insufficient for a strong relationship to develop: 'I would like to have seen [the peer supporter] a bit more often… you don't really develop a huge relationship' (M003).
Being heard
Across all of the projects, peer supporters had been trained in non-directive listening and this was the aspect of their support that was most frequently mentioned by the mothers, who described the emotional release of being able to talk openly, particularly about their feelings of emotional distress: 'When the problem is really, really much I feel depressed, I just call her and she listens to me. I just smash everything on her and she listens to me' (M028). By contrast with mothers' experiences with family and friends, the peer supporter listened without passing judgement or giving advice, and accepted the mother's feelings: 'You can be open and you can be yourself, and if you have got something on your mind you know you can say it without being judged' (M019). Mothers who felt stigmatised by their situations found it a relief to be able to confide in volunteers who were outside their normal social circle, and were bound by a duty of confidentiality: 'The circle of people that you are around, some things you can't share with them because they will go and tell other people…so I've been keeping things inside me for a very long time. So when I get to meet [a peer supporter] where I can share things with, it feels like there's a burden lift off my shoulder.' (M012) Many mothers juxtaposed what they saw as midwives' superficial invitation to disclose feelings with the real emotional support from their peer supporter, who had built up a relationship of trust with them. Some said that because of this relationship, the peer supporter was the first person with whom they had been able to be honest about how they felt: '[The peer supporter] was someone who talked to me all the time, kept in touch with me all the time, so if someone is talking to you, is building that kind of relationship, you kind of feel confident to share with them anything' (M045). 
One mother contrasted her disappointment at confiding in a midwife who did not continue to provide antenatal care, with her ongoing relationship with a peer supporter: 'My midwife know about my situation… [Then] I couldn't see her anymore.… I was trusting her, I tell her all my life, and then later she showed me her back…[The peer supporter] was good…just like a sister' (M014).
Many of the mothers who had a diagnosed mental health condition described their relief at being able to talk openly about their mental health to a peer supporter, compared with unsupportive family or friends: '[Your family] might think you're perfectly fine… but you feel like you're going absolutely to bits … Having [a peer supporter] to talk about your issues with every week is a big release' (M039). They also contrasted what they found to be the off-putting attitudes of health and social care professionals with the positive attitude of their peer supporters, both those who had their own experiences of mental health problems and those who did not: '[The peer supporters] really supported me wholesomely to make sure that I went through the whole process okay… whereas in the hospitals or the midwives I always knew whatever I say to them will be on my notes' (M006, broadly-based project); 'I thought if I said something I would get Social Services back and they'd come and take [my baby]… You can talk to [the peer supporters] about anything and they'll try and help if they can…they're more supportive, down-to-earth, caring.' (M040, mental health project)
Building confidence
This subtheme considers the ways in which peer supporters helped mothers to rebuild their self-confidence. Peer supporters in the 'broadly-based' projects consistently gave the mothers positive feedback and focused on their strengths: '[The peer supporter] made me feel better because [she was] speaking always good things [about] me' (M013). They also challenged mothers' self-perception that they were abnormal or inadequate, by normalising their parenting concerns: '[The peer supporter] gave me the confidence…the first thing she said to me was, "You're doing OK and this is normal"' (M003). In some cases peer supporters achieved this by drawing on their own lives as mothers to demonstrate that becoming a parent is a learning experience for everyone: '[The peer supporter] talk about…her personal experience. Or how she look after her kids, and that's made me a bit calm and I say, "Oh my gosh, just not me I have this difficulty. People had before"' (M036). Where the peer supporters led groups for the mothers, members of the group could provide this validation of feelings and experiences: "It's nice to know that other people have felt the same…you feel like it's normal to have those feelings rather than just be sat at home thinking you're the only one feeling like that" (M027). Where the group was skilfully led and structured inclusively to avoid the social cliques that mothers had experienced in drop-in groups, this also enabled mothers to succeed at social interactions and increased their social confidence: "When you see that they are actually interested in what you're saying… that's made me a bit more confident that I should just be a bit more positive about speaking to people and not just think they don't want to talk to me" (M027).
Mothers at the 'mental health' projects described the beneficial dynamic when their peer supporters talked about some of their own experiences of mental illness and offered non-judgemental acceptance of women's difficult feelings: 'You've got the acceptance here and it kind of gives you a bit of acceptance of yourself…It's like unconditional love really' (M039). Peer supporters also role-modelled the pathway to recovery, which inspired mothers with confidence about their future: 'Everything that [the peer supporters] used to say, I felt like I could trust in that, because I could see that they were well. [They're] living proof' (M038). This normalisation of their mood and experience was particularly important for the self-confidence of those depressed or anxious mothers who struggled with a sense of profound failure when they compared themselves with other women who appeared to be succeeding effortlessly at motherhood: 'I used to feel like everyone was watching me … like they'd all be judging me…[Other women] made it look so easy, and their babies were just so well behaved and they all looked so perfect…and I was just struggling just to get out the house in the morning…I always looked a mess and I just used to feel really sweaty and minging all the time… I knew that [the peer supporter]'d understand 'cause she'd been through it.' (M038) As for mothers in the 'broadly-based' projects, the groups run by peer supporters for mothers with mental health problems could also provide a powerful experience of normalisation: "To be able to come and not feel different…. This was like my safe haven for two hours a week" (M038); they were also places where women could be honest about their feelings: 'Everybody else is in the same boat so you can talk to them about [depression] and they don't criticise you like other mums do. Made me feel more confident… I can succeed' (M040). For one mother with anxiety, however, the group environment was more than she could cope with: 'It was just awful… it was too overwhelming. I felt too many strong overpowering feelings' (M039).
Empowerment
This subtheme looks at how mothers felt empowered by the informational, motivational and moral support aspects of peer support, and how this boosted their self-efficacy in the face of serious challenges. In most projects, an important part of the peer supporters' role was enabling women to find solutions to their problems and to make informed decisions about their maternity options and other issues. They did this by offering evidence-based information or signposting to reliable sources of information, and then helping the mother to reflect on the different options and come to her own decision. The peer supporter training in most projects strongly emphasised the importance of giving non-directive information rather than 'advice' (even if the mother asked for advice), so the mothers remained in control of their decisions: '[The peer supporter]'ll never give you the answers, she'd just suggest stuff… she'll say, "Have you tried this, have you tried that?"' (M003). Another aspect of empowerment was orientating women around their communities so that they understood the services available to them locally, and were thus able to resolve some of the stressful practical issues they faced: 'Lots of people [are] there for your help but if you don't know, you can't get any help…When you have [a peer supporter] they have contact with everywhere' (M011).
The mothers described the impact of this empowering approach as reducing their anxiety and making them feel more in control: 'The first time I met my volunteer she asked me what's my biggest concern. At that time my biggest concern is, I will give birth on the way [to hospital]…The volunteer did not laugh [at] me, she just helped me to find some information from books, from internet, that discuss whole three stage of labour. Then I feel ready to understand what will happen.' (M043) For some mothers with low self-efficacy, the peer supporters' unconditional support and affirmation gave women a renewed sense of agency over their lives. Even where they were unable to help with solving intractable social or legal problems which made some mothers feel trapped and powerless, some peer supporters offered mothers relief by being a supportive presence with solidarity and hopeful words, and where there was a shared religious faith, sometimes by praying together: 'You are able to pour all your stress out by talking to someone, even if they don't have to tell you what to do…[The peer supporter] was really loving and she also helped me spiritually…she would pray with me' (M045). Several of the Black African mothers described this moral and spiritual support as "encouragement" and said that it made them feel stronger and better able to cope with their problems: '[The peer supporter] gave me courage… Sometimes when you feel yourself lonely and you are down, [if] you have somebody [to] encourage you, so that is the difference, it's not like you are alone' (M014).
Feeling valued
A consistent subtheme across all projects was how mothers experienced the peer supporters' relationship with them as strongly contributing to increased feelings of self-esteem. The mothers felt that the peer supporters cared about them as individuals, and were unconditionally woman-focused rather than primarily interested (like statutory services) in their children. For many of the mothers in the 'broadly-based' projects, the fact that their peer supporter was a volunteer was an important component in this feeling of being valued, because they perceived her as caring enough to give them her own time: 'The lonely life one lives in this country, I felt it was really heart-warming for someone to come in and see me and talk to me and find out how things are going… I come from Africa, so I know when someone comes to visit you, they've taken their time and they are just thinking about you.' (M045) In some cases, this consistent, long-term peer support relationship had a transformative effect on the mother's feelings about herself: 'Before I'd be like, "Oh, I don't want to even get dressed." … [The peer supporter]'s kind of boosted my confidence and self-esteem. Like now I'll actually take time and … do my hair and do a bit of make-up and go out and look nice, and it's like before I really couldn't be bothered because I were constantly feeling low about myself.' (M020) Several of the mothers described how having a peer supporter alongside them was an emotional lifeline when they felt completely alone in their difficult situation and were having suicidal thoughts; the consistent, caring contact broke through their sense of isolation and despair: 'The whole stress was just too much, everything was heavy on me… During that time I was thinking, "It's the end"…[The peer supporters] didn't allow me to think I don't have anyone, nobody to look after me. I can see a brighter future now.'
(M004) However, for one vulnerable mother contact with a peer supporter had left her feeling undervalued because she felt that her concerns were not taken sufficiently seriously (the project then allocated her a different peer supporter): 'I felt [the] volunteer was trying to put me down, like when she made some calls to some charity for help and I did a follow-up call to say, "So what happened?", she was like, "Oh it was just the other day, you know, I've got other people [to support]."' (M034) Although most of the projects offered support for a defined period (ending between six weeks and two years after birth), in many cases the supporters remained informally in contact with the women they supported after the end of the 'official' support. Where this occurred it reinforced the mothers' belief that the peer supporters had a valued and real relationship with them and were not just offering a 'service': 'We staying friend, even yesterday I was spoken to them, so we still alright…It was, "How are you? How are things going? When I'm going to see you now?" Like friends' (M014).
Reducing stress
Many of the projects offered practical support such as second hand baby clothes and equipment, or help with fares to hospital; and some individual volunteers spontaneously gave mothers help with shopping and cooking after the baby was born, transport, interpreting, or looked after the newborn baby for a short time so that the mother could rest. Mothers who had received practical support usually said that this was as important to them as emotional support, and some said it carried an emotional meaning: '[The peer supporter] was encouraging. Not only with words… When I am stressed, the way she would make food for me, it has given me encouragement' (M028). Some women explicitly described how the support affected their emotional wellbeing by reducing anxiety about practical problems: 'Everything was just sort of either black or white, and there were no grey areas. So if you don't have a house you have nowhere to sleep, if you don't have clothes the child won't have anything to dress in. And that would give me a lot of stress. [The peer supporters] assured me they are going to provide me with the first clothes she would wear, so that really settled me up, so I'm thinking, "Really I don't need to worry."' (M006)

The significance of 'mental health' peer experience

Most of the mothers supported by the 'mental health' projects believed that the shared experience of mental illness underpinned the effectiveness of peer support by creating a safe space for self-disclosure and inspiring hope for recovery: 'Talking to someone who'd gone through [postnatal depression] made me feel okay about divulging some of the things that I was thinking and feeling' (M038).
However, one mother took the view that shared experience was helpful but not essential, and that the most important quality of a supporter was the right attitude: 'As long as you've got that personality behind you, like to sympathise with the person and you want to help, then I think there is people [without depression] that can help others with depression' (M029).
This view was implicitly supported by the two thirds of the mothers in the 'broadly-based' projects who identified themselves as suffering from depression, anxiety or panic attacks, but whose peer supporter had not (necessarily) herself experienced mental health issues during pregnancy or afterwards. For almost all of these mothers, poor mental health was just one of the difficult issues in their lives. In some projects, the 'peer' element was conceptualised as based on a different issue (for example, shared experience of living with HIV, seeking asylum, being a young parent or having a shared cultural background). In other projects there was no requirement of having a shared 'issue', and the 'peer' element came almost entirely from a shared experience of motherhood. All of these mothers nonetheless felt able to talk openly about their mental and emotional health to their supporter without fear of stigmatisation and said they felt better for having done so: 'When I got pregnant I got depression … start to cry all the time…. I was thinking, "How am I going to tell to the person which I never met what happened to me?" … [The peer supporter] said, "Don't worry or anything, everything will be fine." And she is saying that way that you believe her.' (M035, broadly-based project)
Discussion
The impact of peer support was studied at a critical time in the lives of women who were commonly experiencing multiple disadvantages. Robust quantitative studies of one-to-one peer support are difficult to carry out, given the heterogeneity, relatively small scale and largely short-term funding of projects involved, and the difficulties inherent in trying to standardise the encounters between individual peer supporters and those they support. [47][48][49] This qualitative study adds to the literature by showing how mainly disadvantaged mothers experience and describe one-to-one peer support from a trained supporter, during pregnancy and after birth, as having a number of substantial and interlocking positive impacts on their emotional wellbeing.
Research on social support has consistently found that "a person's perception of the availability of others as a resource contributes significantly to the individual's selfregulation of distress" [50], and the absence of social support is a significant risk factor for antenatal and postnatal depression and anxiety [7][8][9][10]. It was notable that all of the mothers in this study were either actually highly socially isolated (the structural dimension of social support), or did not perceive themselves as having sufficient functional social support because they felt unable to confide in their partner, family or friends.
Beyond the basic human need for social connection, the most prominent theme in these interviews was 'being heard': mothers' relief at having someone non-judgemental to talk with honestly about their problems, fears, concerns and other feelings. This emphasis on skilled listening is reflected in other studies that report mothers' perceptions of the impact and benefits of organised peer or volunteer support [26, 33, 35-37, 39, 40, 51, 52]. 'Being heard' by a peer support volunteer or peer counsellor appears to have a comparable function to the 'safe arena' for connecting with others, sharing experiences, and 'unsilencing voices', identified by Jones et al. [21] as key to alleviating the burden of distress for mothers with perinatal mental health issues. It was striking that for some women who had experienced considerable adversity and were living in very difficult circumstances, it was a source of consolation and encouragement simply to have a peer supporter expressing moral support and solidarity, even if there was little she could do to improve the situation.
The mothers described a range of specific reasons why they concealed their thoughts and feelings in social situations and from health professionals, but not from peer supporters. Like the mothers interviewed by Tammentie et al. [53], they were unwilling to speak honestly about their feelings because they felt family and friends would not understand, would be upset and deny the validity of the woman's feelings, or were likely to respond with inappropriate advice, criticism or gossip. Although health professionals are expected to ask women about their mental health history and their current emotional wellbeing at structured points during pregnancy and the postnatal period [22], many of the mothers had not felt able to discuss their feelings honestly in response to these questions. They characterised health professionals as too time-pressed to genuinely listen, focused on parenting deficits and safeguarding risks, likely to make assumptions, and interested in the baby but not the mother; the absence of continuity of care was also an obstacle. This finds strong echoes in Raymond's study in the context of antenatal depression [54], where women identified barriers to disclosing depressive feelings to a midwife as having multiple caregivers during pregnancy, not being taken seriously, being rushed, and not being encouraged to talk. These potential missed opportunities for diagnosis are also reflected in midwives' concerns about asking pregnant women about their mental health [55].
Meetings with the peer supporters over time changed mothers' feelings about themselves. Consistent positive feedback (appraisal) and the peer supporters' sharing of their own parenting experiences helped to normalise their concerns and build their self-esteem and self-confidence in their parenting role, while the peer supporters' support for informed decision making increased feelings of self-efficacy and empowerment, reflecting the impact of volunteers found by Granville and Sugarman [34] and Spiby et al. [33]. Like the mothers interviewed by Mauthner [56] and Letourneau et al. [57], the mothers in the 'mental health' projects described how comparing themselves to non-depressed mothers made them feel alienated and abnormal, and they had struggled with feelings of shame and motherhood failure. By contrast, receiving counselling or social support from workers who had experienced and recovered from perinatal mental illness made them feel understood and accepted; they felt safe disclosing difficult thoughts and feelings and felt more optimistic about recovering when they met other women who had recovered (as reported by Montgomery et al. [58]), a process described by Jones et al. as "an essential aspect of the transition into maternal self-efficacy" [21].
The self-esteem of mothers in all the projects was enhanced by believing that their peer supporters genuinely cared about them, and that they had a real relationship (the nature of these relationships between women and their peer supporters is explored more fully in an earlier paper describing the different models of peer support used in the different projects [44].) Where the peer supporter was a volunteer, the support carried an additional emotional meaning: the mother's worth was affirmed by the volunteer choosing to invest significant amounts of her own time in the relationship.
It is an inherent challenge in peer support to manage the ending of the relationship appropriately [32], and this may be particularly important when the peer support is offered during an emotionally intense life transition such as having a baby and withdrawn during the weeks after birth. In a study of support from volunteer doulas during pregnancy and birth and for 6-12 weeks after birth (with no further contact permitted), Spiby et al. found that a third of the women felt the support had ended too soon and "the feelings of loss associated with endings were identified as itself constituting an impact for women" [33]. By contrast, endings were handled less abruptly in the projects in this study, and some mothers described how their peer support had evolved into an enduring friendship, affirming the emotional validity of the original relationship.
It has been recognised that peer volunteers can potentially have negative impacts on those they support, for example, Dennis [51] reported that 10% of mothers receiving telephone peer support said that their volunteer had minimised their problems, possibly (the author suggests) in an effort to normalise their situations. There was only one report of a negative impact on emotional wellbeing in this study, with a mother who felt that her peer supporter had belittled her concerns.
When the accounts of the seven mothers who received 'mental health' support were set alongside the accounts of the forty mothers who received more broadly based support, clear parallels emerged. Under the subthemes of 'being heard' and 'building confidence', the experiences of mothers receiving support from 'mental health' projects were effectively amplified versions of the experiences of mothers supported by the other projects. Whether or not the mother had a mental health problem, the core issue was women feeling alone with their problems, emotionally and physically isolated, and not like other mothers. The core contribution of peer support to emotional wellbeing was to enable them to confide honestly in someone who listened unconditionally and affirmed their competence and value, either directly or by involving them in a group that could function constructively as an appropriate reference group for normalisation.
The seven women receiving 'mental health' peer support all described the specific and profound benefits of this support, consistent with previous research [21], but one challenged the notion that only people with experience of mental health problems could give effective support. This reflects the accounts of the other women who received support from projects without a specific mental health focus, many of whom described themselves as suffering from depression, anxiety or panic attacks. In spite of the lack of personal 'mental health' peer experience, these mothers all said they were able to talk openly to their peer supporter about their depression and anxiety, reporting an increased sense of emotional wellbeing as the consequence of the contact.
Leger and Letourneau [20] argue that an essential element of being a peer supporter is the shared experience and knowing what it is like to cope with and recover from postpartum depression. This is the definition of peer support as commonly used in the field of mental health [17]. However, for many of the mothers in this study, the 'peer' aspect of the support came simply from a shared experience of motherhood. For some, the 'peer' aspect was more specific, but did not arise in relation to mental health. Irrespective of the 'issue' by which the different projects chose to identify the recipients of peer support, the most important thing for mothers was that the peer supporter listened to the mother in the context of a warm, unconditional, non-judgemental relationship [59]. This suggests that having specific 'peer' experiences was an important mechanism for building trust for some vulnerable women, but that overall, attitudes were more important than circumstances in enabling mothers to speak honestly to the peer supporter about their feelings and to derive emotional support from the encounter. This is consistent with the findings of Letourneau and Secco [24], that many women with postnatal depression preferred one to one support from a professional or peer who had an understanding of symptoms and treatments, was non-judgmental, and ideally but not necessarily had experienced and recovered from the illness. It may thus be the case that, where pregnant women and new mothers are experiencing multiple disadvantages besides their mental health issues, careful selection and training of peer supporters is the key to providing effective emotional support, rather than necessarily the matching of women to supporters with specific mental health experiences of their own. 
Maternity care professionals should be sensitive to the possibility that a pregnant woman or new mother who may appear to be supported by a partner and local social network, may in reality be profoundly lacking in meaningful social support, and may therefore benefit from peer support.
This paper contributes to the literature through its dual focus on mothers with and without diagnosed mental health problems, and the participant groups included in this peer support study. A strength of the research was the use of in-depth qualitative interviews to explore the peer support experiences of 47 mainly disadvantaged mothers of very diverse multicultural backgrounds and with a range of challenging life experiences, whose voices are often not heard in research. Another strength was that the mothers were drawn from 10 peer support projects around England, enabling the experiences of mothers who received support from projects with and without a 'mental health' focus to be presented together. One limitation was that participants were contacted through the project co-ordinators -this was essential to gain the trust of some very vulnerable women, but meant that the researchers were not aware of how many declined to participate at that stage. One mother's interview was informally interpreted by her peer supporter at her request, so her comments about the impact of the support she had received had to be considered in this context (she is not quoted in this paper). A further limitation was that, because of the possibility that women receiving some types of state support could be moved to another part of the country at any time, some mothers were interviewed sooner than was originally planned and had not yet experienced the ending of their peer support.
Conclusion
Qualitative evidence from the study suggests that peer support can contribute to reducing low mood and anxiety by overcoming feelings of isolation, disempowerment and stress, supporting improvements in mothers' feelings of self-esteem, self-efficacy and parenting competence. Identified benefits for maternal mental health and wellbeing indicate that peer support is a promising and valued intervention at a critical time in the transition to parenthood and may be particularly valuable for migrant women and women experiencing multiple disadvantages.
Care provision and funding for pregnancy and postnatal peer support projects should recognise the positive impact of receiving face to face, organised support from trained supporters. Further research could explore, both qualitatively and quantitatively, the extent and ways in which perceptions of peer support and its impact on emotional wellbeing differ for mothers from a range of different cultural and socio-economic backgrounds, with diverse and varying challenges in their lives, and with varying degrees of severity of mental illness.
COVID-19 and educational inequality: How school closures affect low- and high-achieving students
In spring 2020, governments around the globe shut down schools to mitigate the spread of the novel coronavirus. We argue that low-achieving students may be particularly affected by the lack of educator support during school closures. We collect detailed time-use information on students before and during the school closures in a survey of 1099 parents in Germany. We find that while students on average reduced their daily learning time of 7.4 h by about half, the reduction was significantly larger for low-achievers (4.1 h) than for high-achievers (3.7 h). Low-achievers disproportionately replaced learning time with detrimental activities such as TV or computer games rather than with activities more conducive to child development. The learning gap was not compensated by parents or schools who provided less support for low-achieving students.
Introduction
To inhibit the spread of the COVID-19 pandemic, many countries closed their schools for several months during the first half of 2020. These closures affected over 90% of school children (1.5 billion) worldwide (UNESCO, 2020a). A defining feature of school closures is that students do not have the same support of teachers as in traditional in-person classroom teaching. Many have argued that the school closures may increase inequality between children from different family backgrounds (e.g., UNESCO 2020b, European …).

By documenting how the discontinuation of in-person teaching differentially affects low- and high-achieving students, we contribute to the broad literatures on educational production (e.g., Hanushek 2020), skill formation (e.g., Cunha and Heckman 2007), and educational inequality (e.g., Björklund and Salvanes 2011). Our results complement the English time-use study during COVID-19 by Andrew et al. (2020) by investigating inequality along the achievement dimension as well as compensating activities of parents and schools. Our study of a range of substituted conducive and detrimental activities also complements several other contemporaneous studies on how COVID-19-induced school closures affected learning inputs and outcomes such as online learning (e.g., Chetty et al. 2020 for online lesson completion and Bacher-Hicks et al. 2021 for household search for online learning resources in the United States) and standardized tests (e.g., Maldonado and Witte 2020 for Flemish Belgium and Engzell et al. 2021 for the Netherlands), neither of which has a focus on differential effects by the achievement dimension. 2 Our findings contribute to the rapidly emerging literature on effects of the COVID-19 pandemic on other economic and social outcomes such as labor markets, families, and well-being (e.g., Alon et al. 2020, Chetty et al. 2020, and Fetzer et al. 2020).
The remainder of the paper is structured as follows. Section 2 provides a brief conceptual framework and institutional background on schooling during the COVID-19 pandemic in Germany. Section 3 introduces our data and research design. Section 4 presents results on how the COVID-19 school closures affected learning and other activities of low-and high-achieving students. Section 5 presents results on support structures by parents and schools. Section 6 reports results on differences by parental education background, child gender, and school type as additional dimensions of inequality. Section 7 discusses the findings, and Section 8 concludes.
Conceptual framework and institutional background
This section provides a conceptual framework (Section 2.1) and institutional background (Section 2.2).
School closures in the framework of an education production function
To frame ideas, we conceptualize the potential effects of school closures on educational inequality in the framework of a standard education production function (e.g., Hanushek 1986, 2020). The production of educational output is expressed as a function f of student ability A, family inputs F, and school inputs S:

ΔY_i = f(A_i, F_i, S_i),

where ΔY_i is the change in educational output, or learning, of student i. While educational output can be conceived generally as the acquisition of skills, ΔY_i will be approximated by student i's daily learning time in our empirical application. We will discuss the implications of this approximation for the interpretation of changes in educational inequality below. In this framework, school closures can be thought of as a reduction in school inputs S_i. Specifically, a defining feature of school closures is that there is no teacher in the room to help students with their learning. As teachers are probably the most important school input factor for student learning (e.g., Hanushek 1971, Rivkin et al. 2005, and Chetty et al. 2014), students are missing out on key support, and their learning is left more to the discretion of themselves and their families. In standard applications, the education production function is often simplified to be additive in the different inputs. In this case, a uniform change in school inputs would have the same effect on children from different family backgrounds and different ability levels, thereby leaving educational inequality unaffected. For school closures to affect educational inequality, either the amount or the production elasticities of the other inputs must depend on the extent of school inputs. 3 One often hypothesized aspect is that the extent to which families compensate for reduced school inputs may depend on their socioeconomic background (SES).
Their child's education may enter the utility function of high-SES parents more strongly, higher education may make them better substitute teachers, and they may have weaker budget constraints. As a consequence, high-SES parents may make sure that their child spends more time learning, may increase their family inputs more strongly, and may be in a better position (either financially or in terms of managing the curricular content) to support their child's learning activities. Formally, provided family inputs may depend on provided school inputs, F_i = F_i(S_i) with ∂F_i/∂S_i ≤ 0, and high-SES families (h) may react more strongly (in absolute terms) to a decline in school inputs than low-SES families (l):

|∂F_h/∂S| > |∂F_l/∂S|.

As high-SES parents compensate more of the lost school inputs than low-SES parents, inequality in educational output will increase in the SES dimension.
2 For additional descriptive evidence on overall learning engagement of students during the school closures in Germany in specific samples, see Anger et al. (2020) and .
3 The exposition here assumes that school closures entail the same reduction in school inputs for all students. Another way in which school closures could affect educational inequality is that the decline in effective school inputs may differ for different students, e.g., when high-SES parents are more likely to lobby for or support the implementation of better distance-teaching measures or when schools implement specific measures to reach out to low-SES or low-achieving students. Such mechanisms would give rise to differences in the extent to which schools compensate the lack of in-person teaching by other school inputs in one way or the other.
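The SES compensation mechanism can be made concrete with a small numeric sketch under an additive production function. Everything below (the input levels, the 20% and 60% compensation rates, and the function names) is a hypothetical illustration chosen only to make the mechanism visible; none of it is taken from the paper's empirical analysis:

```python
# Illustrative sketch of the SES compensation mechanism (hypothetical numbers).

def learning(ability, family_input, school_input):
    """Additive education production function: delta-Y = A + F + S."""
    return ability + family_input + school_input

def family_response(base_family_input, school_drop, compensation_rate):
    """Families replace a fraction of the lost school input at home."""
    return base_family_input + compensation_rate * school_drop

SCHOOL_OPEN, SCHOOL_CLOSED = 5.0, 1.0      # assumed effective school input levels
school_drop = SCHOOL_OPEN - SCHOOL_CLOSED

# Two otherwise identical students; assume high-SES parents compensate
# 60% of the lost school input, low-SES parents only 20%.
for ses, rate in [("low-SES", 0.2), ("high-SES", 0.6)]:
    before = learning(2.0, 1.0, SCHOOL_OPEN)
    during = learning(2.0, family_response(1.0, school_drop, rate), SCHOOL_CLOSED)
    print(f"{ses}: learning before={before:.1f}, during closure={during:.1f}")
```

Under these assumed rates, both students learn equally while schools are open, but a gap of 1.6 units opens during closures, driven entirely by the differential family response (|∂F_h/∂S| > |∂F_l/∂S|).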
Here, we emphasize another dimension of inequality, the one between students of different initial achievement. The sharp decline in teacher inputs that defines school closures implies the necessity of self-regulated learning. Outside the school context, students must acquire and understand the academic content more independently, without the support of trained educators. Given dynamic complementarities in the skill formation process (e.g., Cunha et al. 2006, Cunha and Heckman 2007, and Cunha et al. 2010), the effectiveness of self-regulated learning will depend on individual students' ability and prior achievement. As a consequence, the presence or absence of school inputs, in particular teachers, will affect the production elasticities of students' own prior achievement. The easiest way to conceptualize this aspect is to depict the extent to which students with different levels of initial achievement A can add to their learning as a negative function of the extent of school inputs:

∂²f/(∂A_i ∂S_i) < 0.

That is, the extent to which high-achieving students acquire larger learning gains compared to low-achieving students will be larger in home schooling than in classroom teaching because high-achieving students have a better skill base for self-regulated learning. As a consequence, school closures are expected to widen educational inequality along the achievement dimension.
To the extent that family SES and students' initial achievement are correlated, the two described mechanisms will exacerbate each other: Socioeconomic differences in family inputs may be one driver for the learning differences between low-and high-achieving students, and differences in initial achievement may be one driver for learning differences between children from low-and high-SES backgrounds.
In our empirical application, we proxy for students' educational outcomes by the amount of learning time as captured in a time-use survey. For the very reasons discussed, one may expect children from higher-SES families and higher-achieving students to acquire more skills per hour of learning at home than their counterparts. In this case, the true effects of school closures on the inequality in students' skill acquisition along these two dimensions are likely underestimated by any estimated effects on learning time. The same is true when disadvantaged children are more likely to substitute the reduced learning time by other activities that are otherwise detrimental rather than conducive to child development.
Institutional background
Germany reported its first official COVID-19 case in late January 2020. As infection numbers continued to grow over the following weeks, federal and local governments adopted a broad range of measures to slow down the spread of the virus, such as social-distancing requirements, contact limitations, quarantine after travelling, and closures of shops and restaurants. A first district with a local spike in infections closed its schools on February 28. 4 On March 13, 2020, the 16 federal states closed all educational institutions throughout Germany (Anger et al., 2020). Only young children (up to age 12) of parents who both work in so-called system-relevant occupations (e.g., health, public safety, public transportation, and groceries) were exempt and could attend emergency services in schools (Notbetreuung). The implementation of emergency services varied across the federal states. In April, the first states began relaxing the requirements for emergency-service attendance, e.g., by expanding the list of system-relevant occupations, including families in which only one parent worked in such an occupation, as well as children of single parents. Children admitted to emergency services were usually not taught regularly, but only supervised.
There was no standardized concept to implement distance teaching during the closures. The state ministers of education also did not formulate specific rules on which subjects should be prioritized during school closures. Instead, decisions regarding the organization of distance-teaching activities were left to the discretion of schools and teachers. Regardless of their specific subjects, all teachers were generally expected to engage in distance teaching. While many schools formally implemented certain distance-teaching activities, in practice teachers' activities were limited and left many students uninstructed (Anger et al., 2020). 5 Distance-teaching activities were further undermined by the lack of technical equipment in the schools and at students' homes. 6

4 This section provides an overview of German school policies during the COVID-19 pandemic between March and June 2020. See Appendix B for some general facts about the German school system.
5 A survey of teachers found that instruction was mostly limited to sending out assignment sheets: Less than half of teachers surveyed provided students with explainer videos, and online instruction via video was provided by fewer than one in five teachers (Bosch Stiftung, 2020).
6 Technical problems in distance teaching are not surprising in the German context: According to the European Commission (2019), the share of highly digitally equipped schools in Germany is substantially lower than the EU average (e.g., 9% versus 35% at ISCED-level 1 institutions; 48% versus 72% at ISCED-level 3 institutions). In addition, the teacher survey by shows that 56% disagree with the statement that the technical capacity at their school is sufficient for web-based formats.
With regard to student assessments, the states jointly decided that school exit exams should take place despite the pandemic. Most states postponed examinations for high-school diplomas (Abitur) from March to April or May. Unlike final exams, standardized student assessments scheduled for 2020 have been canceled because of the pandemic. Thus, no data are available so far to assess the impact of school closures on students' standardized test scores in Germany. 7 In late April 2020, education ministers decided to gradually re-open schools, with starting dates and procedures differing across states. Accompanied by political controversies given the continued risk of COVID-19 outbreaks, schools initially re-opened only for graduation classes, and with strict hygiene rules such as compulsory mouth-nose masks and social distancing. 8 Partial school operations, usually with alternating halves of students per classroom in daily or weekly shifts, were successively expanded to other grade levels during May and June (see Appendix Table A1 for the timing of school re-openings by state and class type). Ultimately, most students had at least a few weeks of in-person teaching before the summer break. Many students lost up to twelve weeks of in-person classroom teaching as a result of the school closures, equivalent to one third of a school year (Woessmann, 2020). Unfortunately, the education ministries do not provide more specific information about the exact number of weeks during which in-person classes were canceled during the school closures in spring 2020.

7 For details, see https://www.kmk.org/presse/pressearchiv/mitteilung/detail/News/kmk-pruefungen-finden-wie-geplant-statt.html and https://www.kmk.org/presse/pressearchiv/mitteilung/detail/News/kmk-iqb-bildungstrend-im-primarbereich-verschoben-teilnahmeverpflichtung-anvera-3-und-vera-8-auf.html [accessed June 2, 2021]. Student achievement tests that were scheduled for 2020 but had to be canceled include the IQB Bildungstrend, VERA 3, and VERA 8 for grades three, four, and eight.
8 Teachers in particular were skeptical about the re-opening of schools. For example, when the federal state of Hesse announced it would return to normal school operations in all primary schools starting June 22, the teachers' union Gewerkschaft Erziehung und Wissenschaft (GEW) called this decision "unreasonable" (see https://www.gew-hessen.de/bildung/schule-fachgruppen/grundschulen/details/regelbetrieb-fuer-hessischegrundschulen-ab-22-juni0 [accessed June 16, 2021]). Similarly, the German Teachers' Association repeatedly warned against opening schools too quickly (see https://www.lehrerverband.de/warnung-schuloeffnungen [accessed June 16, 2021]).
9 The parent questions were quite detailed and therefore mentally taxing and time consuming. To minimize the risk that survey fatigue undermines data quality, parents with more than one child were only asked about their youngest school-aged child. Studying the youngest child helps to focus on the challenges of self-regulated learning (which are arguably greater for younger children) and on those whose returns to educational investments tend to be highest (e.g., Cunha et al. 2006).
After the summer break in August/September 2020, schools opened for all students. However, there were no universal guidelines yet on how to continue school operations through distance teaching in the event of future infection hikes. To the best of our knowledge, we provide the first encompassing quantitative assessment of distance-teaching activities during the school closures in Germany.
Research design and data collection
Using a survey of parents (Section 3.1), we elicit time-use data on a broad range of students' activities for the periods both before and during the COVID-19-related school closures (Section 3.2), complemented by information on parents' and schools' support activities.
The survey
Our survey of parents of school children was fielded as part of the ifo Education Survey 2020, which provides a representative sample of the German population aged 18 to 69 years. Carried out between June 3 and July 1, 2020, by the survey company Respondi via online access panels, the total sample consisted of 10,338 respondents. From the total sample, we asked all parents of school-aged children (N = 1099) to answer a series of questions on their youngest school-aged child before and during the COVID-19-related school closures. 9 As such, the subsample is a convenience sample of parents with students in all types of primary and secondary schools. However, due to the representativeness of the overall sample, it should provide a very good fit for students in Germany. In fact, comparing parental and child characteristics of our analysis sample to all school children in the representative German Microcensus 10 shows that the two samples are very similar in terms of observables (Appendix Table A2), raising confidence in the generalizability of the results. 11 The sociodemographic characteristics of the students and their surveyed parent (Appendix Table A3) indicate an average student age in the sample of 12.5 years and a rather even gender split. The sample is roughly evenly distributed between students in primary (grades 1-4), upper-track secondary (Gymnasium), and other types of secondary school. Responding parents are also roughly evenly split by gender, and 27% hold a university degree.
To categorize students as low- or high-achievers, we asked parents about their child's school grades in mathematics and German. 12 According to their parents, 15.7% and 12.1% of students in our sample have grade 1 (the best grade) in mathematics and German, 7 For details, see https://www.kmk.org/presse/pressearchiv/mitteilung/detail/News/kmk-pruefungen-finden-wie-geplant-statt.html and https://www.kmk.org/presse/pressearchiv/mitteilung/detail/News/kmk-iqb-bildungstrend-im-primarbereich-verschoben-teilnahmeverpflichtung-anvera-3-und-vera-8-auf.html [accessed June 2, 2021]. Student achievement tests that were scheduled for 2020 but had to be canceled include the IQB Bildungstrend, VERA 3, and VERA 8 for grades three, four, and eight. 8 Teachers in particular were skeptical about the re-opening of schools. For example, when the federal state of Hesse announced it would return to normal school operations in all primary schools starting June 22, the teachers' union Gewerkschaft Erziehung und Wissenschaft (GEW) called this decision "unreasonable" (see https://www.gew-hessen.de/bildung/schule-fachgruppen/grundschulen/details/regelbetrieb-fuer-hessischegrundschulen-ab-22-juni0 [accessed June 16, 2021]). Similarly, the German Teachers' Association repeatedly warned against opening schools too quickly (see https://www.lehrerverband.de/warnung-schuloeffnungen [accessed June 16, 2021]). 9 The parent questions were quite detailed and therefore mentally taxing and time consuming. To minimize the risk that survey fatigue undermines data quality, parents with more than one child were only asked about their youngest school-aged child. Studying the youngest child helps to focus on the challenges of self-regulated learning (which are arguably greater for younger children) and on those whose returns to educational investments tend to be highest (e.g., Cunha et al. 2006).
10 Research Data Centres of the Federal Statistical Office and the statistical offices of the Länder, Microcensus, census year 2015. 11 Cases where parents reported that the child had zero hours of schooling on a typical weekday before Corona were excluded from the analysis sample as they cannot be identified as students. 12 The question was worded as follows: "What grades does your youngest child receive in the main subjects (mathematics and German) most frequently?" Respondents reported a separate grade for mathematics and German on the German grade scale (from 1="very good" to 6="failed").
respectively; 34.6% and 41.3% grade 2; 26.4% and 28.9% grade 3; 10.4% and 6.2% grade 4; and 2.3% and 0.6% grade 5. 13 Computing the median of the average grade in the two subjects separately for the three school types, we classify students at or above this median as high-achievers (55.5%) and those below the median as low-achievers (44.5%). 14 Thus, our achievement measure captures children's previous educational performance relative to other children in the same school type. A regression of a high-achiever indicator on sociodemographic characteristics (column 2 of Appendix Table A3) indicates few significant observable differences between low- and high-achieving students, with the exceptions that high-achievers are more likely to come from high-income households, to have the parent working from home during Corona, and to be younger. Child gender, family status, and parent's work hours do not significantly predict better student grades. We control for these background variables in our regression analysis. 15
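The median-based achievement classification can be sketched as follows; this is an illustrative reconstruction, not the authors' code, and all data and field names are made up. On the German scale 1 is the best grade, so "at-or-above-median performance" corresponds to an average grade less than or equal to the school-type median:

```python
# Illustrative sketch of the median-based classification: within each
# school type, compute the median of the average grade in mathematics
# and German, then flag students whose average grade is at or above the
# median performance (i.e., numerically <= the median, since 1 is best).
from collections import defaultdict
from statistics import median

def classify_achievers(students):
    """Add a 'high_achiever' flag to each student record.

    students: list of dicts with 'school_type', 'grade_math', 'grade_german'.
    """
    by_type = defaultdict(list)
    for s in students:
        s["avg_grade"] = (s["grade_math"] + s["grade_german"]) / 2
        by_type[s["school_type"]].append(s["avg_grade"])
    medians = {t: median(g) for t, g in by_type.items()}
    for s in students:
        s["high_achiever"] = s["avg_grade"] <= medians[s["school_type"]]
    return students

# Toy data: three primary-school students with average grades 1.5, 3.0, 2.0;
# the median average grade is 2.0, so the first and third are high-achievers.
sample = [
    {"school_type": "primary", "grade_math": 1, "grade_german": 2},
    {"school_type": "primary", "grade_math": 3, "grade_german": 3},
    {"school_type": "primary", "grade_math": 2, "grade_german": 2},
]
flags = [s["high_achiever"] for s in classify_achievers(sample)]  # [True, False, True]
```

Computing medians within school types, as the paper does, makes the flag a relative measure, so each school type contributes both low- and high-achievers.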
Elicitation of time-use information before and during COVID-19
The core of our analysis is detailed time-use data on students' activities for the period of the COVID-19-related school closures. To be able to investigate whether any differences between low-and high-achieving students already existed before the closures or whether they emerged with the closures, we also elicited the same time-use battery retrospectively for the time before the school closures.
Inspired by the time-use module in the mother-child questionnaire of the German Socio-Economic Panel Study (Schröder et al., 2013), we carefully designed the time-use battery to capture relevant activities that students engaged in before and during the school closures. Parents had to specify how many hours (rounded to the nearest half hour) their child spent during a typical workday on each of the following activities: 16 1. School attendance; 2. Learning for school; 3. Reading or being read to; 4. Playing music and creative work; 5. Physical exercise; 6. Watching TV; 7. Gaming on computer or smartphone; 8. Social media; 9. Online media; and 10. Time-out (e.g., relaxing). We also provided an open field to specify "Another activity." 17 To be able to study whether and how parents adapted their home-schooling activities vis-à-vis the school closures, we also elicited how much time parents spent together with their child on the respective activities.
For our analysis, we group the activities into three categories: school-related activities (activities 1 and 2), other activities generally deemed conducive to child development (activities 3-5), and activities generally deemed detrimental (activities 6-9). Our categorization is reflected in parents' beliefs about how beneficial each activity is for their child's development, which we elicited after the time-use batteries. Almost all parents consider the two school-related activities (97 and 93%) and the conducive activities (82-95%) beneficial (Appendix Table A4). In contrast, only 22-34% think that the different detrimental activities are beneficial. Importantly, these assessments do not differ substantially between parents of low- and high-achieving students, implying that any difference in time use cannot be attributed to different beliefs about the activities' developmental effects.
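The grouping of the ten surveyed activities into the three analysis categories can be sketched as a simple mapping; the activity labels below are paraphrased and illustrative (activity 10, "time-out", belongs to none of the categories), and the 12-hour top-coding mirrors the survey's stated outlier treatment:

```python
# Hypothetical mapping of the surveyed activities into the paper's three
# analysis categories (labels paraphrased; "time-out" is deliberately
# excluded, matching the paper's grouping of activities 1-9 only).
CATEGORIES = {
    "school": ["school attendance", "learning for school"],
    "conducive": ["reading", "music/creative work", "physical exercise"],
    "detrimental": ["tv", "gaming", "social media", "online media"],
}

def category_hours(hours_by_activity):
    """Sum reported daily hours into the three analysis categories;
    hours are top-coded at 12 h per activity, as in the survey."""
    return {
        cat: sum(min(hours_by_activity.get(a, 0.0), 12.0) for a in acts)
        for cat, acts in CATEGORIES.items()
    }

# A child attending emergency care 0.9 h, learning 2.7 h, watching TV 2 h:
example = category_hours(
    {"school attendance": 0.9, "learning for school": 2.7, "tv": 2.0}
)
```

Aggregating at the category level, as here, is what allows the paper to compare "school-related" against "conducive" and "detrimental" time side by side.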
Complementing our time-use data, we also elicited parents' assessment of how the school closures affected their family and the learning environment at home, as well as information on the distance-teaching activities undertaken by schools. The five questionnaire items on the home environment capture topics such as how the family coped with the situation, whether it was a psychological burden for the child and the parents, and an overall assessment of the child's home learning environment (see notes to Appendix Table A7 for question wordings). Schools' distance-teaching activities during the school closures were elicited by seven questionnaire items on activities such as shared remote lessons, individual teacher contacts, use of educational videos or software, and provision of work sheets (see notes to Table 4 for question wordings).
The survey-based, partially retrospective elicitation of information about children from their parents raises issues of validity and interpretation that we discuss in Section 7 below. There, we also discuss evidence that several patterns in our data are consistent with alternative data sources, which raises confidence in the validity of our main findings. 13 Reassuringly, the grade distribution in our sample is similar to the distribution in the youth questionnaire of the 2018 wave of the German Socio-Economic Panel Study (GSOEP). Detailed results are available upon request. 14 Because of the rather coarse grading in primary school (33% of students have the median average grade of 2.0), a relatively large fraction of primary-school students (64%) falls into the category of at-or-above median grades, compared to 51% and 53% of upper-track and other secondary-school students, respectively. 116 students (10.6%) had to be excluded from this sub-group analysis because they do not receive numerical grades. Most of them (106) are in primary school, where children usually do not receive numerical grades in the early grade levels. In bounding analyses, we assigned children with missing grade information hypothetical achievement levels (either low- or high-achieving). Reassuringly, our main finding that the school closures increased the learning-time gap by student achievement proves robust in this attrition analysis (detailed results available upon request). 15 The small number of observable differences likely reflects that the analysis neglects any variation between school types and that it is based on a multivariate model that holds the other variables constant.
In fact, regressing the high-achievement dummy on each characteristic separately (accounting only for school-type dummies) yields the following significant coefficients (p<0.05) in addition to the ones in column 2 of Appendix Table A3: parental university degree (positive), child not in household (negative), parental work hours (positive), and household income (positive). Detailed results are available upon request. 16 Question wording: "The following questions are about your youngest child attending school. What activities did your child do on a typical workday (Monday to Friday) before [during] the several weeks of Corona-related school closures?" The sum of reported hours spent per day was prevented from exceeding 24 hours. In our analysis, outliers in any answer category are top-coded at 12 h. 17 In cases where the activity specified in the open field corresponded to existing categories, we re-coded the respective category accordingly. Notes: Average hours spent on different activities on a typical workday. During Corona: period of school closures due to COVID-19. Before Corona: period before the school closures. Low- versus high-achievers: students with an average grade in mathematics and German below versus at-or-above the median for their respective school type. Std. err.: standard errors stemming from regressions of hours spent on each activity on a high-achiever indicator. Significance levels: *** p<0.01, ** p<0.05, * p<0.1. Data source: ifo Education Survey 2020.
Time use of low- and high-achieving students before and during the school closures
This section reports results on how the COVID-19 school closures differentially affected low- and high-achieving students' learning time (Section 4.1), as well as their time investment in other conducive and detrimental activities (Section 4.2).
Learning time
To be able to investigate how the gap in learning time between low- and high-achieving students changed over time, we elicited information on time use for school-related activities on a typical workday both before and during the school closures. The school-related activities include the two sub-categories of attending school and learning for school at home.
In the full sample, the school closures more than halved students' learning time. Before the school closures, students spent on average 7.4 h per day on school-related activities (Appendix Table A5). This number dropped to 3.6 h during the closures. This reduction is due to a large decline in school attendance (from an average of 5.9 h to 0.9 h per day, the remaining attendance reflecting emergency services) that is only partly offset by a much smaller increase in time spent on learning for school (from 1.5 to 2.7 h).
Differentiating between low- and high-achieving students reveals that the school closures strongly increased educational inequality. Columns 5-8 of Table 1 indicate that learning time before the school closures did not differ economically or statistically significantly between students initially achieving below versus at-or-above the median (7.4 versus 7.5 h per day). 18 By contrast, columns 1-4 show that high-achieving students spent 0.5 h more on school-related activities during the closures (3.4 versus 3.9 h, p<0.01). 19 Consequently, the increase in the learning-time gap between low- and high-achieving students relative to pre-closure times (columns 9-12) is a significant 0.4 h per day (− 4.1 versus − 3.7 h for low- and high-achievers, respectively; see also Fig. 1). Beyond the binary achievement indicator of our baseline analysis, Appendix Fig. A1 shows that the relationship between the reduction in learning time and student achievement is visible across the entire grade spectrum. For example, learning time decreases by 3.6 h in the top and 4.2 h in the bottom of the five grade categories. Distinguishing between the two sub-categories of school-related activities, the decrease in school attendance was similar for low- and high-achievers (− 5.1 versus − 5.0 h), but low-achievers increased home learning less than high-achievers (+1.0 versus +1.4 h).
Fig. 1.
Activities of low- and high-achieving students before and during the school closures. Notes: Average hours spent on different activities on a typical workday. During Corona: period of school closures due to COVID-19. Before Corona: period before the school closures. Low- versus high-achievers: students with an average grade in mathematics and German below versus at-or-above the median for their respective school type. See Table 1 for details. Data source: ifo Education Survey 2020. 18 Throughout, average results for the full sample are not a simple weighted average of high- and low-achieving students because they include students who do not yet receive grades. 19 The difference in learning time between low- and high-achieving students during the school closures is visible throughout the entire distribution (Appendix Table A6). For example, 43% of low-achievers spent at most two hours per day on school-related activities, compared to 33% of high-achievers. Only 22% versus 30%, respectively, spent more than four hours per day on learning. For comparison, before the school closures 89% of students spent at least five hours per day on learning.
Going beyond mean differences between low- and high-achieving students, Fig. 2 depicts the respective distributions of learning-time losses for the two groups. The distribution of low-achievers is consistently shifted to the left (towards greater learning-time losses) compared to high-achievers. A two-sample Kolmogorov-Smirnov test rejects the null hypothesis that learning-time losses do not differ by student achievement (p = 0.014). Thus, average differences in learning-time losses as reported in Table 1 are not driven by extreme outliers but are rather observable throughout the distribution.
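The two-sample Kolmogorov-Smirnov comparison rests on the statistic D = max over x of |F1(x) - F2(x)|, the largest gap between the two empirical distribution functions. A minimal pure-Python sketch of the statistic on illustrative data follows (in practice one would use scipy.stats.ks_2samp, which also returns the p-value the paper reports):

```python
# Two-sample Kolmogorov-Smirnov statistic on illustrative data.
def ks_statistic(a, b):
    """Largest absolute gap between the empirical CDFs of samples a and b."""
    a, b = sorted(a), sorted(b)
    na, nb = len(a), len(b)
    i = j = 0
    d = 0.0
    while i < na and j < nb:
        x = min(a[i], b[j])
        while i < na and a[i] == x:   # step past all ties in sample a
            i += 1
        while j < nb and b[j] == x:   # step past all ties in sample b
            j += 1
        d = max(d, abs(i / na - j / nb))
    return d

# Toy learning-time losses (hours) for two groups; at x = 3 the empirical
# CDFs are 0.6 and 0.2, giving D = 0.4.
d_example = ks_statistic([2, 3, 3, 4, 5], [3, 4, 4, 5, 6])
```

Because D depends on the whole distribution rather than the means, a significant test supports the paper's claim that the gap is not driven by a few extreme outliers.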
The learning-time gap between low- and high-achieving students can hardly be accounted for by other observed student and parent characteristics. Table 2 shows results of regressions of the learning time during the school closures on a high-achiever dummy, learning time before the school closures, and a series of student and parent characteristics: the student's school type, age, gender, a single-child dummy, the responding parent's gender, education, single-parent status, home-office status and work hours during the school closures, partner at home during the school closures, household income, and a West-Germany dummy. In all cases, including the additional variables leaves the difference between high- and low-achieving students highly significant and of similar magnitude as the unconditional gap. 20 Including all controls simultaneously (column 14) reduces the difference in learning time between high- and low-achieving students by less than one quarter. Thus, most of the large gap does not reflect differences in the observed characteristics, but rather seems to capture the genuine achievement dimension.
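A stylized version of this regression, with made-up numbers and only two regressors (the high-achiever dummy plus pre-closure learning time), can be sketched as follows; the paper's actual specification adds the many further student and parent controls listed above:

```python
# Stylized sketch of the Table-2-type regression: learning time during the
# closures on a high-achiever dummy, controlling for pre-closure learning
# time. Coefficients are estimated by ordinary least squares.
import numpy as np

def gap_regression(learn_during, high_achiever, learn_before):
    """Return the OLS coefficient on the high-achiever dummy."""
    X = np.column_stack([
        np.ones(len(learn_during)),          # intercept
        np.asarray(high_achiever, float),    # 1 = high-achiever
        np.asarray(learn_before, float),     # pre-closure learning hours
    ])
    beta, *_ = np.linalg.lstsq(X, np.asarray(learn_during, float), rcond=None)
    return beta[1]

# Toy data constructed so that the conditional gap is exactly 0.5 h per day:
gap = gap_regression(
    learn_during=[3.6, 3.9, 3.1, 3.4],
    high_achiever=[1, 1, 0, 0],
    learn_before=[7, 8, 7, 8],
)
```

The key reading of Table 2 is that adding controls barely moves this dummy coefficient, which is why the paper interprets the gap as a genuine achievement effect rather than an artifact of observed family background.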
Other conducive and detrimental activities
Substituting for the reduced learning time, both low- and high-achieving students only mildly increased the time spent on other activities that are generally viewed as conducive to child development. During the school closures, high-achievers (3.4 h) spent significantly more time on reading, playing music, creative work, or physical exercise than low-achievers (2.8 h; see middle panel of Table 1). However, most of this gap existed already before the closures, so that the difference in the increase in these conducive activities is only marginally significant (+0.2 versus +0.4 h for low- and high-achievers, respectively, p<0.1).
By contrast, low-achieving students particularly used the released time to expand activities such as gaming on the computer or consuming social media. During the school closures, low-achieving students spent 6.3 h on activities such as watching TV, playing computer games, and consuming social and online media that are generally deemed detrimental to child development (bottom panel of Table 1), nearly three hours more each day than on school-related activities. In comparison, high-achievers spent 1.5 h less on the detrimental activities. Roughly half of this gap already existed before the school closures, so that the increase in time spent on detrimental activities was 0.7 h larger for low- compared to high-achieving students (+1.7 versus +1.0 h). The increase is mostly driven by widening gaps in computer gaming and social-media use, each of which increased by 0.3 h.
Fig. 2. Distribution of reduction in learning time by student achievement
Notes: Difference in average hours spent on school activities on a typical workday between the period before the school closures and the period of school closures due to COVID-19. Low- versus high-achievers: students with an average grade in mathematics and German below versus at-or-above the median for their respective school type. A two-sample Kolmogorov-Smirnov test rejects equality of the two depicted distributions with a p-value of 0.014. Data source: ifo Education Survey 2020. 20 In fact, the only noteworthy reduction does not come from any of the measures of socioeconomic background or family situation, but rather from student age (column 3), reflecting that younger students tend to get better grades and had a smaller reduction in learning time (due to lower before-Corona levels).
Table 2
Gap in learning time between low- and high-achieving students conditional on student and parent characteristics. Notes: Dependent variable: average hours spent on "attending school" and "learning for school" on a typical workday during the period of school closures due to COVID-19. Before Corona: period before the school closures. Low- versus high-achievers: students with an average grade in mathematics and German below versus at-or-above the median for their respective school type. Significance levels: *** p<0.01, ** p<0.05, * p<0.1. Data source: ifo Education Survey 2020.
Together, the results indicate that the school closures exacerbated educational inequality along the achievement dimension. The findings suggest that COVID-19 (i) increased the gap in learning time (and, mildly, in other conducive activities) between high- and low-achieving students and (ii) increased detrimental activities especially among low-achieving students. Since low-achieving students are, almost by definition, less effective in turning learning-time inputs into knowledge and skills, we interpret the pronounced effect of the school closures on students' learning-time gaps as a lower bound for the impact on gaps in actual learning. 21
Compensating activities by parents and schools
This section investigates to what extent parents (Section 5.1) and schools (Section 5.2) acted to compensate for the increased gap in learning time between low-and high-achieving students.
Parental support
While parents of both low- and high-achieving students increased the time they spent together with their child on learning during the school closures, both the level and the increase were smaller for low-achievers. 22 During the school closures, low-achievers spent 0.3 h per day less learning together with their parents than high-achievers (0.9 versus 1.2 h, p<0.01; Table 3). While part of this gap already existed before the closures, it further increased by 0.1 h during the school closures (p<0.1). Thus, even though parents increased their learning involvement with their children by half an hour per day during the closures, this aggravated, rather than compensated for, the increase in educational inequality.
By contrast, the increase in time spent together with parents on other conducive and on detrimental activities did not differ statistically significantly between low- and high-achievers. Still, parents of high-achieving students also spent significantly more time with their child on other conducive activities both before and during the school closures. 23
Parents' assessment of the environment at home reinforces the finding that low-achieving students were more affected by the COVID-19 school closures. While most parents (87%) think that their family has coped well with the period of school closures (Appendix Table A7), parents of low-achieving students evaluate the situation slightly worse than parents of high-achieving students (85 versus 90%, p<0.05). There is no significant difference between low- and high-achieving students in whether parents report that the phase of the school closures was a psychological burden for the child or for themselves (38% each on average). By contrast, parents of low-achievers are slightly more likely than parents of high-achievers to report that during the school closures, they argued more than usual with their child (30 versus 24%, p<0.1). They also assess the overall learning environment at home (e.g., in terms of available computers or working space) as worse. These gaps hardly change when conditioning on observable child and parent characteristics (column 6).
School support
During the closures, schools and teachers carried out only a fraction of their usual teaching operations via distance teaching, which led to a drastic reduction in direct communication between teachers and students. Table 4 indicates that only 29% of students on average had online lessons for the whole class (e.g., by video call) more than once a week. Only 17% of students had individual contact with their teacher more than once a week. 24 The main teaching mode during the school closures was to provide students with exercise sheets for independent processing (87%), 25 although only 37% received feedback on the completed exercises more than once a week. School activities strongly correlate with children's learning time during the school closures: Children in schools with above-median intensity of distance teaching (with respect to online lessons, individual teacher-student contacts, and feedback on exercises) spent a significant 0.4 h more per day learning for school (2.92 versus 2.55 h).
The distance-teaching measures over-proportionally reached high-achieving students. Low-achievers were 13 percentage points less likely than high-achievers to be taught in online lessons and 10 percentage points less likely to have individual contact with their teachers (column 4). Low-achievers were also less likely to be provided with educational videos or software and to receive feedback on their completed tasks. These gaps do not change noticeably when conditioning on child and parental characteristics (column 6). 21 Consistently, parents of low-achievers are 14 percentage points more likely than parents of high-achievers to report that their child learned "much less" during the school closures than usual (Appendix Table A7). 22 The importance of parental inputs for children's skill development is underscored by the finding that children's educational activities are particularly productive when parents are involved (Fiorini and Keane, 2014). 23 In additional analyses, we find that parent involvement in learning and other conducive activities before and during the school closures decreases with child age, as does the increase in parental involvement in these activities induced by the school closures (detailed results available upon request). 24 Across the five answer categories, 6 (4)% had joint online lessons (individual teacher contact) on a daily basis, 23 (14)% several times a week, 14 (16)% once a week, 11 (22)% less than once a week, and 45 (45)% never. 25 96% of students received exercises at least once a week. Notes: Average hours parents spent with their child on different activities on a typical workday. During Corona: period of school closures due to COVID-19. Before Corona: period before the school closures. Low- versus high-achievers: students with an average grade in mathematics and German below versus at-or-above the median for their respective school type. Std. err.: standard errors stemming from regressions of hours spent on each activity on a high-achiever indicator. Significance levels: *** p<0.01, ** p<0.05, * p<0.1. Data source: ifo Education Survey 2020. Notes: Probability that the respective activity was conducted "daily" or "several times a week" (residual category includes "once a week," "less than once a week," and "never"). Question wording: "Which activities did the teachers/school of your child carry out during the several weeks of Corona-related school closures? Shared lessons for the whole class (e.g., by video call or telephone); Individual contact with my child (e.g., by video call or telephone); My child should watch provided educational videos or read texts; My child should use educational software or programs; My child should work on provided exercises; My child had to submit completed exercises; Teachers gave feedback on the completed exercises." Low- versus high-achievers: students with an average grade in mathematics and German below versus at-or-above the median for their respective school type. Std. err.: standard errors stemming from regressions of an indicator that the respective activity was conducted at least several times a week on a high-achiever indicator. Conditional gap: see Table 2 for controls. Significance levels: *** p<0.01, ** p<0.05, * p<0.1. Data source: ifo Education Survey 2020. Notes: Average hours spent on different activities on a typical workday. During Corona: period of school closures due to COVID-19. Before Corona: period before the school closures. Low-ed: parents without a university degree. High-ed: parents with a university degree. Std. err.: standard errors stemming from regressions of hours spent on each activity on a high-ed and female indicator, respectively. Significance levels: *** p<0.01, ** p<0.05, * p<0.1. Data source: ifo Education Survey 2020. Thus, schools were not able to compensate for the adverse effects of the closures on educational inequality.
On the contrary, those students most in need of additional support to keep up their learning during the school closures were less likely to benefit from distance-teaching activities. 26
Other dimensions of inequality
This section investigates whether the school closures also amplified educational inequality along dimensions other than students' prior achievement, namely parents' educational background (Section 6.1) and students' gender and school type (Section 6.2).
Differences by parents' educational background
In the public debate, there is concern that the COVID-19-induced school closures could aggravate educational inequality between children from different socioeconomic backgrounds (e.g., UNESCO 2020b; European Commission, 2020). Family background has been shown to strongly impact students' educational success (e.g., Björklund and Salvanes 2011).
While children of university-educated parents invested more time in out-of-school learning activities before COVID-19 than children of parents without a university degree, the reduction in learning time during the school closures did not differ significantly between children of parents with (− 3.7 h per day) or without (− 3.8 h) a university degree (upper panel of Table 5). 27 While children of university-educated parents spent marginally significantly more time on school-related activities during the closures (3.8 versus 3.55 h), most of this gap already existed before COVID-19. 28 Children of university-educated parents did increase their time on other conducive activities more. They also spent less time on detrimental activities both before and during the closures, but the change over time was not significantly different from children of parents without a university degree.
At the same time, there are strong differences in school support during the closures by family background. For instance, children without university-educated parents were 12 percentage points less likely than children with university-educated parents to be taught in online lessons more than once a week, and 15 percentage points less likely to have individual contact with their teachers more than once a week (not shown). This pattern raises concerns that the school closures might have exacerbated inequality in student achievement by children's socioeconomic background, even though the learning-time gap did not widen.
Differences by students' gender and school type
Analysis by student gender indicates that the school closures reduced boys' learning time more than girls'. Before the closures, there was no significant gender difference in learning time (lower panel of Table 5). By contrast, boys spent half an hour less than girls learning at home during the school closures (3.4 versus 3.9 h, p<0.01). Boys mostly replaced learning time with playing computer games, whereas girls mostly increased their time on social media, reinforcing gender differences in both dimensions. The overall gender effect of the closures may exacerbate the "boy crisis" in education (e.g., Cappelen et al., 2019).
There are also noteworthy differences between students in primary, upper-track secondary (Gymnasium), and other secondary schools. During Corona, primary-school students were more likely to attend emergency services in schools, which were open only to younger children (Appendix Table A8). Upper-track secondary-school students spent more time learning at home (3.2 h) than their lower-track and primary-school counterparts (2.5 h each). Still, in absolute terms, both types of secondary-school students lost learning time to a similar extent. Primary-school students expanded other conducive activities (in particular, physical exercise) more than secondary-school students, who mostly expanded gaming and social media.
Discussion
The detailed time-use survey data provide novel and otherwise unavailable information on students' learning during the COVID-19-induced school closures. Still, several points should be kept in mind in interpreting the findings. First, students' time spent on learning and other activities are imperfect proxies for how much they actually learn (e.g., Hanushek and Woessmann 2008). Arguably, high-achieving students are more effective in turning learning time into knowledge and skills (see Section 2.1). In this case, our results likely constitute a lower bound for the impact of school closures on skill inequality by students' prior achievement. 29 26 Consistently, the share of parents reporting to be satisfied with their school's activities during the school closures was 13 percentage points lower for low- than for high-achieving students (Appendix Table A7). 27 Consistently, learning time during the school closures also did not differ between students with above- and below-median household income. Due to longer school attendance before the closures, the decline was actually larger for students from high-income households (results available upon request). 28 We find the same qualitative pattern of results when using a more fine-grained categorization of parental education (no degree, vocational degree, advanced vocational degree (e.g., Meister), and university degree). Detailed results are available upon request. Second, survey responses could be subject to social-desirability bias. For instance, parents may inflate reported learning time because they think it is considered socially appropriate. However, research shows that social desirability does not yield major bias in anonymous online surveys such as ours (e.g., Das and Laumann 2010). In fact, parents reported that during the closures, their child spent much more time on detrimental activities such as watching TV or computer gaming than on learning.
This pattern is inconsistent with a major influence of social-desirability bias on answering behavior. Furthermore, any remaining bias would imply that the large discrepancy between school-related and detrimental activities found in our data even underestimates the true difference.
Third, our analyses are partly based on retrospective reports on how much time children spent on different activities before the school closures. While we cannot rule out that selective memory leads to measurement error in the data (e.g., Zimmermann 2020), it is reassuring that the retrospective answers are plausible in the sense that reported hours spent in school before the closures correspond closely to the hours prescribed in the school curricula. Furthermore, our retrospective data closely resemble students' self-reported learning time elicited in the 2018 wave of the German Socio-Economic Panel Study (GSOEP), which further raises confidence in the validity of our retrospective time-use data (footnote 30). Fourth, the survey data could suffer from measurement error because parents do not know exactly how much time their child spends on different activities. However, only 21% of respondents state that both they and their partner worked at least half a day outside the home during the school closures. The relatively intense parent-child contact in most households increases parents' ability to monitor their child's activities, so most parents should be able to assess these activities reasonably well. Reassuringly, a survey of students in the final two grades of upper-track secondary school in eight German states by Anger et al. (2020) also finds that learning time during the school closures differs markedly by students' previous school grades, but not by parental educational background. This indicates that our results are unlikely to be driven by measurement error arising from parents' limited knowledge of their child's activities.
Fifth, survey fatigue can lead to respondents not answering some questions conscientiously. However, 500 of the 1099 parents in our sample used the provided open answer field to type in "another activity" in the time-use battery, which indicates that they were very conscientious in filling out the survey.
Finally, the extent to which our results for Germany are informative for other contexts is ultimately an empirical question that we cannot answer with our data. On the one hand, most countries were at least as affected by the COVID-19 pandemic as Germany, had broadly similar school-closure policies, had no previous experience with nation-wide school closures, and had no concepts in place for online school operations. Reports from many countries indicate that the organization of distance-teaching activities was challenging and caused major problems not only in Germany (e.g., Andrew et al. 2020, Chetty et al. 2020, Engzell et al. 2021, and Maldonado and Witte 2020). On the other hand, there is some indication that Germany lagged other countries in the classroom usage of digital technologies before the pandemic (e.g., Beblavý et al. 2019 and Fraillon et al. 2020), raising the possibility that some other countries may have fared better in providing online teaching for their students, particularly in supporting the low-achievers.
Conclusion
We present novel time-use data on the activities of more than 1000 school children before and during the COVID-19 school closures in Germany. On average, the school closures reduced students' learning time by about half. This reduction was significantly larger for low-achieving than for high-achieving students. Low-achieving students, especially, replaced learning time with detrimental activities such as watching TV and playing computer games, rather than with conducive activities. Neither parents nor schools compensated for the increased learning gap by students' prior achievement, and schools actually provided less support for low- than for high-achieving students. The reduction in students' learning time did not vary by parents' educational background (though children without university-educated parents received less school support during the closures), but it was larger for boys than for girls.
From a policy perspective, our results call for universal and binding distance-teaching concepts for school closures that are particularly geared towards low-achieving students. Leaving the decision over whether and how to maintain teaching operations during school closures at schools' or teachers' discretion has proven largely unsuccessful in our setting. In fact, proposals to instruct teachers to maintain daily contact with their students, require all schools to switch to online teaching if in-person classes are not possible, and enable online teaching by compulsory teacher training and providing digital equipment to students who cannot afford it have overwhelming majority appeal in the German electorate. Our results suggest that it is particularly the low-achieving students who suffer when support from teachers is lacking, so any attempt to support their learning when schools have to close is likely to reduce future educational inequality.
Footnote 29: In addition, an interesting interpretative question that remains unanswered from our analysis is what exact subjects were taught and at what intensity during the school closures. While some evidence speaks against a strong shift in teaching emphasis to core subjects such as mathematics or German (e.g., because teachers of all subjects were expected to engage in distance-teaching activities and because the majority of parents thinks their child learned "much less" than usual during the school closures), an in-depth analysis of distance-teaching curricula would be interesting for future research.
Footnote 30: The GSOEP asks 12-15-year-olds: "How much time do you usually spend on homework and studying for school?" Answer categories are less than half an hour a day, half an hour to less than 1 h a day, 1 to less than 2 h a day, 2 to less than 3 h a day, 3 to less than 4 h a day, and 4 h and more a day.
The average answer is 1.1 h of daily learning for school, compared to 1.5 h that parents of children in the same age range report in our sample. Importantly, the GSOEP data reveal no difference in learning time between low- and high-achieving students (using our grade-based classification), which is also in line with our results.
[Table notes: hours spent on "attending school" or "learning for school" on a typical workday during the period of school closures due to COVID-19; dummy=1 for respondents who say an activity is "very beneficial" or "rather beneficial" for the further development of their child (five-point scale from "not beneficial at all" to "very beneficial"); low- versus high-achievers: students with an average grade in mathematics and German below versus at-or-above the median for their respective school type; standard errors from regressions of a high-achiever dummy on hours in each category; conditional gap: see Table 2 for controls; significance levels: ***p<0.01, **p<0.05, *p<0.1; data source: ifo Education Survey 2020.]
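The grade-based low/high-achiever split used throughout these tables can be made concrete with a short sketch. The student records below are synthetic; the paper's actual data are parents' reports of the child's grades in mathematics and German, split at the median within each school type:

```python
from statistics import median

# Synthetic (school_type, math grade, German grade) records.
students = [
    ("primary", 2.0, 2.0), ("primary", 4.0, 3.0),
    ("upper", 3.0, 3.0), ("upper", 2.0, 3.0), ("upper", 5.0, 4.0),
]

def classify(records):
    """Low-achiever = average math/German grade below the median of the
    student's own school type (the paper's definition; at-or-above the
    median counts as high-achieving)."""
    avgs = [(s, (m + g) / 2.0) for s, m, g in records]
    med = {t: median(a for s, a in avgs if s == t)
           for t in {s for s, _ in avgs}}
    return [a < med[s] for s, a in avgs]

print(classify(students))  # [True, False, False, True, False]
```

Computing the median within school type (rather than over the full sample) matters because average grades are not comparable across tracks.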
Appendix B. General Overview of the German School System
To provide context for the presented results, this appendix briefly presents some stylized facts about the German school system. Germany's education system is decentralized, with each of the 16 states holding legislative and executive power over their respective school system. Although there are some differences between states, the general structure of the school system is similar across states. In general, enrollment in primary school is based on the catchment area in which a child lives. Based on their achievement in the fourth and final grade of primary school, children are usually sorted into one of two or three secondary-school tracks at age ten. The exact designations vary from state to state, but the possible tracks typically include a basic track (five or six years), a middle track (six years), and a high track (eight or nine years). The high track leads to the university entrance qualification (Abitur). Only a small share of schools in Germany (11 percent) are private (Destatis 2020), and many of these schools have ecclesiastic operators.
Educational inequality in Germany is quite high. For example, comparing PISA test scores of 15-year-olds in mathematics, students from families with low socioeconomic status (defined as being in the lowest decile of the PISA Index of Economic, Social and Cultural Status) …
[Table notes: rows 1-4 and 7 report the probability that a statement "fully applies" or "rather applies" (five-point scale from "does not apply at all" to "fully applies"); question wordings: "Our family coped well with the situation during the school closures."; "The phase of school closures was a great psychological burden for my child/for me."; "I argued with my child during the school closures more than usual."; "My child has learned much less during the school closures than usual in school." Row 5: average grade of the home learning environment on a 5-point scale (1="insufficient", 5="very good"); question wording: "How would you evaluate your child's learning environment at home during the period of several weeks of Corona-related school closure, e.g., in terms of available computers or space to work?" Row 6: probability that respondents are "very satisfied" or "satisfied" (five-point scale) with the activities the child's school carried out during the closures. Low- versus high-achievers: students with an average grade in mathematics and German below versus at-or-above the median for their respective school type; standard errors from regressions of the respective outcome on a high-achiever indicator; conditional gap: see Table 2 for controls; significance levels: ***p<0.01, **p<0.05, *p<0.1; data source: ifo Education Survey 2020.]
Numerical simulation of heat transfer properties of large-sized biomass particles during pyrolysis process
During the pyrolysis process of large particles, heat conduction within the particles cannot be ignored. In the present work, a numerical simulation model for the pyrolysis of biomass particles was established, which takes into account the conduction within the particles. Based on this model, the temperature distribution inside the particle during the pyrolysis process was determined and the effects of particle size, moisture content, and gas velocity on heat transfer characteristics were analyzed. The results showed that the temperatures at different positions of the particles along the inflow direction were quite different, and the maximum temperature difference inside the particles was about 146.7 K for a particle diameter of 10 mm and a velocity of 0.2 m/s. During the pyrolysis process of biomass particles, there were two peaks of the Nusselt number. The increase of moisture content prolonged the pyrolysis time. The pyrolysis time of particles with a moisture content of 15% was about 1.5 times longer than that of dry particles when the particle diameter was 10 mm. Increasing the particle size decreased the difference between the two peaks and increased the time interval between the two peaks. Increasing the gas velocity can improve the heat transfer, but the effect of very high gas velocity on improving the heat transfer is limited. The present study is of great importance for a detailed understanding of the pyrolysis process of biomass particles.
Introduction
Excessive emission of greenhouse gases leads to global warming. Biomass has good potential to replace energy from conventional sources [1]. In general, biomass includes forest wastes, municipal solid wastes, specially cultivated grasses, microalgae, food wastes, and many others [2]. Biomass can provide heat through direct combustion and is also a source of chemicals. As a carbon-neutral and renewable energy source, biomass utilization reduces the amount of biomass that must ultimately be disposed of and is an indispensable way to achieve the "double carbon" goal [3,4]. In this paper, biomass particles refer to wood particles, consisting of lignin, cellulose, and hemicellulose.
Pyrolysis constitutes a pivotal thermal conversion process in the realm of direct combustion, gasification, and rapid pyrolysis of solid biomass materials [5], and is one of the most important methods of biomass utilization. Compared to incineration, pyrolysis reduces the production of toxic compounds such as nitrogen oxides and sulfur oxides [6]. Before pyrolysis, biomass must be broken down into particles, whose sizes range from hundreds of microns to several centimeters [7,8]. Biomass is a poor conductor of heat, with a low thermal conductivity in the range of 0.21-0.71 W/(m⋅K) [9]. In most reactor-scale simulations, biomass particles are treated as isothermal spheres. The error introduced by this simplification is not negligible for large particles, since slow conduction within the particles leads to a lower temperature at the particle center [10]. For biomass particles within the thermally thick regime, different conversion rates occur within the particle [11], and the effects of intra-particle heat conduction on the heat transfer properties and volatile devolatilization need to be thoroughly considered to obtain a more accurate understanding of the reaction mechanism.
During the actual pyrolysis process, heat conduction occurs between particles and between particles and the wall, while convective heat transfer occurs between particles and the gas in the reactor [12,13]. In simulations of biomass particle pyrolysis, a simplified model based on the lumped-parameter method is usually used to calculate the temperature change, assuming that the heat conduction resistance of the particles is much smaller than the convective heat transfer resistance, so that the particle temperature depends only on time; this is commonly known as the 0-dimensional model [14]. The importance of intra-particle conduction can be evaluated with dimensionless numbers such as the Biot number (Bi): if Bi < 0.1, the temperature distribution within the particles can be ignored. However, in the actual reaction process, the Biot number is much higher than the thermally thin limit of about 0.1, and there is an obvious temperature gradient within large particles [15-17]. Therefore, different reaction processes occur simultaneously in different spatial zones of the particle, which cannot be modelled by the lumped method [18]. At the same time, the volatiles generated by pyrolysis leave the particle surface and generate a Stefan flow, which also affects the heat transfer efficiency between the particle surface and the surrounding gas.
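The thermally-thin criterion can be illustrated with a quick calculation. This is a minimal sketch with assumed values: a convective coefficient of 50 W/(m²·K) and a solid conductivity of 0.3 W/(m·K), a value within the 0.21-0.71 range quoted above; the characteristic length of a sphere is taken as V/A = d/6:

```python
# Biot-number check for the lumped (0-D) particle model.
# h and lambda_s are illustrative assumptions, not values from the paper.

def biot_number(h, d_p, lam_s):
    """Bi = h * Lc / lambda_s with Lc = d_p / 6 for a sphere (V/A)."""
    return h * (d_p / 6.0) / lam_s

for d_mm in (0.5, 2.0, 10.0, 20.0):
    bi = biot_number(h=50.0, d_p=d_mm * 1e-3, lam_s=0.3)
    regime = "lumped model OK (Bi < 0.1)" if bi < 0.1 else "thermally thick"
    print(f"d = {d_mm:5.1f} mm  Bi = {bi:6.3f}  -> {regime}")
```

With these (assumed) numbers, sub-millimeter particles satisfy Bi < 0.1, whereas the centimeter-scale particles studied here fall in the thermally thick regime, which is exactly why an intra-particle conduction model is needed.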
Because the 0-dimensional model is only suitable for small Biot numbers [19], one-dimensional models were developed to predict the effect of intra-particle heat conduction on heat transfer efficiency. The one-dimensional model considers only the radial temperature gradient in the particles, which is efficient and accurate for spherical particles and is a good approximation for long cylinders [20]. Li et al. [21] used drying, pyrolysis, char, and ash layers to represent the changes of wet wood, dry wood, char residue, and ash, respectively, during the drying and devolatilization of biomass particles. Considering a one-dimensional radiant heat flux, Blasi [9] studied the pyrolysis of wood particles under radiant heating conditions, and Wardach-Święcicka and Kardaś [22] studied the thermal behavior of a single solid particle pyrolyzing in a hot gas flow. Although the one-dimensional model can provide spatial temperature variations in the particles, transport processes inside the particles and the coupling of internal and external flow fields are usually not considered. Assuming that the particle shape remains unchanged, Janse et al. [23] studied the pyrolysis of particles of cylindrical, spherical, and other shapes. In fact, experimental and theoretical studies indicate that the shape and size of the particles change and that the density of the biomass, the species, and the heating method also affect the dynamics of biomass particles, including drying, heating rate, reaction rate, and pyrolysis products [24,25].
In summary, for large-diameter biomass particles, the currently established single-particle models cannot simultaneously account for internal and external heat transfer resistances, the pyrolysis characteristics of the different components within the particles, and the deformation caused by different reaction rates within the particles during the complex heterogeneous reaction process. The accuracy of biomass particle pyrolysis models therefore needs to be further improved.
In the present work, a numerical simulation model for the pyrolysis of three-dimensional single particles was built based on the open-source OpenFOAM® (Open Field Operation and Manipulation) software, taking into account intra-particle conduction and the migration of the reaction interface during pyrolysis. The effect of intra-particle conduction on the overall heat transfer was analyzed, and the effects of particle size and moisture content on heat transfer properties were investigated. The present study is of great significance for a detailed understanding of the biomass particle pyrolysis process and provides practical technical guidance.
Conservation equation in porous media region
During the calculation process, particles are composed of cellulose, hemicellulose, lignin, char, and water, with the components evenly mixed. As the particle temperature increases, water begins to evaporate. When the temperature reaches the pyrolysis temperature of each organic component, that component begins to decompose and generate volatiles. During the pyrolysis process, as the volatiles are released, char forms on the outside of the particle. The particle thus consists of the unreacted inner zone and the porous char zone on the particle surface after reaction. Water vapor generated by evaporation and volatiles generated by pyrolysis of the organic components diffuse outward through the porous char region on the particle surface and further into the space outside the particle.
As the pyrolysis reaction progresses, the water and organic components in the particles are converted into vapor and volatiles, resulting in a decrease in the mass and volume of the particles. The solid volume fraction and density follow from the component balances:

ε_s = Σ_{i ∈ [1, N_s]} ε_si, (2)

ρ_s = (Σ_{i ∈ [1, N_s]} ε_si ρ_si) / ε_s, (3)

where ε_si and χ_i denote, respectively, the volume fraction and the ratio of reacted mass to initial mass of component i. During the simulation, the change in particle size was captured through dynamic mesh technology.
The energy conservation equation for the particle region (Eq. (6)) accounts for heat conduction in the inner unreacted zone and the outer porous char region, the energy change of the gas in the porous char region, and the heat transfer between the solid particle and the gas flowing over the particle surface. In Eq. (6), the left-hand-side term represents the internal energy of the particle; on the right-hand side, the first term is heat conduction in the particle phase, the second term denotes the heat of the pyrolysis reaction, the third term is the total energy of the pyrolysis gas, the fourth term represents thermal convection, the fifth term is the effective heat diffusion flux, and the sixth term is the dissipative term.
Here q_gp is the heat transferred from the gas flowing over the particle surface; h, λ, and Q_g are the specific enthalpy (J·kg⁻¹), the thermal conductivity (W·m⁻¹·K⁻¹), and the heat source of the pyrolysis gas, respectively; and K is the permeability tensor (m²). The mass conservation of the gas phase in the porous zone introduces the generation rate of pyrolysis gas, S_g.
In the porous-media region, the gas phase obeys Darcy's law with the Klinkenberg correction,

u_g = −(K/μ)(1 + β/P)∇P,

where ε_g is the porosity, u_g is the gas velocity (m·s⁻¹), β is the Klinkenberg correction factor (Pa), and P is the pressure (Pa).
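As a numerical sketch of the Klinkenberg-corrected Darcy law, with illustrative values for the char-layer permeability, gas viscosity, pressure level, and pressure gradient (none of these numbers are taken from the paper):

```python
def darcy_velocity(K, mu, grad_p, p, beta):
    """Superficial gas velocity from Darcy's law with Klinkenberg slip:
    u = -(K / mu) * (1 + beta / p) * grad(p)."""
    return -(K / mu) * (1.0 + beta / p) * grad_p

# Assumed values: K = 1e-12 m^2 (char layer), mu = 3e-5 Pa.s (hot gas),
# grad_p = -1e4 Pa/m at p = 1e5 Pa, Klinkenberg factor beta = 5e4 Pa.
u = darcy_velocity(K=1e-12, mu=3e-5, grad_p=-1e4, p=1e5, beta=5e4)
print(f"u_g = {u:.4f} m/s")  # positive: flow down the pressure gradient
```

Note how the slip term (1 + β/P) raises the outflow velocity at low pressure; at β = 5·10⁴ Pa and P = 10⁵ Pa it adds 50% over plain Darcy flow, which is why the correction matters for gas escaping through the fine char pores.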
Conservation equation of gas phase
In the gas flow region outside the particle, the water vapor generated by evaporation, the volatiles generated by pyrolysis, and the heating gas entering at the inlet mix and then flow out of the computational domain through the outlet. When the gas flows over the particle surface, heat transfer occurs between gas and particle due to the temperature difference between the gas and the particle surface. The gas-phase conservation equations include an energy equation of the form ∂_t(ρ_g c_g T_g) + …, where μ_eff is the effective dynamic viscosity (Pa·s) and Y_j denotes the mass fraction of gas-phase component j (volatiles and N₂).
Pyrolysis equation
During the simulation, the biomass is simplified as consisting of cellulose, hemicellulose, and lignin [26]. The results of the ultimate analysis and chemical composition analysis are shown in Table 1.
The decomposition rate of each component follows an Arrhenius law of the form k_i = A_i exp(−E_i/(RT)), where E_i is the activation energy of component i during pyrolysis (J·kg⁻¹), A_i is the pre-exponential factor, and N_p is the total number of components. The detailed parameters are presented in Table 2.
Water evaporation is likewise modelled with an Arrhenius expression (Eq. (16)) [31], with pre-exponential factor A = 5.13 × 10⁶ s⁻¹ and the activation energy reported in Ref. [31].
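The component-wise Arrhenius kinetics can be sketched as a small time integration. The kinetic constants below are illustrative literature-style values, not those of Table 2; the point is only to reproduce the qualitative ordering reported later in the paper (hemicellulose decomposes first, then cellulose, then lignin):

```python
import math

# First-order Arrhenius devolatilization: dm_i/dt = -A_i exp(-E_i/(R T)) m_i.
# A and E values are illustrative assumptions, NOT the paper's Table 2 data.
R = 8.314  # J/(mol K)
components = {               # name: (A [1/s], E [J/mol], initial mass frac.)
    "hemicellulose": (2.5e13, 1.47e5, 0.30),
    "cellulose":     (9.9e15, 2.00e5, 0.45),
    "lignin":        (7.7e5,  1.11e5, 0.25),
}

def pyrolyze(T0=300.0, heat_rate=5.0, dt=0.1, t_end=200.0):
    """Explicit-Euler integration at a constant heating rate (K/s)."""
    m = {k: v[2] for k, v in components.items()}
    t = 0.0
    while t < t_end:
        T = T0 + heat_rate * t
        for k, (A, E, _) in components.items():
            m[k] *= max(0.0, 1.0 - dt * A * math.exp(-E / (R * T)))
        t += dt
    return m

residual = pyrolyze(t_end=60.0)  # heated to ~600 K
for k, v in residual.items():
    print(f"{k:13s} residual fraction of initial: {v / components[k][2]:.3f}")
```

At ~600 K the hemicellulose is essentially gone while the lignin is nearly intact, matching the decomposition windows (500-625 K, 550-675 K, 600-875 K) discussed in the results section.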
Computational model
The computational model is shown in Fig. 1. The biomass particle is initially spherical, and the heating gas flows over the particle from left to right. Heat is transferred from the heating gas to the particle by convection, the particle temperature gradually increases, and then the pyrolysis reaction takes place and the particle diameter gradually decreases. After the volatile components have been released, the remaining solid phase forms char.
The main parameters used in the simulation are shown in Table 3. First, a preliminary study was performed to choose an appropriate grid resolution. Fig. 2 shows the center-point temperature of a particle at different grid resolutions. The numerical results for resolutions of 13,000 and 43,875 cells are very similar: both show the particle center-point temperature gradually increasing with time. For grid sizes of 1625 and 6656 cells, however, the predicted temperature still varies with resolution. Therefore, a grid resolution of 13,000 cells was chosen for the present simulations.
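The grid-independence criterion implied here (accept the coarsest grid whose result barely changes under further refinement) can be sketched as follows; the sample temperatures are made up for illustration and are not the values of Fig. 2:

```python
# Grid-independence check in the spirit of Fig. 2: accept a grid when the
# quantity of interest changes by less than `tol` (relative) between
# successive refinements. Sample center temperatures are fabricated.
def grid_independent(results, tol=0.01):
    """results: list of (n_cells, T_center) ordered by refinement.
    Returns the first resolution whose result is within tol of the
    next finer grid; falls back to the finest grid otherwise."""
    for (n_a, t_a), (n_b, t_b) in zip(results, results[1:]):
        if abs(t_a - t_b) / abs(t_b) < tol:
            return n_a
    return results[-1][0]

sample = [(1625, 612.0), (6656, 641.0), (13000, 655.0), (43875, 656.5)]
print(grid_independent(sample))  # -> 13000
```

With these sample values the 13,000-cell grid is the first whose result agrees with the next refinement to within 1%, mirroring the choice made in the paper.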
Model validation
Figs. 3 and 4 show the time variation of the simulated weight-loss rate of the particles and their core temperature. Following Ref. [32], the diameter of the biomass particles is 20 mm, the pyrolysis atmosphere is nitrogen, and the heating temperature is 773 K. The pyrolysis process can be divided into a heating stage before pyrolysis, a fast pyrolysis stage, and another heating stage after pyrolysis. At about 50 s, the temperature reaches 500 K and pyrolysis begins. The core temperature of the particles reaches 560 K at 180 s, by which time most of the volatile components have been released. At 240 s, the core temperature is 610 K and the pyrolysis process is essentially complete. Without intra-particle conduction, pyrolysis starts with a certain delay. Fig. 4 also shows the temperature-rise rate at the particle center. In the heating stage before pyrolysis, the temperature-rise rate at the center is higher without intra-particle conduction, because the temperature then increases synchronously throughout the particle. When pyrolysis begins, part of the heat absorbed by the particle is consumed by the pyrolysis reaction. Without intra-particle conduction, the particle center reaches the pyrolysis temperature earlier, resulting in a temperature-rise rate lower than that with intra-particle conduction. The simulation results agree better with the experimental measurements when heat conduction within the particles is taken into account.
Table 1
Ultimate analysis and composition of biomass [27].
The pyrolysis rate can be calculated from the weight-loss curve.
Heat transfer and reaction characteristics of particles during pyrolysis
Fig. 5 illustrates the radial temperature variation inside the particle along the gas flow direction during pyrolysis. There is a clear temperature gradient inside the particle, and the temperature first increases on the side facing the incoming gas. The dotted line in Fig. 5 represents the initial geometry of the particle. The region facing the incoming flow pyrolyzes first, resulting in rapid shrinkage of the particle volume. Because the temperature-rise and pyrolysis rates differ at different positions, the particle shape also changes; at the end of the reaction, the particle returns to an approximately spherical shape. Fig. 6 shows the variation of temperature at five different locations inside the particle along the diameter. Throughout the process, a large temperature difference can be observed inside the particle. Among the different points, the fastest temperature rise occurs at point (a) facing the incoming flow, followed by the adjacent point (b). The temperature rise is slowest near point (d) on the leeward side of the particle. Initially, the temperature at the central point (c) is lower than at point (e) on the leeward side; however, due to the strong convective heat transfer at point (e), the temperature there rises faster and after a while exceeds that at point (c). The maximum temperature difference inside the particle is 146.7 K at 30 s, 122.7 K at 60 s, and 125.0 K at 90 s. The internal temperature distribution tends toward uniformity at the end of the reaction. Fig.
7 illustrates the relationship between the remaining mass and the original mass of the biomass particles, as well as the ratio of the remaining mass of each component to its original mass at particular points in the pyrolysis process. The pyrolysis rate of hemicellulose is the fastest, occurring mainly in the temperature range of 500-625 K. This is followed by
cellulose, with a main pyrolysis range of 550-675 K, and lignin, with the slowest pyrolysis process, occurring mainly in the range of 600-875 K, consistent with Park's research results [10]. Combining Figs. 6 and 7, it can be seen that at temperatures between 500 and 600 K, the particle temperature increases slowly while the particle mass decreases rapidly.
Table 2
Kinetic equations of pyrolysis [28-30].
Figs. 8 and 9 show the variations of particle surface temperature, average particle temperature, particle mass, and particle size with time. The particle surface temperature refers to the average temperature over the entire surface. The heat absorbed by the particles gradually increases their temperature and triggers the decomposition of the biomass. Compared with the results without intra-particle heat conduction, the average particle temperature is always lower during pyrolysis, while the surface temperature is higher at the beginning; the heating rate then decreases due to the pyrolysis reaction. When heat conduction inside the particles is considered, the absorbed heat is gradually transferred from the surface to the interior: the particle surface first reaches the pyrolysis temperature and starts to pyrolyze, and the mass gradually decreases. When intra-particle heat conduction is not considered, pyrolysis of the whole particle starts quickly once the particle temperature reaches the pyrolysis temperature, and the pyrolysis time is shorter. During pyrolysis, the heat absorbed by the particles is provided by convective heat transfer from the gas phase and is used for heating the particles and for pyrolysis; at the same time, some heat is removed from the particles by the outflow of volatiles (Stefan flow) [33]. The energy balance equation is as
follows: the convective heat input H·A·(T∞ − T_s) balances the particle heating, the pyrolysis heat demand, and the enthalpy carried away by the volatiles, where H is the average convective heat transfer coefficient between the fluid and the particle surface (W·m⁻²·K⁻¹), T∞ is the fluid temperature (K), T_s is the average particle surface temperature (K), A is the particle surface area (m²), and ṁ_s is the biomass reaction rate (kg/s). The Nusselt number is defined as Nu = H·d_p/λ_g (Eq. (18)), with d_p the particle diameter and λ_g the gas thermal conductivity. Fig. 10 shows the change in Nusselt number with time according to Eq. (18). The Nusselt number calculated by heating the particles only, without pyrolysis, is essentially the same as that calculated with the empirical formula. Fig. 10 also shows the heat absorption due to particle heating, moisture evaporation, and pyrolysis of the three organic components. At the beginning of the pyrolysis reaction, the Nusselt number increases because the heat absorbed by the particles is mainly used for the evaporation of water and the pyrolysis of cellulose and hemicellulose, which delays the increase in particle temperature. When the pyrolysis of cellulose and hemicellulose is almost complete, the absorbed heat is again used mainly for the temperature rise, and the Nusselt number decreases. At about 90 s, lignin begins to decompose; as its pyrolysis rate increases, the Nusselt number increases again. At the end of the pyrolysis reaction, the Nusselt number decreases once more and is slightly lower than the value from the empirical formula.
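A minimal sketch of the Nusselt-number evaluation: the coefficient H is backed out from the convective heat flow, and the no-reaction baseline is represented here by the Ranz-Marshall sphere correlation, which is used only as a stand-in for the paper's unspecified empirical formula. All input numbers are illustrative:

```python
import math

# Surface-averaged heat transfer coefficient and Nusselt number,
# following the definitions around Eq. (18). Inputs are illustrative.
def nusselt(q_conv, T_inf, T_s, d_p, lam_g):
    """Nu = H * d_p / lam_g with H = q_conv / (A * (T_inf - T_s))."""
    A = math.pi * d_p ** 2           # surface area of a sphere
    H = q_conv / (A * (T_inf - T_s))
    return H * d_p / lam_g

# Ranz-Marshall correlation for a sphere, taken here as the
# no-reaction baseline (an assumption, not the paper's formula):
def ranz_marshall(Re, Pr):
    return 2.0 + 0.6 * math.sqrt(Re) * Pr ** (1.0 / 3.0)

print(f"baseline Nu = {ranz_marshall(Re=100.0, Pr=0.7):.2f}")
```

Because the apparent Nu is computed from the total convective heat flow, any sink that absorbs heat at roughly constant surface temperature (moisture evaporation, hemicellulose/cellulose pyrolysis, later lignin pyrolysis) inflates it above the baseline, which is the mechanism behind the two peaks discussed above.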
Influence of moisture content
Figs. 11 and 12 show the effects of moisture content on particle temperature and residual mass, respectively.In the calculation, the mass of organic matter in the particles and the diameter of the particles are kept unchanged, and the total density of the particles increases with increasing moisture content.Since heat is required to evaporate the moisture, the higher the moisture content, the slower the temperature of the particles increases.In addition, the effect of moisture content on the heating rate is more pronounced at the center (c) and leeward (e) sides of the particles than at the windward (a).A higher moisture content results in a longer time to complete the pyrolysis process.The time required to complete pyrolysis at a moisture content of 15 % is about 50 s higher than in the case without water.Fig. 13 shows the effects of moisture content on Nusselt number.As the moisture content increases, the temperature rise of the particles decreases and the first peak of the Nusselt number increases.Increasing the moisture content increases the reaction time of pyrolysis.When pyrolysis of lignin starts, the average total temperature of the particles is higher due to the longer reaction time, and the temperature rise is relatively small, so the difference between the two peaks of the Nusselt number decreases.
Influence of particle size
Figs. 14 and 15 illustrate the effects of particle size on temperature and residual mass, respectively. As the particle size increases, the time required to heat the particles and complete pyrolysis increases, and the temperature difference between the center of the particles and their surface grows. At 970 K, the time required for the temperature rise of particles with a diameter of 20 mm is about 15 times that of particles with a diameter of 2 mm, and the corresponding pyrolysis time is about 20 times longer. The larger the particle, the greater the difference between the temperature at its center and at its surface. This disparity arises because larger particles must overcome greater thermal resistance during heat conduction, which slows the temperature rise at the particle center, whereas the reduced specific surface area leads to a more rapid rise in surface temperature. These factors are crucial for regulating and optimizing pyrolysis processes involving large biomass particles. In addition to heat transfer, another reason for the decrease in pyrolysis rate with increasing particle size is that volatile products must diffuse from inside the particle to its surface before being released into the surrounding environment. For larger particles, this diffusion distance increases, resulting in greater resistance and a slower decline in the residual mass fraction.
Fig. 16 shows the effect of particle size on the Nusselt number. The variation of the Nusselt number shows a similar trend for the different particle sizes during the pyrolysis process. As the particle size increases, the difference between the two peak values decreases and the time interval between them increases. When the particle diameter is 2 mm, the two peak values of the Nusselt number are 5.7 and 13.2, respectively, with a time interval of about 4 s. When the particle diameter is 10 mm, the two peak values are 8.2 and 10.5, respectively, with a time interval of about 15 s. When the particle diameter is 20 mm, the two peak values are very close and the time interval is about 30 s. During the initial stage of pyrolysis, surface reactions dominate, giving higher reaction rates on larger biomass particles; the reaction products diffuse out quickly, producing a pronounced peak in the Nusselt number. As the reaction progresses into its later stages, larger particle sizes mean smaller specific surface areas and slower heat and mass transfer rates, resulting in a lower peak Nusselt number.
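The particle-size effect above is, in essence, a Biot-number argument: when Bi = H(d/2)/k_solid exceeds roughly 0.1, internal conduction resistance is significant and the particle center lags the surface, so a lumped (uniform-temperature) model is inadequate. The sketch below uses assumed, illustrative property values, not coefficients from this study:

```python
def biot_number(h, d_m, k_solid):
    """Bi = H * (d/2) / k_solid; Bi >> 0.1 implies internal gradients matter."""
    return h * (d_m / 2.0) / k_solid

H_ASSUMED = 30.0  # W m^-2 K^-1, assumed convective coefficient
K_WOOD = 0.15     # W m^-1 K^-1, typical dry-biomass thermal conductivity

for d_mm in (2, 10, 20):
    bi = biot_number(H_ASSUMED, d_mm / 1000.0, K_WOOD)
    regime = "internal gradients significant" if bi > 0.1 else "near-uniform"
    print(f"d = {d_mm:2d} mm: Bi = {bi:.2f} ({regime})")
```

Even for these rough values, Bi scales linearly with diameter, which is why the center-to-surface temperature difference grows with particle size in Figs. 14-16.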
Influence of gas velocity
Figs. 17 and 18 show the influence of the gas velocity on the temperature distribution of the particles and the residual mass fraction, respectively. The higher the gas velocity, the faster the temperature rises at the different positions in the particles, the earlier pyrolysis begins, and the shorter the time required to complete it. At the beginning, a higher gas velocity produces a larger temperature difference between point a on the particle surface and point c at the particle center: after 10 s, the difference between points a and c is 173 K at a gas velocity of 0.2 m/s and 218 K at 0.8 m/s. As time passes, the surface of the particles gradually begins to pyrolyze and the temperature difference between points a and c decreases; the higher the gas velocity, the faster this difference drops, because a higher gas velocity gives a higher particle surface temperature and a more intense pyrolysis reaction. At 45 s, the temperature difference between points a and c is 120 K at a gas velocity of 0.2 m/s and 130 K at 0.8 m/s. In the later stage of the pyrolysis reaction, decomposition of the particle surface is complete, decomposition of the particle interior gradually begins, and the temperature difference between the surface and the interior increases again: the difference between points a and c is 135 K at a gas velocity of 0.2 m/s at 85 s, and 153 K at 0.8 m/s at 65 s. At the end of the reaction, the temperature difference between the surface and the interior of the particles gradually decreases, to 25 K at a gas velocity of 0.2 m/s and 7 K at 0.8 m/s. At the four velocities, the times for the residual mass of the particles to reach 20 % are 108 s, 91 s, 85 s and 82 s, respectively.
Fig. 19 shows the effect of gas velocity on Nu. Increasing the gas velocity improves the heat transfer between the gas and the particle surface, and Nu increases. Although the gas velocities differ, the particle temperature corresponding to the maximum value of Nu is relatively low: the particle temperature at the first peak of Nu is 670-680 K, and at the second peak 820-840 K. As the gas velocity increases, the peaks occur later, the difference between the two peak values increases, and the time between them decreases slightly. The heat transfer enhancement from increasing the gas velocity diminishes at higher velocities; Nu is very close at gas velocities of 0.6 m/s and 0.8 m/s. This is because the high gas velocity creates a vortex at the rear of the particle, which reduces the contact between the particle and the high-temperature gas and thus limits the heat transfer efficiency.
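The study compares its computed Nu against an unspecified empirical formula. As a hedged illustration only, the classical Ranz-Marshall correlation for a sphere in crossflow reproduces the qualitative trend described above: Nu grows roughly with the square root of velocity, so going from 0.6 to 0.8 m/s adds comparatively little. The gas kinematic viscosity and Prandtl number below are assumed order-of-magnitude values for hot gas, not parameters from the study.

```python
def ranz_marshall(u, d, nu_gas=1.0e-4, pr=0.7):
    """Nu = 2 + 0.6 * Re^0.5 * Pr^(1/3) for a sphere in crossflow."""
    re = u * d / nu_gas  # particle Reynolds number
    return 2.0 + 0.6 * re ** 0.5 * pr ** (1.0 / 3.0)

d = 0.010  # 10 mm particle
for u in (0.2, 0.4, 0.6, 0.8):
    print(f"u = {u:.1f} m/s -> Nu = {ranz_marshall(u, d):.2f}")
```

Note that correlations of this family assume a non-reacting sphere; the reaction-induced peaks in Figs. 10, 13, 16 and 19 are deviations from such a baseline.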
Conclusions
In this work, a numerical model for the pyrolysis of a single biomass particle was established, taking into account internal heat conduction in large particles during pyrolysis. On this basis, the internal temperature distribution of the particle and the heat transfer characteristics during the pyrolysis process were determined.
(1) The temperature distribution and composition differed at different locations inside the particle. The maximum temperature difference inside the particle was about 146.7 K when the particle diameter was 10 mm.
(2) The temperature increased rapidly along the flow direction, leading to an earlier onset of pyrolysis and a change in particle shape. During the pyrolysis process, the Nusselt number showed two peaks.
(3) Increasing the moisture content prolonged the pyrolysis reaction time. The pyrolysis time of particles with a moisture content of 15 % was about 1.5 times longer than that of dry particles when the particle diameter was 10 mm.
(4) As the particle size increased, the temperature difference between the surface and the center of the particles increased, resulting in a longer pyrolysis time. The difference between the two Nusselt-number peaks decreased and the time interval between the two peaks increased.
(5) Increasing the gas velocity improved the heat transfer between the gas and the particle surface, but the improvement was limited when the gas velocity was too high.
Fig. 2. Central point temperature of the particles at different grids.
Fig. 5. Temperature distribution at a cross-section inside the particle.
Fig. 6. Variations of temperature at different positions inside the particle.
Fig. 12. Influence of moisture content on particle residual mass fraction.
Fig. 13. Influence of moisture content on the Nusselt number.
Fig. 19. Influence of the gas velocity on Nu.
Complementary and Integrative Treatments: Rhinosinusitis
Rhinosinusitis is characterized by inflammation of the mucosa involving the paranasal sinuses and the nasal cavity and is one of the most common health care problems, with significant impairment of quality of life. There is a growing amount of interest in the use of complementary and integrative medicine for the treatment of rhinosinusitis. This article focuses on an integrative approach to rhinosinusitis.
The paranasal sinuses are comprised of 4 paired sinuses: maxillary, ethmoid, sphenoid, and frontal. The roof of the ethmoid sinus is the fovea ethmoidalis, which forms the floor of the anterior cranial cavity and slopes upward at an angle from the midline to extend 2 to 3 mm above the cribriform plate. The lateral wall of the ethmoid is the lamina papyracea, which is also the medial wall of the orbit. The ethmoid sinuses are comprised of numerous small air cells, which develop from evaginations of the lateral nasal wall in the embryo. There are an average of 9 ethmoid air cells present, although the number varies widely.
The ethmoid sinus can be divided into anterior and posterior groups of air cells. The anterior group of ethmoid cells includes the frontal recess, bullar, and infundibular cells. The infundibulum is the site of drainage for the frontal sinus and anterior ethmoid cells and is located lateral to the middle turbinate and anterior to the bulla. The bullar cells drain into the middle meatus via the hiatus semilunaris, a large cleft in the lateral nasal wall. The uncinate process, which forms the anterior border of the hiatus semilunaris, is a ridge of bone extending from the ascending process of the maxilla. The remaining anterior ethmoid cells drain into the middle meatus, whereas the posterior cells drain into the sphenoethmoidal recess. The vascular supply for the ethmoid sinuses is from the anterior and posterior ethmoidal arteries, and innervation of the sinuses comes from the ophthalmic division of the fifth cranial nerve.
The maxillary sinuses are roughly triangular in shape, with boundaries of the orbital floor superiorly, the lateral nasal wall medially, and the bony lateral wall. The sinus drains into the natural ostium on the superior medial wall, which flows into the hiatus semilunaris. There may also be accessory maxillary sinus ostia in the medial sinus wall.
The frontal sinus originates from the frontal recess cells of the anterior ethmoid and at birth is often indistinguishable from these cells. The frontal sinus is usually well formed by 12 years of age but does not reach adult size until 18 to 20 years of age. The anterior table is twice as thick as the posterior table when measured in the midsagittal plane inferiorly. An intersinus septum separates the two sinuses. The sinuses drain through a nasofrontal recess into the hiatus semilunaris beneath the middle turbinate. In most adults, this recess is a mucous membrane-lined bony canal measuring 3 mm or greater.
The ostiomeatal complex includes the middle turbinate, uncinate process, middle meatus, hiatus semilunaris, and infundibulum. The drainage pathways for the frontal, anterior ethmoid, and maxillary sinuses all flow through the ostiomeatal complex. Obstruction of this relatively narrow path from polyps or other mass lesions, inflammatory edema, or purulence will result in postobstructive sinusitis involving one or more of the aforementioned sinuses.
The sphenoid sinuses originate as evaginations from the sphenoethmoidal recess. They are present at birth but do not begin to pneumatize until about 3 years of age. The development of the sinuses may continue through adulthood, and the size may vary greatly because of differences in the degree of development. The midline is often an irregularly shaped intersinus septum. Drainage is into the sphenoethmoidal recess medial to the superior turbinate. The superior boundary is the sella and pituitary fossa, whereas the lateral walls contain the optic nerve, carotid artery, and cavernous sinus. There may be bony dehiscence of the lateral wall over these structures.
Histology
The paranasal sinuses are lined by respiratory epithelium, which consists of pseudostratified ciliated columnar epithelium with goblet cells. Numerous mucous and serosanguinous glands are present. In addition to mucus, the sinus glands also secrete immunoglobulins, interferons, and lysozyme. The anterior portion of the nares and nasal septum is covered by skin with adnexa. The roof of the nasal cavity contains specialized olfactory epithelium with bipolar olfactory neurons.
Physiology
The sinus epithelium forms a mucociliary system, which supplies the nose with a mucous covering to warm and humidify inspired air. Both parasympathetic and sympathetic nerves supply this mucous blanket, which is renewed every 10 to 15 minutes. 36,37 The cilia beat 10 to 15 times per second and move the mucous blanket toward the natural ostia of the sinuses. Environmental factors influence ciliary function; humidity increases the activity, whereas dehydration and cold temperatures decrease flow. 38 Bacterial and viral proliferation may increase when there is dysfunction of the cilia and relative stasis of the mucous blanket. In addition to mucociliary dysfunction, any condition that obstructs the drainage of the sinuses (eg, polyps, inflammation, or edema of the nasal mucosa) will lead to sinusitis. Benign and malignant tumors of the nasal cavity, paranasal sinuses, and skull base can also lead to a postobstructive sinusitis of one or more of the paranasal sinuses.
Pathophysiology
A variety of host and environmental factors play a role in the development of RS. Host factors can be divided into general (genetic factors and immune deficiency), local (anatomic abnormalities, mucosal and bone inflammation), and environmental factors (air pollution, smoke, allergens, viruses, bacteria, and fungi). 39 The pathophysiology leading to RS of the maxillary and frontal sinuses usually involves a constellation of changes that lead to the obstruction of the ostiomeatal complex, including mucosal swelling and inflammation, mucous stasis, impaired mucociliary function, and microbial infection.
SYMPTOMS
The signs and symptoms of RS can differ depending upon contributing factors and the overall duration. Acute RS often presents with purulent nasal discharge, nasal obstruction, and facial pain or pressure. Additional symptoms can include hyposmia/anosmia, headache, fever, cough, aural fullness, halitosis, fatigue, and dental pain. 32 Because purulent nasal discharge cannot be used as a sole factor to distinguish between viral and bacterial infection, the illness pattern and duration should be used instead, with viral RS usually lasting less than 10 days and acute bacterial RS being more persistent. 2,48 Chronic RS exists if these symptoms continue for greater than 12 weeks.
MEDICAL TREATMENT APPROACHES
The goals of treatment are to improve drainage, remove obstruction, promote mucociliary function, eradicate infection, reduce inflammation, and prevent complications.
Taw et al
Medical therapies for RS can include any of the following: intranasal or systemic steroids, topical or oral antibiotics, nasal saline irrigation, topical or systemic decongestants, antihistamines, leukotriene antagonists, mucolytics, expectorants, immunotherapy, and analgesics. If these conventional therapies are not effective and symptoms become refractory, other medical options that have been used include antifungals, proton-pump inhibitors, bacterial lysates, immunomodulators, and immunostimulants. 47,[49][50][51][52] Long-term, low-dose macrolide therapy may also have a role in the treatment of chronic RS, given its demonstrated antiinflammatory effects. [53][54][55]
SURGICAL TREATMENT APPROACHES
Endoscopic sinus surgery is indicated for 2 reasons: (1) failed medical treatment or (2) potential or actual complications, such as the development of a mucocele, mucopyocele, orbital abscess, invasive fungal sinusitis, anatomic obstruction caused by polyps or mass lesion, or suspicion of malignancy. Substantial evidence exists that supports surgical intervention in reducing symptoms and improving quality of life in patients with RS. 56
Pelargonium sidoides EPs 7630
In South Africa, Pelargonium sidoides (P sidoides) has historically been used to treat a variety of ailments, including upper respiratory tract infections like bronchitis and tuberculosis. 57 P sidoides, traditionally known as Umckaloabo, is rich in phenols and flavonoids, consisting of coumarins, tannins, diterpenes, and proanthocyanidins. [58][59][60] It has been standardized in Germany as an aqueous ethanolic extract of its root known as EPs 7630.
EPs 7630 has been shown to have significant antibacterial activity against multiresistant Staphylococcus aureus and antiviral effects against seasonal influenza A virus strains (H1N1, H3N2), respiratory syncytial virus, human coronavirus, parainfluenza virus, and Coxsackie virus. 58,61 Through its immunomodulatory effects, EPs 7630 has been demonstrated to specifically enhance human peripheral blood phagocyte activity as well as to have antiadhesive effects through interaction with bacterial surface binding factors. [62][63][64] A double-blind, randomized, multicenter trial conducted by Bachert and colleagues 65 enrolled 103 patients with radiographically and clinically confirmed acute RS and compared EPs 7630 (1:8-10; extraction solvent: ethanol 11% at a dosage of 60 drops 3 times daily for up to 22 days) with placebo. EPs 7630 was found to have superior efficacy and tolerance, based on changes in sinusitis severity scores. A Cochrane review concluded that P sidoides may be effective in alleviating symptoms, including headaches and nasal discharge, for acute RS and the common cold in adults. 66
Bromelain
Bromelain, a mixture of proteolytic enzymes extracted from pineapples (Ananas comosus), has demonstrated antiinflammatory, antiedematous, antithrombotic, and fibrinolytic effects. 67 Three double-blind, randomized controlled trials were conducted in the 1960s on patients with acute and chronic RS, using similar protocols of 2 parallel treatment arms comparing bromelain with placebo, with each group also receiving conventional management consisting of antibiotics, decongestants, antihistamines, and analgesics. 68-70 A meta-analysis performed by Guo and colleagues 71 showed a small but statistically significant difference in favor of adjunctive treatment with bromelain for nasal mucosal inflammation, nasal discomfort, breathing difficulty, and overall rating, but not for nasal discharge.
A recent multicenter trial enrolling children less than 11 years of age with acute sinusitis had 3 treatment groups (bromelain vs bromelain plus standard therapy vs standard therapy) and showed a statistically significantly faster recovery time with bromelain monotherapy compared with the other treatment groups. 72 Only one mild, self-limiting allergic reaction was noted. The 1993 German Commission E monograph concluded that bromelain may be effective for "acute postoperative and post-traumatic swelling, especially of the nose and paranasal sinuses." 73 Caution must be used when prescribing bromelain for patients already on anticoagulants because of the increased risk of bleeding, as well as when prescribing various antibiotics, such as penicillin and tetracycline, because bromelain is also known to promote their absorption. 67 Moreover, bromelain strongly inhibits human cytochrome P450 2C9 (CYP2C9) activity and can thereby affect the metabolism of its substrates. 74 Recommended dosages range from 500 to 2000 mg/d. 75
Cineole
Cineole, or more specifically 1,8-cineole, is a monoterpene present in many plant-based essential oils and is commonly derived from Eucalyptus globulus; 1,8-cineole is also one of the main chemical ingredients identified in the Chinese herb Flos magnoliae. 76 It has been shown to enhance mucociliary clearance; to block inflammation by inhibiting the formation of cytokines such as tumor necrosis factor (TNF)-alpha and interleukin-1beta; and to have antinociceptive properties, perhaps through a mechanism involving a nonopioid receptor. [77][78][79] A prospective, randomized, double-blind study comparing cineole (200 mg 3 times per day) with placebo in 152 patients with acute nonpurulent RS showed a statistically significant difference in symptom sum scores in the cineole group, in addition to a reduction in secondary symptoms such as headache on bending, frontal headache, nasal obstruction, and nasal secretion.
80 Mild side effects, including heartburn and exanthema, were noted with cineole. The investigators concluded that cineole may serve as an integrative therapy during the first 4 days of acute RS, but antibiotics should be initiated if symptoms persist. In addition, another prospective, randomized, double-blind study demonstrated that cineole was more effective than an herbal preparation with 5 different components in the treatment of acute viral RS. 81
Cod liver oil
Cod liver oil, which is rich in omega-3 fatty acids and vitamin D, was historically used as a remedy for rickets in the 1800s. 82 There is limited evidence for the use of cod liver oil in RS; one 4-month, open-label study enrolled 4 children with recurrent chronic RS who were given escalating doses of cod liver oil and a multivitamin with selenium. 83,84 Three patients demonstrated a positive response, with decreased sinus symptoms, fewer episodes of acute sinusitis, and fewer physician visits. The investigators concluded that cod liver oil in combination with a multivitamin containing selenium was an inexpensive, noninvasive adjunctive intervention that can be used for selected patients.
Manuka honey
Manuka honey is produced from the nectar of flowers native to Australia and New Zealand, particularly from the species of Leptospermum, and has potent antibacterial activity attributed to its high concentration of methylglyoxal, hyperosmolarity, hydrogen peroxide, and low pH. 85,86 It was found to have bactericidal activity against biofilms formed by Pseudomonas aeruginosa and Staphylococcus aureus, with significantly higher effects than commonly used antibiotics and may have implications for treating chronic RS. 87,88 Thamboo and colleagues 89 studied the use of manuka honey in patients with allergic fungal RS. Thirty-four patients were treated with a topical combination of manuka honey and saline in one nostril daily for 30 days. Culture results from their ethmoid cavities were unchanged, as was their endoscopic staging. However, there was reported symptomatic improvement using the Sino-Nasal Outcome Test (SNOT)-20 as an outcome measure.
Herbal Supplements (Combination)
Sinupret
Sinupret (comprised of Gentiana radix, Primula flos, Rumex herba, Sambucus flos, and Verbena herba) is an herbal formula used widely in Germany for the treatment of respiratory infections. Approved by the German Commission E in 1994 for the treatment of acute and chronic inflammation of the paranasal sinuses, Sinupret is available as a coated tablet of 6 mg of Gentiana radix and 18 mg each of Primula flos, Rumex herba, Sambucus flos, and Verbena herba, or as a water and alcohol extract in a proportion of 1:3:3:3:3. 73 Sinupret has been shown to have antiviral activity in vitro against certain subtypes of viruses known to cause respiratory infections, including adenovirus, human rhinovirus, and respiratory syncytial virus, and to strongly stimulate transepithelial Cl(-) secretion, maintaining normal mucociliary clearance in sinonasal epithelium through hydration of the airway surface liquid. 90,91 Four randomized controlled trials (RCTs) evaluated Sinupret (either 2 tablets or 30 drops of liquid formula 3 times per day) as adjunctive therapy for acute RS (3 RCTs) and chronic RS (1 RCT) (Berghorn, Langer W, März RW, Bionorica GmbH, unpublished data, 1991). [92][93][94] A systematic review demonstrated that Sinupret may be effective as an adjunctive therapy in acute RS. 71 However, one study found no significant difference in olfactory function between patients treated with Sinupret versus placebo, although an initial therapy of oral prednisolone for 7 days had preceded the treatment intervention. 95
Esberitox
Esberitox is an herbal extract containing Thuja occidentalis (white cedar), Echinacea purpurea and pallida (purple coneflower), and Baptisia tinctoria (wild indigo) with demonstrated immunomodulatory properties. 96 A randomized, double-blind, placebo-controlled study showed a dose-dependent efficacy in the treatment of upper respiratory infections and, in particular, certain symptoms like rhinorrhea.
97 Another study that enrolled 90 patients with acute RS compared (1) Esberitox (3 tablets 3 times per day) and doxycycline, (2) Sinupret (5 tablets twice per day) and doxycycline, and (3) doxycycline alone and found that both groups with combination therapies had a significantly higher rate of response. 71,94 Reported adverse events included photosensitivity and gastrointestinal symptoms, such as nausea.
Myrtol
Myrtol is a standardized phytotherapeutic extract (Gelomyrtol/Gelomyrtol Forte) taken from Pinus spp, Citrus aurantifolia, and Eucalyptus globulus; its main constituents are monoterpenes, chiefly limonene, 1,8-cineole, and alpha-pinene. In a randomized, double-blind, multicenter trial, 330 patients with acute sinusitis were enrolled into one of 3 arms: (1) Myrtol extract (300 mg/d), (2) another, unidentified essential oil, or (3) placebo. 99 The Myrtol and essential oil groups both demonstrated superior efficacy to placebo based on the total symptom score of 7 items (headache, nasal secretion, nasal obstruction, pain on pressure, pain on bending over, general well-being, and fever), although there were insufficient statistical data to support this conclusion. 71 Mild to moderate adverse events, mostly gastrointestinal in nature, were reported.
Nasturtium and horseradish root
Nasturtium (Tropaeoli majoris herba) and horseradish root (Armoraciae rusticanae radix) have broad antibacterial activity against several gram-positive and gram-negative organisms, including Haemophilus influenzae, Moraxella catarrhalis, Pseudomonas aeruginosa, Staphylococcus aureus, and Streptococcus pyogenes. 100 A prospective, multicenter, cohort study performed in children between 4 and 18 years of age with acute RS found that an herbal drug preparation containing nasturtium and horseradish root had similar efficacy and fewer adverse events compared with standard antibiotics. 101
Nutrition: Ginger, Quercetin, and Epigallocatechin Gallate
Dietary polyphenols are widely available in food and well known for their antiinflammatory effects. Both ginger and quercetin, a polyphenolic bioflavonoid commonly found in apples and onions, have potent antioxidant and antiinflammatory properties. 102,103 Mechanisms of action that have been elucidated for quercetin include suppression of the inflammatory mediator cyclooxygenase-2, inhibition of histamine release through downregulation of mast cell activity, and enhanced mucociliary clearance through augmented transepithelial chloride secretion via the cystic fibrosis transmembrane conductance regulator anion channel. [104][105][106] A combination of ginger extract and green tea (Camellia sinensis), which is rich in epigallocatechin gallate (EGCG), showed significant antiallergy effects through the suppression of certain cytokines, such as TNF-alpha and MIP-1alpha (macrophage inflammatory protein). 107 The dietary polyphenols [6]-gingerol, quercetin, and EGCG were found to effectively inhibit excess mucus secretion of respiratory epithelial cells while maintaining normal nasal ciliary movement. 108
Homeopathy
Homeopathy, initially developed by German physician Samuel Christian Hahnemann at the end of the eighteenth century, is based on the principle of similars (like cures like), whereby therapeutic effects are achieved by stimulating the body's homeostatic healing response via substances that have been serially diluted and shaken. There is evidence from RCTs that homeopathy may be effective for the treatment of influenza and allergies. 109 In a recent prospective observational trial from Germany, 134 adult patients with treatment-refractory chronic sinusitis were tried on different homeopathic remedies. Over the course of 8 years, the investigators found sustained improvements in quality-of-life outcomes (36-Item Short Form Health Survey) and decreased use of conventional medications, with the greatest change noted during the first 3 months of follow-up.
110
Sinfrontal
Sinfrontal is a homeopathic remedy (containing Cinnabaris D4, Ferrum phosphoricum D3, Mercurius solubilis D6) that is commonly used in Germany for a variety of upper respiratory tract infections and has shown promise as a treatment for RS without the need for antibiotics. A prospective, randomized, double-blind, placebo-controlled, multicenter clinical trial comparing Sinfrontal with placebo in 113 patients with radiography-confirmed acute maxillary sinusitis found a significant benefit in patients treated with Sinfrontal, with no recurrence of symptoms 8 weeks after treatment. 111 Patients receiving Sinfrontal were instructed to take 1 tablet every hour until improvement was noted, with a maximum of 12 tablets per day, after which the dosing changed to 2 tablets 3 times per day. An economic analysis demonstrated that Sinfrontal can lead to substantial cost savings with markedly reduced absenteeism from work. 112
Traditional Chinese Medicine
Traditional Chinese medicine (TCM) is a whole medical system that has been used for several millennia. The therapeutics used in TCM, such as Chinese herbal medicine and acupuncture, have grown in popularity with a parallel increase in scientific understanding and elucidation of mechanisms. 113 Specifically, the use of TCM for the treatment of disorders involving the ears, nose, and throat can be traced back as early as the fifth century BC, with several therapies that may be beneficial for RS. 114
Acupuncture
The therapeutic effects of acupuncture primarily stem from reestablishing homeostasis of multiple physiologic cascades, whether through modulation of the immune system, inflammatory response, autonomic nervous system, neuroendocrine axis, limbic system, or pain pathway. [115][116][117][118][119][120] Although acupuncture may modulate many of these cascades during treatment of patients with RS, specific effects of improved mucociliary clearance and airway surface liquid have also been demonstrated. 121 In a prospective randomized study, patients with nasal congestion and hypertrophic inferior turbinates were treated with acupuncture and found to have significant improvement on visual analog scale and in nasal airflow as measured by active anterior rhinomanometry. 122 Another study demonstrated a 60% reduction in sinus-related pain compared with only 30% in the placebo group. 123 Acupuncture also demonstrated beneficial results in the treatment of children with chronic maxillary sinusitis. 124 A research team in Norway conducted 2 different studies using a similar protocol, whereby 65 patients with chronic RS were randomized into 3 arms: (1) traditional Chinese acupuncture, (2) sham acupuncture, or (3) conventional medical management with antibiotics, oral steroids, nasal saline irrigation, and local decongestants. 125,126 In both studies, there was improvement in health-related quality-of-life symptom scores in all 3 groups, although there was no overall statistically significant difference among them.
Chinese herbal medicine
Xanthii fructus (Chinese herbal name: Cang Er Zi) and Flos magnoliae (Chinese herbal name: Xin Yi Hua) are commonly used herbs in traditional Chinese medicine to treat RS. Xanthii fructus is also known as Xanthium sibiricum because the former is simply the fruit of the latter. From a TCM perspective, Xanthii fructus disperses wind and dampness and treats thick, viscous nasal discharge and sinus-related headaches, whereas Flos magnoliae is used to expel wind-cold and treat nasal discharge, hyposmia, sinus congestion, and headaches. 127 In fact, these two herbs are often combined and are key components of the Chinese herbal formula Cang Er Zi Wan or Cang Er Zi San, which are the pill and powder preparations, respectively. 128 It is important to note that Chinese herbs should be used under the guidance of TCM theory. When Chinese herbs are not used according to TCM principles, severe adverse events can occur. One such example was the inappropriate use of Ephedra (Chinese name: Ma Huang) for weight loss, increased energy, and performance enhancement, when traditionally it is used only for upper respiratory infections for a short period of time, much as pseudoephedrine is used only briefly for symptoms related to upper respiratory infections. 129
Xanthii fructus (Chinese name: Cang Er Zi)
In a murine model, Xanthii fructus was found to exhibit (1) antiinflammatory effects through inhibiting interferon-gamma, TNF-alpha, and lipopolysaccharide-induced nitric oxide synthesis; (2) antiallergic effects through blocking mast cell-mediated histamine release; and (3) antioxidant effects through increased activities of catalase, superoxide dismutase, and glutathione peroxidase in the liver, with enhanced radical scavenging and reducing activity.
[130][131][132] Sesquiterpene lactone and xanthatin, specific components of Xanthium sibiricum, displayed significant antibacterial activity against methicillin-resistant Staphylococcus aureus while also inhibiting other bacteria, such as Staphylococcus epidermidis, Klebsiella pneumoniae, Bacillus cereus, Pseudomonas aeruginosa, and Salmonella typhi. 133 Zhao and colleagues 134,135 found that Xanthii fructus was able to modulate proinflammatory cytokines through inhibition of human mast cells and peripheral blood mononuclear cells and demonstrated that Shi-Bi-Lin, a modified version of the Chinese herbal formula Cang Er Zi San, ameliorated nasal symptoms, such as sneezing and nasal scratching, in a guinea pig model through reduced nasal thromboxane B2, eosinophil infiltration, and endothelial nitric oxide synthase activity. A double-blind RCT enrolling 126 patients with allergic rhinitis, with equal cohorts receiving Shi-Bi-Lin and placebo, found that Shi-Bi-Lin significantly improved symptoms with a sustained response for at least 2 weeks after treatment. 136 However, caution must be exercised when using either Xanthii fructus or Cang Er Zi Wan because they have been shown to lead to side effects such as muscle spasm, hepatotoxicity, and nephrotoxicity. 137,138

Flos magnoliae (Chinese herbal name: Xin Yi Hua)

The primary bioactive components of Flos magnoliae include terpenoids, lignans, neolignans, epimagnolin, and fargesin. 139 Neolignans have been found to have antiinflammatory effects through mechanisms of action different from those of steroids, while epimagnolin and fargesin decrease production of nitric oxide, a potent mediator of inflammation, through inhibition of inducible nitric oxide synthase expression. 140,141 Flos magnoliae also demonstrates antiallergy activity via inhibition of immediate-type hypersensitivity reactions through blocking mast cell degranulation.
142 As an essential oil, its main chemical ingredients have been identified as 1,8-cineole, sabinene, beta-pinene, alpha-pinene, and trans-caryophyllene. 76
Chinese herbal supplements (postoperative)
Bi Yuan Shu is a Chinese herbal liquid mixture comprised of an unknown number of herbs but is reported to include at least Magnolia liliflora, Xanthium strumarium, Astragalus membranaceus, Angelica dahurica, and Scutellaria baicalensis. A multicenter RCT divided 340 postoperative patients with chronic RS and nasal polyps who had undergone endoscopic sinus surgery into 2 groups, with both groups receiving antibiotics and topical steroids; the test group was also treated with Bi Yuan Shu (10 mL 3 times per day). 143 Adjunctive treatment with Bi Yuan Shu was found to have significantly higher response rates on days 7, 14, 30, and 60 for purulent nasal discharge, breathing difficulty, pain, hyposmia, and halitosis, with positive trends noted for fever and cough. 71 Another study assessing the efficacy of Chinese herbal medicine in the care of patients after undergoing endoscopic sinus surgery enrolled 97 patients into one of 3 treatment arms: (1) Tsang-Erh-San extract granules and Houttuynia extract powder, (2) oral amoxicillin, or (3) placebo. The study found no benefit of either treatment group over placebo. 144
MULTI-MODAL APPROACHES
A multicenter, nonrandomized study of 63 patients with acute RS compared conventional therapies (antibiotics, secretolytics, and sympathomimetics) with a combination of complementary therapies (Sinupret and the homeopathic remedy Cinnabaris 3X) and demonstrated similar effectiveness based on patients' self-assessment scores, physicians' scores, and the HCG-5 questionnaire. 145 However, the only validated outcome parameter was the HCG-5 quality-of-life instrument. Other limitations of this study included a small sample size and a lack of randomization and blinding.
Recently, a pilot study at the University of California, Los Angeles was conducted using integrative East-West medicine to treat patients with recalcitrant chronic RS. 146 Eleven patients underwent 8 weekly sessions of sequential acupuncture ( Table 1) and therapeutic acupressure style massage and had received education consisting of dietary modification, lifestyle changes, and self-acupressure. Four items on the SNOT-20 (need to blow nose, runny nose, reduced concentration, and frustrated/restless/irritable) and 3 of 8 domains on the SF-36 (role physical, vitality, and social functioning) showed a statistically significant difference, whereas trends of improvement were noted in most other elements on both quality-of-life instruments. Although the data looks promising, this study was also limited by its small size and lack of randomization and control group.
PATIENT SELF-TREATMENTS
Lifestyle modifications can also be conducive toward achieving optimal sinus health and function. These modifications include regular aerobic exercise, adequate hydration, steam inhalation, stress management, and good-quality sleep. Minimizing exposure to pollution, smoke, and environmental toxins as well as incorporating nutritional changes, such as consuming an antiinflammatory diet and avoiding dairy products, refined sugars, and processed foods, are important. 147 A regular spiritual practice, such as prayer, is also beneficial, along with anger management and attitudes of forgiveness, gratitude, and optimism. 148 Self-acupressure of certain acupoints can also be helpful to reduce sinus-related symptoms (see Table 1).
SUMMARY
As we gain a greater understanding of the complex pathogenesis of RS, what is becoming apparent is a shift in philosophic paradigm. Our previous reductionist models of disease and health are being replaced by holism, systems biology, and complex, nonlinear dynamics. [149][150][151] Holism is a central philosophic underpinning of integrative medicine and many CIM modalities, such as TCM.
We now see this paradigm shift in our approach to RS. No longer is the medical community looking at the diagnosis of RS as solely an infectious process but rather as complex and multifactorial. 152 For example, Palmer 41 elegantly describes this transition whereby "generations of doctors and scientists were taught to envision bacteria as single cells that float or swim through some fluid … in fact, rhinologists continue to foster this view"; however, "biofilms are not just single cells, but are structurally and metabolically heterogeneous multicellular communities." 153 Biofilms demonstrate cell-to-cell signaling, a phenomenon known as "quorum sensing." 154 Such is an example of complexity science and holism.
The therapeutic repertoire, likewise, has broadened significantly from antibiotics alone as the mainstay of treatment to the use of multiple therapies to act on different pathophysiological facets of RS. Integrative medicine provides an expanded approach and armamentarium to help patients with RS, whether acute, chronic, or recalcitrant.
Inter- and intrachromosomal asynchrony of cell division cycle events in root meristem cells of Allium cepa: possible connection with gradient of cyclin B-like proteins
Alternate treatments of Allium cepa root meristems with hydroxyurea (HU) and caffeine give rise to extremely large and highly elongated cells with atypical images of mitotic divisions, including internuclear asynchrony and an unknown type of interchromosomal asynchrony observed during metaphase-to-anaphase transition. Another type of asynchrony that cannot depend solely on the increased length of cells was observed following long-term incubation of roots with HU. This kind of treatment revealed both cell nuclei entering premature mitosis and, for the first time, an uncommon form of mitotic abnormality manifested in a gradual condensation of chromatin (spanning from interphase to prometaphase). Immunocytochemical study of polykaryotic cells using anti-β tubulin antibodies revealed severe perturbations in the microtubular organization of preprophase bands. Quantitative immunofluorescence measurements of the control cells indicate that the level of cyclin B-like proteins reaches the maximum at the G2 to metaphase transition and then becomes reduced during later stages of mitosis. After long-term incubation with low doses of HU, the amount of cyclin B-like proteins considerably increases, and a significant number of elongated cells show gradients of these proteins spread along successive regions of the perinuclear cytoplasm. It is suggested that there may be a direct link between the effects of HU-mediated deceleration of S- and G2-phases and an enhanced concentration of cyclin B-like proteins. In consequence, the activation of cyclin B-CDK complexes gives rise to an abnormal pattern of premature mitotic chromosome condensation with biphasic nuclear structures having one part of chromatin decondensed, and the other part condensed.
Introduction
Although most of the cells in Eukaryotes are mononucleate, a vast number of fungi, plants, and animals are known to produce bi-, poly-, or multinucleate forms (termed ''syncytia'') that appear occasionally at various stages of morphogenesis (Baluška et al. 2004 and references therein). Examples of these include, but are not limited to, plasmodia in the slime mold Physarum polycephalum, endosperm in seeds of flowering plants, embryos of Drosophila melanogaster at early stages of development, and giant osteoclasts, myoblasts, or placental trophoblasts in mammals, which all create syncytia by cell fusion events. As a general rule, the emergence of a multinucleate state is correlated with synchronous mitotic divisions or mitotic waves. A number of cases relating to plants include tip cells of the thallus in coenocytic algae (e.g. Cladophora), milk tubes (laticiferous tissue) in the seedlings of Euphorbia marginata, secretory tapetum characterized by nuclear morphologies changing with respect to developmental stage, and common areas of polarized cytoplasm in cereal endosperm alveoli (Kapraun 2005 and references therein). Synchronization of mitotic divisions is also observed in groups of symplasmically interconnected mononucleate cells that form antheridial filaments in male sex organs of Charophytes. Plugging of plasmodesmata in the central cell wall of the filament brings about discrimination of its two physiologically separate fragments, each of them setting different rhythm of the cell cycle (Kwiatkowska and Maszewski 1986; compare also Maszewski and van Bel 1996). Consistent with the very early cell fusion experiments carried out by Rao and Johnson (1970) and more recent results (Gladfelter et al. 
2006), the molecular basis of synchronous mitosis lies in the diffusion of biochemical factors (such as mitosis promoting factor, MPF, composed of a cyclin-dependent kinase and B-type cyclin; Criqui and Genschik 2002;Gotoh and Durante 2006), which spread evenly among neighboring nuclei in the common cytoplasm.
Despite a widespread relationship between the multinucleate state and intracellular coordination of nuclear processes, mitosis can also occur in a parasynchronous or asynchronous manner, which allows cell nuclei residing in the same cytoplasm to behave independently. Such type of regulation, frequently observed in filamentous fungi (e.g. Neurospora crassa and Ashbya gossypii), might be important for the proper response to nutrients and extracellular signals, more economic usage of energy and metabolites needed for replication and/or mitosis, and prevention from abrupt changes in the volumetric ratios of nuclear and cytoplasmic compartments of the cell. Accordingly, autonomous behavior of cell nuclei and their asynchronous divisions can be thought to result from spatial fluctuations in the concentration of the key cell cycle regulators (activators or inhibitors), physical asymmetries in the distribution of nuclear pore complexes, partial separation via endomembrane systems, or creation of discrete zones that promote or restrain the progression toward mitosis [reviewed by Gladfelter et al. (2006)].
In contrast to diversification of individual nuclear cycle times, generally considered in the context of an uneven concentration of macromolecular regulatory elements distributed along distant locations in the cytoplasm, most transformations confined to the interior of the nucleus are carried out in complete synchrony. Although epigenetic inheritance responsible for DNA methylation and chromatin modifications gives rise to asynchronous replication patterns of euchromatic and heterochromatic segments of chromosomes (overlapping with gene-rich and gene-poor DNA sequences, respectively), all processes required to preserve genome architecture and to promote successful progression through the cell division cycle occur simultaneously. Among many, these include licensing of DNA replication origins at the end of mitosis (Lutzmann et al. 2006), initiation events at the start of S phase (Kearsey and Labib 1998), early and late mitotic processes involved in condensation of chromosomes at prophase, and their abrupt separation and segregation during anaphase (e.g. Hirano 2000).
In plants, axially elongated cells may result from administration of chemical agents that, respectively, either strongly delay progression through DNA replication [but allow cell growth to continue (Navarrete et al. 1983; Giménez-Abián et al. 2001)] or affect formation of the phragmoplast, a highly specific cytokinetic apparatus made up of microtubules and microfilaments (Samuels and Staehelin 1996; Nishihama and Machida 2001). In order to assess the extent to which particular cell nuclei located in the same cytoplasm may display mitotic autonomy, two experimental systems effective in producing extremely large cells have been used. The first method, based on the modified procedure advanced by Giménez-Abián et al. (2001), relied on alternate incubations of primary root meristems of Allium cepa with hydroxyurea (HU, an inhibitor of ribonucleotide reductase) and caffeine (CF, a well-known inhibitor of cytokinesis in plants). Some of the abnormally long polykaryotic cells, with two or more nuclei fused together, gave rise to atypical images of mitotic divisions, including asynchronous metaphase-to-anaphase transitions. The second method, based on prolonged incubations with HU (Barlow 1969), revealed an abnormal pattern of chromosome condensation, characterized by a gradient of chromatin states extending along successive regions of a single cell nucleus from interphase to middle stages of mitosis (e.g. from G2-phase to prometaphase). Consequently, our data show for the first time that under certain conditions not only the internuclear but also the intranuclear and intrachromosomal course of mitotic processes may proceed asynchronously, indicating that a number of cell cycle checkpoint mechanisms must have been entirely overridden or severely constrained.
In this report we also provide evidence that prolonged HU-mediated replication stress may account for an increased level of cyclin B-like proteins, and most probably, its gradient formed along the cell's axis is responsible for creating biphasic nuclear structures, having both interphase and mitotic domains of chromatin.
Plant material
Seeds of Allium cepa L. (Horticulture Farm in Lubiczów) were sown on moist blotting paper and germinated at room temperature in the dark. Four days after imbibition, seedlings with primary roots ranging from 1.5 to 2 cm were selected and placed in Petri dishes (Ø 6 cm) filled either with 10 ml of distilled water (control samples) or solutions applied to induce multinucleate or asynchronous cells.
To obtain multinucleate cells, seedlings were treated according to the protocol of Giménez-Abián et al. (2001) using alternate incubations with 0.75 mM HU and 5 mM CF, except that an additional third treatment with CF (3 h) and post-incubation with water (12 h) were introduced before fixation. Biphasic cells showing gradual changes of chromatin condensation (intrachromosomal asynchrony) were obtained by incubation of seedlings with 0.75 mM HU (12 h), followed by continuous treatment with 0.5 mM HU (total time ranging from 24 up to 120 h). During incubation with 0.5 mM HU solutions (changed every 24 h), roots were permanently aerated by gentle rotation in a water-bath shaker (100 rpm, in the dark, 23°C).
Feulgen staining and cytophotometry
Primary roots of A. cepa were fixed in cold Carnoy's mixture (absolute ethanol and glacial acetic acid; 3:1, v/v) for 1 h, washed several times with ethanol, rehydrated, hydrolyzed in 4 M HCl (1 h), and stained with Schiff's reagent (pararosaniline) according to the standard method (e.g. Polit et al. 2002). After rinsing in SO2-water (three times) and distilled water, 1.5-mm-long apical segments were cut off, placed in a drop of 45% acetic acid, and squashed onto Super-Frost (Menzel-Gläser, Braunschweig, Germany) microscope slides. Following freezing with dry ice, coverslips were removed, and the dehydrated dry slides were embedded in Canada balsam before examination. Nuclear DNA content was evaluated by means of microdensitometry using a Jenamed 2 microscope (Carl Zeiss, Jena, Germany) with the computer-aided Cytophotometer v1.2 (Forel, Lodz, Poland) for image analysis. The extinction of Feulgen-stained cell nuclei was measured at 565 nm and calibrated in arbitrary units, taking the values recorded for half-telophases and prophases from control plants as the reference standards of the 2C [33.55 pg; according to Van't Hof (1965); Bennett et al. (2000)] and 4C DNA levels, respectively.
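The calibration described above — taking the mean extinction of control half-telophase nuclei as the 2C standard of 33.55 pg — amounts to a simple linear rescaling of the arbitrary units. A minimal sketch of that arithmetic (function names and the example readings are illustrative, not taken from the paper):

```python
def calibrate_dna_content(extinction_au, mean_2c_au, pg_per_2c=33.55):
    """Convert a Feulgen extinction reading (arbitrary units) to DNA
    content in picograms, using the mean reading of control 2C nuclei
    (half-telophases) as the reference standard."""
    return extinction_au / mean_2c_au * pg_per_2c


def c_value(extinction_au, mean_2c_au):
    """Express a nucleus as a C-value (2C = unreplicated diploid genome)."""
    return 2.0 * extinction_au / mean_2c_au


# Hypothetical example: if control half-telophases average 100 AU,
# a nucleus measured at 190 AU corresponds to 190/100 * 33.55 pg,
# i.e. a C-value of 3.8 (DNA almost completely replicated).
print(calibrate_dna_content(190, 100))  # 63.745 pg
print(c_value(190, 100))                # 3.8
```

The same rescaling underlies the 30.4-34.9 pg range reported for PCC cells in the Results, which falls just below the 4C level of 67.1 pg.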
Western blotting
Root meristem cells were lysed using the P-PER Plant Protein Extraction Kit supplemented with Protease Inhibitor Cocktail according to the vendor's instructions. The samples were cleared by centrifugation, and total protein extracts, fractionated on a 4-12% Bis Tris/2-(4-morpholino)-ethanesulfonic acid SDS-NuPAGE Novex gel, were blotted onto a polyvinylidene fluoride membrane (0.2-µm pore size). Cyclin B-like proteins were detected with the rabbit polyclonal anti-cyclin B1 IgG fraction diluted to 1:300 using the Chromogenic Western Blot Immunodetection Kit.
Immunocytochemical staining of microtubules (β-tubulin) and cyclin B-like mitotic proteins (cyclin B)
Apical parts of roots excised from the control plants, from seedlings treated using alternate incubations with 0.75 mM HU and 5 mM CF, and from plants exposed to prolonged HU treatment were fixed for 45 min (20°C) in PBS-buffered 3.7 (β-tubulin) or 4.0% (cyclin B) paraformaldehyde solution. Then root tips were washed three times with PBS, placed in a citric acid-buffered digestion solution (pH 5.0) containing 2.5% pectinase, 2.5% cellulase, and 2.5% pectolyase, and incubated at 37°C for 15 min. After the digestion solution was removed, root tips were washed as before, rinsed with distilled water, and squashed onto Super Frost Plus glass slides (Menzel-Gläser, Braunschweig, Germany). When air-dried at room temperature, the slides were pretreated with PBS-buffered 8% BSA and 0.1% Triton X-100 for 50 min (20°C) and incubated with either mouse monoclonal anti-β-tubulin antibody or rabbit anti-cyclin B1 IgG fraction, dissolved in PBS containing 1% BSA at a dilution of 1:750 (β-tubulin) or 1:50 (cyclin B), respectively. Following an overnight incubation in a humidified atmosphere (4°C), slides were washed three times with PBS and incubated for 1.5 h (18°C) with secondary goat anti-mouse FITC antibody in PBS (1:500; v/v, for β-tubulin) or goat anti-rabbit IgG [whole molecule, F(ab′)2 fragment] FITC antibody (1:500; v/v, for cyclin B). In some experimental series, cell nuclei were counterstained either with ethidium bromide (0.4 µg/ml) or propidium iodide (0.3 µg/ml). Following washing with PBS, slides were air dried and embedded in PBS:glycerol mixture (9:1) with 2.3% DABCO. Observations were made using an Eclipse E-600 epifluorescence microscope (Nikon, Japan) equipped with a B2 filter (blue light; λ = 465-496 nm) for FITC and a G2 filter (green light; λ = 540/25 nm) for ethidium bromide- or propidium iodide-stained cell nuclei. All images were recorded at exactly the same time of integration using a DS-Fi1 CCD camera (Nikon, Japan).
Observations and analyses
Feulgen-stained slides were examined, and the mitotic cells were counted using YS100 Biological Microscope (Nikon). To evaluate mean values, 4,500-5,000 cells taken from 8-10 roots were analyzed for each experimental series. Some root meristems fixed with Carnoy's mixture were hydrolyzed with 4 M HCl (1 h), placed in a drop of 45% acetic acid, squashed onto microscope slides, and unstained cells were observed using Optiphot-2 microscope (Nikon) equipped with phase contrast optics and DXM 1200 CCD camera (Nikon, Japan). Quantitative measurements of cyclin B-like protein immunofluorescence were made after converting color images into gray scale and expressed in arbitrary units as mean pixel value (pv) spanning the range from 0 (dark) to 255 (white).
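The grayscale conversion and mean-pixel-value measurement described above can be sketched as follows. This is a minimal NumPy illustration; the luminance weights are the common ITU-R BT.601 convention and are an assumption, since the paper does not state how color images were converted:

```python
import numpy as np


def to_grayscale(rgb):
    """Convert an (H, W, 3) RGB image to grayscale using standard
    luminance weights (assumed; not specified in the paper)."""
    weights = np.array([0.299, 0.587, 0.114])
    return rgb[..., :3] @ weights


def mean_pixel_value(gray, mask=None):
    """Mean intensity in arbitrary units on the 0 (dark) to 255 (white)
    scale, optionally restricted to a region of interest such as a
    single nucleus or its perinuclear cytoplasm."""
    if mask is not None:
        gray = gray[mask]
    return float(gray.mean())
```

Because images were recorded at identical integration times, mean pixel values of different cells are directly comparable in this arbitrary-unit scale.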
All experiments performed using immunofluorescence methods were repeated at least twice, others several times, and the most representative results were selected for reporting here.
Results and discussion
Repeated incubations of primary root meristems of Allium cepa with caffeine (to inhibit cell plate formation) and then with hydroxyurea (to stop DNA replication and to increase the length of the cell) have been shown to be effective in producing polykaryotic cells that advance the metaphase-anaphase breakpoint synchronously, despite previous differences in chromosome condensation (Giménez-Abián et al. 2001). Our results indicate that using a modified version of this method (with extended periods of HU and CF treatment) two dominant types of cells can be formed: one, comprising various numbers of nuclei (1, 2, 4, 6, and sporadically 8; each nucleus containing up to 4C DNA content; Fig. 1a), and the other, mononuclear cells, with increased C-values or chromosome numbers (Fig. 1b, c). Quantitative analysis of the first cell type, performed 72 h after the start of alternating HU-CF incubations, demonstrates a correlation between the genome copy number of the cell (or total ploidy level) and the incidence of nuclear asynchrony (Fig. 2). The higher the number of nuclei (and the longer the cell), the greater the frequency of evident differences between the various states of chromosome condensation or between the discrete stages at which individual nuclei can be observed passing through mitosis. According to the data presented by Giménez-Abián et al. (2001), the advancement of chromosome condensation has often been noticed within the outlying parts of giant multinucleate cells (in more than 90% of cases; Fig. 1a), indicating that acceleration of mitosis is due mainly to the relatively high amount of surrounding cytoplasm, compared with the perinuclear cytoplasm located in the middle part of the cell. In the second type of cells, derived from about 15% of all polykaryons, the nuclei compressed to the center of the cell became fused together.
A series of observations demonstrated that some of the resultant cells may enter nuclear divisions asynchronously, displaying a stepwise progression of successive phases of mitosis. In such cases, transitions from prometaphase to metaphase (Fig. 1b), or from metaphase to anaphase (Fig. 1c), seemed to proceed either gradually, or with an only slightly discernible intermediate phase-to-phase transition zone between two adjoined tetraploid groups of chromosomes.
The changing arrangements of microtubular arrays establish highly specific landmarks for the different stages of mitosis (Dhonukshe and Gadella 2003). Our immunocytochemical studies using anti-β-tubulin antibodies (Fig. 3) revealed severe perturbations of preprophase bands (PPBs) formed in polykaryotic cells generated by the recurring HU-CF treatments. As compared with the control late-G2 cells, which displayed typical ring-shaped microtubular arrays (Fig. 3a), most multinucleate cells demonstrated either irregular bands of microtubules stretching beneath the long cell wall (Fig. 3b), or sinusoidal microtubular ribbons penetrating cortical layers of the cytoplasm (Fig. 3c). During prometaphase and metaphase (Fig. 3d), elongated cells with more or less evident interchromosomal asynchrony displayed a few regions varying in the intensity of β-tubulin immunofluorescence. At later stages (during anaphase), diffuse spindles appeared, a number of them containing several distinct areas of microtubular concentrations (Fig. 3e, arrows; compare with the control anaphase cell in Fig. 3a).
Long-term incubation with low doses of hydroxyurea (ranging from 0.75 to 0.50 mM) has proved to be another type of treatment effective in producing large and elongated cells in primary root meristems of A. cepa (compare also Barlow 1969). Surprisingly, this kind of influence has revealed the most significant changes and severe perturbations to the process of mitotic chromatin condensation, which cannot be accounted for solely by the increased amount of cytoplasm in the polar regions of the cell. In spite of the fact that root cells treated up to 72 h with 0.75/0.50 mM HU continuously accumulated in late S- and G2-phases (Fig. 4a, b), within the whole period of treatment a relatively small number of mitotic divisions was constantly observed (with a significant increase 72 h after the start of incubation; Fig. 4d), most of them indicating apparent symptoms of premature chromosome condensation (PCC; Fig. 5). Induction of PCC, easily recognized in meta-, ana-, and telophase (Fig. 5d-h), was restricted merely to a relatively small subpopulation of cells having their DNA almost completely replicated (Fig. 4b).
Accordingly, while a considerable drop in the frequency of M-phase cells reflects the secondary effect of the block imposed upon DNA synthesis, a minor fraction of cells entering PCC has, on the contrary, to be viewed as a consequence of mechanisms which permit these cells to override the DNA stress-response pathway (also referred to as the S-phase checkpoint; Osborn et al. 2002) and allow them to proceed toward premature mitosis regardless of the incomplete replication of nuclear DNA (Kohn et al. 2002; Rybaczek and Maszewski 2007a, b). Together with cells showing typical features of an unscheduled mitotic division, such as chromosomal gaps and breaks, lagging chromatids, acentric fragments of chromosomes, and micronuclei (Fig. 5a-h), about 19% of PCC cells revealed an abnormal pattern of chromosome condensation, characterized by a gradient of chromatin states along successive regions of the nucleus (Fig. 5i-n). [Fig. 5 caption, continued: early (f) and late telophase (g), and post-telophase (h). i-n Intrachromosomal asynchrony in cells showing evident gradients of chromatin condensation: progressive condensation of chromatin observed during transitions from interphase to early prophase (i, j), from interphase to prophase/prometaphase (k, l), and from early prophase to prometaphase (m, n). Darkly stained chromocenters corresponding to telomeric heterochromatin indicated by arrows (k).] The total DNA content measured by quantitative microdensitometry ranged from 30.4 to 34.9 pg, indicating almost complete replication of the diploid genome. Evident alterations in chromatin architecture could be observed from interphase to early stages of prophase (Fig. 5i, j), and from early to late stages of prophase (Fig. 5m, n).
Some of the cells, however, displayed a much larger array of chromatin morphologies, spanning from a highly decondensed form in interphase up to strong condensation of individual chromosomes, comparable to that observed during late prophase and prometaphase in the control roots (Fig. 5k, l). Although cells with evident symptoms of PCC revealed mitotic spindles with regular arrays of microtubules (Fig. 3f), the most common site of PPBs in cells showing progressive gradients of chromatin condensation correlated well with a discrete area of chromatin positioned close to the transitory region between the interphase states of decondensed nucleoplasm and the chromatin structures characteristic of the initial stages of mitotic compaction at early prophase (Fig. 3f). In consequence, some PPBs became translocated from the equatorial plane of the cell into new asymmetric positions, demarcating two unequal parts of the nucleus.
The mechanisms that define the positioning and changing dynamics of microtubular arrays (PPB, spindle, phragmoplast, and cortical microtubules) still remain unknown. According to a number of approaches adopted to demonstrate links between the regulatory factors of the cell cycle (such as MPF) and the specific microtubular systems in plants (Colasanti et al. 1993; Mineyuki et al. 1996; Hemsley et al. 2001; Van Damme 2009), it might be assumed that the exact location of the PPB in HU-treated cells of Allium corresponds with a definite spatio-temporal threshold concentration of the cyclin-dependent protein kinases (CDKs). On the contrary, however, another set of experiments, applying microinjection of cdc2, points to a destructive rather than constructive function of CDKs on PPB formation (Hush et al. 1996).
Different types of cyclins are the cell cycle stage-specific activators of Cdks, and their mutual interactions have long been recognized as indispensable for the ability of cells to progress from one phase to the next (Criqui et al. 2001; Francis 2007; Boruc et al. 2010). Apart from a common N-terminal motif called the ''cyclin box'' (a sequence of 100 amino acids), mitotic cyclins contain the ''destruction box'' susceptible to ubiquitination and essential for the rapid turnover of the protein (King et al. 1996). To investigate a possible relationship between the various types of nuclear asynchrony and an uneven intracellular distribution of one of the most critical factors engaged in the G2-to-M transition, we detected cyclin B-like mitotic proteins using a rabbit polyclonal anti-cyclin B1 antibody (Figs. 6, 7); its specificity was demonstrated using Western blot analysis of root meristem lysates (Fig. 8). As shown previously, plant cyclins share a great deal of homology with the amino acid sequences of the corresponding animal proteins (particularly in the conserved consensus motifs of the cyclin box; Renaudin et al. 1994), and reveal both common epitopes and evident functional similarity between the widely divergent species of Eukaryotes (Chaudhuri and Ghosh 1997; Sen and Ghosh 1998). In our immunoblotting experiments with samples from both control and HU-treated root meristem cell lysates, the anti-cyclin B1 antibody detected only one strong band at the position equivalent to a molecular weight of ~54 kDa (compare with Chaudhuri and Ghosh 1997). Long polykaryotic cells observed in onion root meristems after the alternate HU/CF treatments displayed predominantly cytoplasmic localization of cyclin B-like proteins, concentrated mainly at the peripheries of interphase nuclei. Similar ring-like localization of RFP- and GFP-tagged cyclin B1;2, corresponding to either the nuclear envelope during interphase or the prophase spindle, has been reported by Boruc et al.
(2010) in both tobacco (BY2 cell line) and Arabidopsis. The ring-shaped immunofluorescence increased considerably around cell nuclei in late G2 and early prophase, in comparison with the ''delayed'' nuclei located in the middle part of the cell (Fig. 6). Furthermore, the advancement of chromosome condensation in nuclei placed within the outlying parts of giant cells has often been correlated with at least partial translocation of cyclin B-like proteins into the vicinity of interchromatin domains of the nucleoplasm (Fig. 6a, a″, b, b″). However, in contrast to CYCB1;2-GFP localizations observed in Arabidopsis (Boruc et al. 2010), or cyclin B1 association with HeLa S3 TK– human cells (Pines and Hunter 1991), no evident immunofluorescence signals of cyclin B-like proteins could be found in metaphase chromosomes of Allium cepa, probably due to the low capacity of the FITC-labeled antibody to penetrate the highly condensed structures of mitotic chromatin.
Another type of immunolabeling, with respect to both the intensity of fluorescence and localization of cyclin B-like proteins, has been observed in root meristem cells of A. cepa following long-term incubation with 0.75/0.50 mM HU (Fig. 7). Direct microscopic examination (Fig. 7a, b) and quantitative data derived from the total immunofluorescence measurements in the control cells (Fig. 9) clearly show that the level of cyclin B-like proteins reaches its maximum at the transition from late G2 to metaphase and then becomes reduced during later stages of mitosis. In comparison to the untreated cells, the amount of immunofluorescence increased by more than 65 and 63%, respectively, among the G1 and G2 cells in roots incubated with low doses of hydroxyurea (Figs. 7c, 9). While the majority of HU-treated cells (after the third incubation) revealed a typical, ring-shaped distribution of immunofluorescence, a vast number of large, elongated cells (some of them with an abnormal pattern of chromosome condensation) displayed evident gradients of cyclin B-like proteins spread along successive regions of the nucleus (Fig. 7d, e). An example given in Fig. 7d, which shows quantitative measurements of fluorescence scanned throughout successive lines perpendicular to the late-G2 cell's axis (horizontal line), points out the nearly 50 and 33% gradual increase in mean intensity of immunolabeling of the nucleoplasm (shorter dotted line) and cytoplasm (longer dotted line), respectively, at a distance of about 50 µm from both ends of the nucleus. Although most of such cases have been observed prior to the onset of mitosis (Fig. 7d), an uneven density of immunofluorescent labeling has also been observed around cell nuclei showing an abnormal profile of chromatin condensation, combining attributes of interphase and prophase or displaying transitions from early to later stages of mitotic condensation (Fig. 7e).
Based on the obtained results and a number of functional parallels concerning molecular factors by which various types of cells acquire the ability to progress throughout successive stages of mitotic division (including cell cycle-specific transcription, activation, localization, and degradation of these factors), cyclin B-like proteins seem to be a good candidate to account for the abnormal course of chromosome condensation observed in seedlings of A. cepa. Although there is considerable controversy regarding the effects of HU on the intracellular level of cyclins (e.g. Florensa et al. 2003;Rodríguez-Bravo et al. 2007), an increased immunofluorescence observed in onion root tip cells (both in interphase and mitosis) fits well with the data showing a pronounced accumulation of all or most types of cyclins, including type B-cyclin, in response to inhibitors of DNA replication (e.g. in SK-N-MC neuroepithelioma (Fung et al. 2002), HeLa, and Chinese hamster ovary (CHO) cells (Kung et al. 1993;Gao and Richardson 2001)). While it is far too early to formulate any simplified hypothesis concerning the role of specific cyclin and of its specific distribution pattern in creating mononuclear heterophasic cells, we also seem to have some theoretical grounds that allow us to link the effects of HU-mediated deceleration of S- and G2-phases with an enhanced concentration of cyclin B-like proteins and, consequently, with the activation of cyclin B-CDK complexes and the premature induction of mitotic chromosome condensation.

Fig. 6 Localization of cyclin B-like proteins in long polykaryotic root meristem cells of A. cepa observed following 50-h alternate HU/CF treatments (a, b). Note an increased ring-shaped immunofluorescence around late G2 (a) and prophase (b) nuclei located at the polar regions of the cytoplasm. Propidium iodide-stained nuclear DNA and merged images of the same cells are shown in a′, b′ and a″, b″, respectively. Bar 20 µm
A vast number of secondary effects associated with HU-mediated inhibition of ribonucleotide reductase (which plays an essential role in the control of DNA replication and repair systems; Chabes and Thelander 2000) and the resultant block imposed upon replication forks involve accumulation of hemireplicated intermediates with long single-stranded ssDNA, DNA double strand breaks (DSBs) (Merrill and Holm 1999;Rybaczek and Maszewski 2007a, b), Holliday junctions (Sogo et al. 2002), and other specific DNA damage caused by the formation of nitric oxide and hydrogen peroxide (Sakano et al. 2001). To respond to these potentially life-threatening insults, cells have evolved a DNA-replication stress-response pathway, also referred to as the DNA-replication checkpoint or S-phase checkpoint. It may be assumed, however, that the low concentration of HU applied in our experiments slows down cell cycle progression by affecting both replication forks and postreplication DNA repair processes, without any significant effect on transcription, protein synthesis, and cell growth. Thus, even if DNA replication checkpoint or S-M checkpoint mechanisms were set in motion by the S-phase control system, the prolonged accumulation of cyclin B-like proteins can give the cells enough time to reach the threshold level of active cyclin B-type/CDK-B complexes needed for mitotic stimulation. The various types of asynchrony observed in root meristem cells of Allium cepa can be broadly divided into the following three categories: (a) internuclear asynchrony (found in large polykaryotic cells), (b) interchromosomal asynchrony manifested by an uneven progression of chromosomes throughout successive mitotic stages (resolved after nuclear fusion events), and (c) intrachromosomal asynchrony (creating a gradient of increasing chromatin condensation after prolonged treatment with HU).
All these events can be clearly linked to an extended period of experimental treatment and, accordingly, may be seen as a response to a diversified set of intracellular conditions which arise in and spread along the cytoplasm of extremely long cells. It is immediately obvious, however, that some other variable conditions are needed to convincingly explain an intrachromosomal asynchrony induced in root meristem cells after the prolonged incubation with hydroxyurea. Irrespective of the fact that no satisfactory solution may be proposed to elucidate such an unusual mode of mitotic chromosome condensation, any causal inference (although based on a speculative evaluation) must inevitably meet the challenge of two problems: (1) why and how do the cells escape safeguard mechanisms designed to block progression toward mitosis under conditions that generate stalled replication forks and DNA breakage, and (2) how do the cells rearrange their intrinsic pattern of successive cell cycle events to create, at one time, an unusual spatio-temporal continuum of nuclear modifications. To resolve the first problem, there is a need to accept functional predominance of mitotic activators (such as A-and B-type CDKs bound to A, B, and D cyclins; Francis 2007) over those factors (such as ATM and ATR kinases; Rybaczek and Maszewski 2007a, b), which efficiently promote replication checkpoint control mechanisms to prevent cells from entering nuclear division.
Perhaps there is another morphological aspect worth relating in the context of the second question. The majority of mitotic chromosomes emerging in cells with biphasic chromatin structure (having one part of it decondensed and the other part condensed) display predominantly a loop-like morphology, while the remaining half of the nucleus exhibits darkly stained chromocenters (Fig. 5k). Such an appearance may indicate that chromosomes of the HU-treated root meristem cells start to condense in the vicinity of pericentromeric (proximal) regions, with their distal parts comprising telomeres retarded at earlier stages of the cell cycle. Notably, chromatin within the interphase nucleus of Allium cepa is polarized, effectively isolating centromeres from the distal telomeric regions, which include major C-bands of late-replicating heterochromatin (Fussell 1975;Fujishige and Taniguchi 1998). Moreover, most recent experiments indicate that the mitotic portions of biphasic nuclei reveal characteristic pericentromeric immunofluorescence of phosphorylated H3 histones (Ser10) (Żabka, unpublished). Progressing upwards through the structural hierarchy of condensing chromatin along the chromosomal axis raises a host of new questions about relations between form and function rather than providing useful explanations. Whatever the mechanism might prove to be, a wide range of structural changes extending from interphase to mitosis implicates continuing changes in nucleosomal and higher-order chromatin folding en route toward the G2/M transition [combined with underlying gradients of post-translational modifications of the core and linker histones; reviewed by Jasencakova et al. (2001)], a gradual transition from nuclear matrix to chromosomal scaffold, and the creation of discrete zones where the preprophase band is formed at late G2 and the nuclear envelope breaks down in early prophase (John 1996 and references therein).
Even so, the above types of asynchrony may also depend on some plant-specific regulatory systems of cell cycle progression, including specific mechanisms of plant histogenesis and phytohormonal signaling which, operating in the context of root growth and development, may form dissimilar conditions for activation of the key stimulators of mitosis.
Study on Optimization of automatic train operation based on Grey genetic algorithm
The speed controller is the kernel module of the Automatic Train Operation (ATO) system and a core technology of the intelligent high-speed train; research on an intelligent ATO speed controller is therefore of great significance for automatic high-speed train operation. Through analysis of the principle and function of the CTCS-3 system, a scheme for adding an ATO speed controller to the CTCS-3 system is proposed, and the interface between the ATO speed controller and the ground equipment is designed to realize information interaction. The ATO speed controller model for high-speed trains is built on grey system theory and used for prediction and decision making: in the grey prediction module, a metabolic GM(1,1) model is established, and the optimal strategy is generated by multi-objective intelligent weighted grey target decision making, yielding a reasonable control strategy for automatic train operation. The control strategy is then further optimized using a genetic algorithm. Simulation software was designed for testing; results on actual line data show that the method achieves automatic high-speed train operation under the CTCS-3+ATO system and improves train operation efficiency, the energy-saving index, and the other performance indexes of the train.
Introduction
In recent years, researchers in various countries have studied the speed controller of high-speed trains with a variety of methods and achieved notable results, such as the "predictive fuzzy control" automatic train driving system developed in Japan. [1] Researchers in Singapore applied a genetic algorithm to the simulation of automatic train operation, [2] generating the most suitable coasting points before departure according to various situations so as to achieve the lowest energy consumption, and then proposed a fuzzy multi-objective automatic train operation system. At the Institute of Automation of the Chinese Academy of Sciences, a new type of associative memory neural network was applied to automatic train stopping; [3] this technique realizes long-range predictive control based on an associative memory neural network in a rolling optimization mode. The Railway Institute of Science and Technology put forward a method based on direct fuzzy neural control applied to automatic train operation control. [4] Tongji University used a fuzzy-control BP network to realize inter-station operation control, and a fuzzy neural network based on a genetic algorithm to realize train positioning and stopping control. [5] In this paper, the speed controller unit of the ATO system is added to the CTCS-3 level train control system, and the speed controller model is established by a grey genetic algorithm. Grey system theory is used to design the model prediction module, correction module and decision module that generate the control strategy of train operation, and a genetic algorithm is designed to optimize the train operation target curve, readjusting the whole-interval control sequence to realize optimized control of the train.
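Before turning to the decision module, the metabolic GM(1,1) model that drives the grey prediction module can be sketched as follows. This is an illustrative Python rendering of the textbook GM(1,1) equations with a sliding "metabolic" window; the function names, the window handling, and the closed-form least-squares fit are our assumptions, not the authors' implementation.

```python
import math

def gm11_fit(x0):
    """Fit a GM(1,1) grey model to a positive data series x0.
    Returns the development coefficient a and grey input b of the
    whitened equation dx1/dt + a*x1 = b, estimated by least squares
    on the background values z(k) = 0.5*(x1(k) + x1(k-1))."""
    n = len(x0)
    x1 = [sum(x0[:k + 1]) for k in range(n)]              # accumulated (AGO) series
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]  # background values
    y = x0[1:]
    # Closed-form ordinary least squares for y = -a*z + b.
    zbar, ybar = sum(z) / len(z), sum(y) / len(y)
    slope = sum((zi - zbar) * (yi - ybar) for zi, yi in zip(z, y)) \
        / sum((zi - zbar) ** 2 for zi in z)
    return -slope, ybar - slope * zbar

def gm11_predict(x0, a, b, steps=1):
    """Predict the next `steps` values of the original series from the
    time-response function of the accumulated series."""
    n = len(x0)
    x1_hat = lambda k: (x0[0] - b / a) * math.exp(-a * k) + b / a
    return [x1_hat(n + s) - x1_hat(n + s - 1) for s in range(steps)]

def metabolic_gm11(x0, window, steps):
    """Metabolic GM(1,1): refit on a sliding window, feeding each new
    prediction in and dropping the oldest observation."""
    series, out = list(x0[-window:]), []
    for _ in range(steps):
        a, b = gm11_fit(series)
        nxt = gm11_predict(series, a, b)[0]
        out.append(nxt)
        series = series[1:] + [nxt]  # metabolism: drop oldest, append newest
    return out
```

For a near-exponential input the fitted model tracks the growth closely; in the controller, the series would be recent measurements of train state, and each prediction would feed the grey decision module.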
Grey Decision-Making Module
The grey decision is made up of events, countermeasures, goals and effects.
The five sub-processes of high-speed train operation are used as the event set of the grey decision. There are 5 events:

A = {starting acceleration, steady acceleration, coasting, speed-regulating braking, parking braking}

Taking a high-speed train with ten traction notches and seven brake notches as an example, the traction, coasting and braking working conditions together form the countermeasure set of the ATO speed controller. There are 18 countermeasures:

B = {10th-notch traction, 9th-notch traction, …, 1st-notch traction, coasting, 1st-notch brake, …, 7th-notch brake}

The four targets of train operation constitute the target set:

K = {punctuality, stopping precision, energy consumption, comfort}

The Cartesian product of the event set and the countermeasure set forms the situation set:

S = {(starting acceleration, 10th-notch traction), (starting acceleration, 9th-notch traction), …, (parking braking, 7th-notch brake)}

The grey decision for the high-speed train takes these four targets as the decision criteria to find the best countermeasure for a given event. For each target the best countermeasure may differ, so the effects are quantified, with the best effect set to 0, and the best situation is found using grey target decision making.
Multi-objective grey decision matrix
In the multi-objective decision-making problem, there are n alternative countermeasures forming the situation set S and m evaluation indexes forming the target set K. The effect value of situation s_i under target k is

u_ik = u(s_i, k), i = 1, 2, …, n; k = 1, 2, …, m. (2)

Generally speaking, the target set K can be divided into two types, the benefit type and the cost type; for a benefit-type target the grey number covering the upper bound is the best one, while for a cost-type target the lower bound is best.
Multi-objective intelligent weighted grey target decision-making model
① The situation set S, composed of the multi-objective decision plans, and the target set K, composed of the evaluation indexes, are determined, and the grey number decision matrix of situation set S against target set K is obtained:

X = (u_ik), i = 1, 2, …, n; k = 1, 2, …, m,

where X is the grey number decision matrix.
② The grey number decision matrix is normalized over the situation set S:

R = (r_ik), i = 1, 2, …, n; k = 1, 2, …, m,

where R is the normalized grey number decision matrix and r_ik is the effect value u_ik standardized by the critical value of its target, so that the effect values of all targets become dimensionless and comparable.
③ The weight set of the indexes is determined:

W = {w_1, w_2, …, w_m},

where W is the weight set, w_1 to w_m are the weights corresponding to each index of train operation, and the weights sum to 1.
④ The evaluation value of the i-th plan is calculated:

V_i = [v_i^L, v_i^U] = Σ_k w_k r_ik, i = 1, 2, …, n,

where v_i^L and v_i^U are the lower and upper critical values of the interval evaluation value; the situation with the best evaluation value yields the optimal countermeasure.
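Steps ① to ④ can be illustrated with crisp (point-valued) effect values; a full grey-number implementation would carry interval bounds through every step. The matrix values, weights, and function names below are illustrative assumptions, not data from the paper.

```python
def normalize(matrix, is_benefit):
    """Column-wise normalization of an n x m effect-value matrix:
    benefit-type targets are divided by the column maximum, cost-type
    targets use column-minimum / value, so every entry lies in (0, 1]
    and 1 marks the best attainable effect (the bull's-eye)."""
    cols = list(zip(*matrix))
    return [[v / max(cols[j]) if is_benefit[j] else min(cols[j]) / v
             for j, v in enumerate(row)]
            for row in matrix]

def evaluate(matrix, weights, is_benefit):
    """Weighted evaluation value of every countermeasure, and the index
    of the best one."""
    r = normalize(matrix, is_benefit)
    scores = [sum(w * v for w, v in zip(weights, row)) for row in r]
    return scores, scores.index(max(scores))

# Three candidate countermeasures scored against four targets:
# punctuality and stopping precision (benefit-type, higher is better),
# energy use (cost-type, lower is better), comfort (benefit-type).
matrix = [[0.90, 0.8, 100, 0.7],
          [0.95, 0.9, 120, 0.8],
          [0.85, 0.7,  90, 0.9]]
scores, best = evaluate(matrix, [0.3, 0.3, 0.2, 0.2], [True, True, False, True])
```

Here the second countermeasure wins because its strong punctuality and precision outweigh its higher energy use under these weights.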
Optimization of high speed train operation target curve by genetic algorithm
In order to solve the problem of slow speed optimization, a genetic optimization module is added to the grey speed controller. The train control sequences in the database are extracted and quasi-optimal results are calculated based on the ATO performance indexes. After the grey speed controller has run, a running record file of the train is obtained; this file contains detailed control information, including the train's working condition at any time.
key point coding of high-speed train operation target curve
The key to applying the genetic algorithm is to select, from the many manipulation information points, the key points to be coded. A key point is a conversion point that best reflects a change of manipulation; key points can be divided into three types.

① A change of traction notch within a short distance. ② A conversion point between traction and coasting working conditions. According to this principle, the key points selected in the operating speed-distance curve obtained by the grey speed controller are shown in Figure 1.
After the grey speed controller has run, the train's running record, which contains the working condition at any time, can be obtained. The key points are finally encoded as a binary string, as shown in Figure 2.
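The paper does not give the binary code itself, but one plausible scheme, assuming 5 bits per key point for the 18 countermeasures (10 traction notches, coasting, 7 brake notches), can be sketched as follows; the condition labels and the clipping rule are our assumptions.

```python
# 18 working conditions: traction notches T10..T1, coasting, brake notches B1..B7.
CONDITIONS = ([f"T{k}" for k in range(10, 0, -1)] + ["coast"]
              + [f"B{k}" for k in range(1, 8)])
BITS = 5  # smallest bit width that holds 18 distinct codes

def encode(sequence):
    """Concatenate the 5-bit code of each key point's working condition
    into a single chromosome bit string."""
    return "".join(format(CONDITIONS.index(c), f"0{BITS}b") for c in sequence)

def decode(bits):
    """Inverse of encode; codes beyond 17 are invalid and simply clipped
    here (a real GA would repair or penalize such chromosomes)."""
    vals = [int(bits[i:i + BITS], 2) for i in range(0, len(bits), BITS)]
    return [CONDITIONS[min(v, len(CONDITIONS) - 1)] for v in vals]
```

Crossover and mutation then operate directly on the concatenated bit string, and decoding recovers the manipulation sequence for simulation.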
population formation of high-speed train operation target curve
A genetic algorithm requires replication and crossover operations; a single chromosome cannot complete the optimization process. A certain number of individuals must form a population on which successive genetic operations can evolve.
The grey speed controller generates a large number of operation records over multiple runs; the better running record files are selected to generate chromosomes, and the fitness of each generated chromosome is evaluated. The fittest chromosomes are selected as the initial population; a population of 10 chromosomes is sufficient to complete the optimization process.
Fitness Function of High-Speed Train
The fitness value is the criterion for judging the degree of genetic optimization. According to the ATO system performance indexes, five aspects must be satisfied: overspeed, punctuality, stopping precision, energy consumption and comfort.
The five sub-fitness values are difficult to compare directly, so an overall fitness function is needed to analyze them together. Because the five aspects affect the control effect to different degrees, each is weighted; for a good control effect, the overall fitness value should be as small as possible. The overall fitness evaluation value is

Fitness = w_1·Kcs + w_2·Kzd + w_3·Ktc + w_4·Kjerk + w_5·Kenergy,

where Fitness is the fitness evaluation value; Kcs, Kzd, Ktc, Kjerk and Kenergy represent the fitness indexes of overspeed, punctuality, stopping precision, comfort and energy consumption, respectively; and w_1 to w_5 are the weights corresponding to each sub-fitness.
After the fitness is calculated, the population undergoes replication, crossover and mutation; the final optimization result is obtained according to the fitness evaluation values.
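The replicate-crossover-mutate loop just described can be sketched generically over bit-string chromosomes. The selection scheme (binary tournament) and elitism are our assumptions; only the low crossover probability of 0.1 is taken from the paper, and the fitness function here is a stand-in for the weighted five-index evaluation above.

```python
import random

def genetic_minimize(fitness, length, pop, generations,
                     p_cross=0.1, p_mut=0.01, seed=0):
    """Minimal generational GA over fixed-length bit strings:
    binary tournament selection, single-point crossover applied at a
    low rate, per-bit flip mutation, and elitism on the best chromosome."""
    rng = random.Random(seed)
    population = ["".join(rng.choice("01") for _ in range(length))
                  for _ in range(pop)]
    best = min(population, key=fitness)
    for _ in range(generations):
        nxt = [best]                                        # elitism
        while len(nxt) < pop:
            a = min(rng.sample(population, 2), key=fitness)  # tournament
            b = min(rng.sample(population, 2), key=fitness)
            if rng.random() < p_cross:                       # crossover
                cut = rng.randrange(1, length)
                a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
            child = a if rng.random() < 0.5 else b
            nxt.append("".join("10"[int(c)] if rng.random() < p_mut else c
                               for c in child))              # bit-flip mutation
        population = nxt
        best = min(population, key=fitness)
    return best
```

In the paper's setting, `fitness` would decode the chromosome into key-point working conditions, simulate the run, and return the weighted sum of the five sub-indexes; any function of the bit string works here.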
Grey genetic optimization for high speed train operation target curve
Taking the Beijing-Tianjin intercity line as an example, 10 manipulation records generated by the grey speed controller are selected as parent chromosomes to form the initial population, and their fitness values are obtained through the fitness program; the results are shown in Table 1. Because the grey speed controller produces reasonable train operation sequences, the initial population is stable; a low mutation probability does little to increase population diversity, so the search easily falls into a local extremum and cannot escape. Therefore, based on empirical values and many tests, the maximum of the range, 0.1, is chosen as the crossover probability [46,47]. The initial population is then continuously optimized by the genetic algorithm; after 1000 generations of optimization, the results are shown in Table 2 and Figure 4 (the list of operational objectives after optimization). In Figure 4, the starting distance of the train is 2380 m and the starting process is fast. The train runs near the target speed of 338 km/h, and the optimized curve is relatively smooth, meeting the comfort requirement. The maximum running speed is 338.3 km/h, with no overspeed. The optimized target curve includes three coasting phases during the run. Figures 3 and 4 show an energy saving of 7.6%: a relatively high interval-passing capacity is maintained while energy loss is reduced. The simulation results show that the energy-saving effect improves significantly and the other indexes remain within their control ranges.
Conclusion
In the train control system, a grey genetic algorithm is used to construct the speed controller model of the high-speed train, and the information interaction mode between the vehicle-mounted safety computer and the speed controller of the ATO system is designed. A characteristic of the genetic algorithm is that it can obtain a final overall optimal solution; a relatively low population fitness in some generation during the optimization process is a normal phenomenon. By optimizing the recorded manipulation information, the whole-process control strategy can be adjusted to achieve optimization of the train operation target curve.
The Patient Assessment of Chronic Illness Care produces measurements along a single dimension: results from a Mokken analysis
Background As the worldwide prevalence of chronic illness increases so too does the demand for novel treatments to improve chronic illness care. Quantifying improvement in chronic illness care from the patient perspective relies on the use of validated patient-reported outcome measures. In this analysis we examine the psychometric and scaling properties of the Patient Assessment of Chronic Illness Care (PACIC) questionnaire for use in the United Kingdom by applying scale data to the non-parametric Mokken double monotonicity model. Methods Data from 1849 patients with long-term conditions in the UK who completed the 20-item PACIC were analysed using Mokken analysis. A three-stage analysis examined the questionnaire’s scalability, monotonicity and item ordering. An automated item selection procedure was used to assess the factor structure of the scale. Analysis was conducted in an ‘evaluation’ dataset (n = 956) and results were confirmed using an independent ‘validation’ (n = 890) dataset. Results Automated item selection procedures suggested that the 20 items represented a single underlying trait representing “patient assessment of chronic illness care”: this contrasts with the multiple domains originally proposed. Seven items violated invariant item ordering and were removed. The final 13-item scale had no further issues in either the evaluation or validation samples, including excellent scalability (Ho = .50) and reliability (Rho = .88). Conclusions Following some modification, the 13 items of the PACIC were successfully fitted to the non-parametric Mokken model. These items are psychometrically robust and produce a single ordinal summary score. This score will be useful for clinicians or researchers to assess the quality of chronic illness care from the patient's perspective.
Background Improving the quality of care for long-term conditions including arthritis, diabetes and coronary heart disease is a global healthcare priority. The increasing prevalence of multimorbidity (the co-existence of multiple long-term conditions in the same individual) adds additional pressures to individuals and healthcare systems alike [1].
The Patient Assessment of Chronic Illness Care (PACIC) is a relatively brief 20-item questionnaire designed to assess the extent to which care is aligned with the Chronic Care Model [2,3]. The chronic care model (CCM) has been widely accepted as a suitable framework for improving the care of patients with long-term ('chronic') conditions such as diabetes or arthritis.
The PACIC has been widely used in both validation studies and as an endpoint in outcomes research [4][5][6][7]. A short version for cardiovascular disease patients has been developed using factor analysis [8,9] but despite the scale's popularity, no analysis has been performed using modern test theories, including either parametric or non-parametric item response theory [10].
Previous studies using confirmatory factor analyses failed to find support for the hypothesised 5-factor structure of the PACIC [9,11], though other studies using exploratory factoring methods found better support for the original structure [12]. Disparities in findings related to the factorial structure leave some uncertainty as to how the scale may best be applied to measure a patient's assessment of their own care. The current study addresses this uncertainty by examining the scaling structure of the PACIC using modern psychometric methods [13], avoiding some of the known issues with illusory factors in factor analyses, which may be driving the uncertainty about the scale's structure in the literature [14].
The current study conducted a psychometric analysis of the PACIC scale using Mokken analysis. Mokken analysis is analogous to non-parametric item response theory, and may be used to arrange ordinal questionnaire items into scales and to assess whether the assumptions of non-parametric item response theory (including unidimensionality and monotonicity) are met by the scale (4). By successfully applying data to the Mokken model, the suitability of using ordinal scale sum scores is confirmed (Table 1).
Methods
Data for the analyses described here were originally collected as part of a wider cohort study designed to assess the impact of care planning on patient outcomes [7]. The current analyses use the baseline data from the cohort study. The same sample has previously been used to investigate the factor structure of PACIC and is described elsewhere [11]. Ethical approval was granted for the original data collection by Northwest 3 REC -Liverpool East (REC Ref no: 10/H1002/41).
Analyses in the current paper were all conducted within R Statistical Computing Environment [15] using the 'base' and 'mokken' packages [16,17].
Mokken analysis
Mokken models are a non-parametric extension of the simple deterministic Guttman scaling model [18]. Guttman models unrealistically assume that data are error free and Mokken models introduce a probabilistic framework which allows researchers to account for measurement error [19]. The major advantage of employing a non-parametric item response theory (NIRT) technique over other modern test theories, including the Rasch models [20], is the relatively relaxed assumptions within NIRT [21] whilst affirming important psychometric assumptions of unidimensionality and scalability [19].
Two Mokken models of interest are the monotone homogeneity model (MH model) and the double monotonicity model (DM model). In the MH model, items are allowed to differ in their discrimination parameter (the slope of their item characteristic curve). The DM model is a more restrictive version of the MH model in which item discrimination parameters are fixed, much in the same way as in the Rasch or one-parameter item response theory (IRT) model. Within the MH model it is possible that some items have a weaker or stronger relationship than others to the underlying trait, which may indicate redundancy [19]. Fitting the DM model is essential in order to ensure that scores for polytomous questionnaires are correctly ordered [22].
Following suggestions in Mokken analysis teaching papers [16,23] a three-stage analysis was conducted. These three stages of analysis ensure that four assumptions of NIRT are met. Both the assumptions of NIRT and the stages of a Mokken analysis are described below.
Unidimensionality
The assumption of unidimensionality states that all items must measure the same underlying latent trait. This assumption can be expressed both logically (that all items measure one construct) as well as mathematically (that only one latent variable is necessary to account for the inter-item associations within the data) [21].
Local independence of items
The assumption of local independence simply states that an individual's response to an item is reliant solely on their level of the underlying trait being measured and not influenced by their responses to other items on the same questionnaire.
Local dependence may occur where item content is too conceptually similar between items meaning that the response to one item is conditional on the response to another.
However, whilst sophisticated methods for assessing local independence of items have been reported and used under parametric IRT paradigms [24,25], tests to assess local dependency under the NIRT paradigm are not, as far as the authors are aware, yet widely available in accessible psychometric packages [26].
Monotonicity
The assumption of monotonicity states that the probability of affirming an item is a non-decreasing function of the level of the underlying latent trait. For example, on a given item a person with a high level of the underlying trait (theta) must never have a smaller chance of affirming the item than a person with a lesser level of the underlying trait.
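Manifest monotonicity can be checked informally by binning respondents on their rest score (total score minus the item in question) and verifying that the item's mean does not decrease across bins. The toy Python sketch below illustrates the idea only; the R 'mokken' package used in this study performs a more sophisticated version with significance testing.

```python
def monotonicity_violations(data, item, n_groups=3):
    """Crude manifest-monotonicity check for one item: sort respondents
    by rest score (total minus the item itself), cut them into equal
    groups, and count decreases in the item's group means.
    `data` is a list of equal-length response lists."""
    rows = sorted(data, key=lambda r: sum(r) - r[item])
    size = len(rows) // n_groups
    means = [sum(r[item] for r in rows[g * size:(g + 1) * size]) / size
             for g in range(n_groups)]
    violations = sum(1 for lo, hi in zip(means, means[1:]) if hi < lo)
    return violations, means
```

An item that tracks the rest of the scale produces zero violations; an item that moves against it produces a decrease at every group boundary.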
Non-intersection
An additional assumption of non-intersection is added in order to satisfy the demands of the more restrictive DM model. Non-intersection is confirmed by invariant item ordering, which ensures that the ordering of the items (in terms of their 'difficulty') is the same for each individual responding to the scale. Invariant item ordering (IIO) holds when the item characteristic curves do not intersect across the scale; intersection may occur where slope parameters are not uniform across the scale. Figure 1 gives an example of non-intersecting item characteristic curves and Fig. 2 shows item characteristic curves that intersect.
Stage one
In Stage One the scalability of both the individual items and the scale as a whole is evaluated using Loevinger's H coefficient, where a higher value indicates higher scalability. The Mokken 'automated item selection procedure' is also used at this stage to assess the number and structure of meaningful factors within the data.
This stage of a Mokken analysis is analogous to an exploratory factor analysis [17].
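For dichotomous (0/1) items, Loevinger's H has a compact closed form: one minus the ratio of observed Guttman errors to the errors expected under marginal independence, pooled over all item pairs. The Python sketch below illustrates this special case only; the R 'mokken' package used in this study generalizes H to polytomous items such as the PACIC's five-point responses, and handles ties in item popularity, which this toy version ignores.

```python
def loevinger_H(data):
    """Scale-level Loevinger H for dichotomous (0/1) items:
    H = 1 - (observed Guttman errors) / (errors expected under marginal
    independence), pooled over all item pairs. A Guttman error is a
    respondent who passes the harder item of a pair while failing the
    easier one. `data` is a list of 0/1 response lists; ties in item
    popularity are skipped for simplicity."""
    n, m = len(data), len(data[0])
    p = [sum(row[j] for row in data) / n for j in range(m)]  # item popularity
    obs = exp = 0.0
    for i in range(m):
        for j in range(m):
            if p[i] > p[j]:  # item i is the easier (more popular) of the pair
                obs += sum(1 for row in data if row[i] == 0 and row[j] == 1)
                exp += n * (1 - p[i]) * p[j]
    return 1 - obs / exp
```

A perfect Guttman scale gives H = 1 and independent items give H near 0; the conventional minimum for a usable Mokken scale is H ≥ .3, which puts the PACIC's overall H of .50 in the "strong scale" range.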
Stage two
In Stage Two the assumption of monotonicity (higher scores indicate a high level of the trait or characteristic being measured) between item pairs within the sample is assessed. The 'mokken' package evaluates the number and severity of monotonicity violations. Items that violate the assumption of monotonicity should be removed to improve the scale.
Stage three
In Stage Three the final assumption, invariant item ordering, is checked by testing for non-intersection using the manifest invariant item ordering procedure in the 'mokken' package. Invariant item ordering occurs when the ordering of the items is the same for each participant [27]. Items that violate this assumption may be removed from the scale one at a time following an iterative process. In the event that two items violate the assumption, the item with the lowest scalability is removed before the remaining items are analysed again.
After the completion of all three stages, the final scale can be said to demonstrably meet all of the assumptions of non-parametric item response theory.
Local independence
As no formal test of local independence exists under the Mokken NIRT paradigm the final items of the PACIC will be analysed for local independence by conceptual comparison of wording and item themes. Local independence may also be indirectly indicated by Loevinger's H and Rho values that are exceptionally high.
Reliability
Scale reliability will be calculated using the Molenaar Sijtsma statistic (Rho) [28]. The Rho statistic calculates the probability of obtaining the same score twice by extrapolating on the basis of the proportion of respondents who give positive responses to item pairs [13].
Evaluation and validation sampling
To ensure that the findings in the current study would be robust across multiple different samples the sample was split randomly into an evaluation and validation sample. The analysis described above was then first run on the evaluation sample and confirmed by application to the validation sample.
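A minimal sketch of such a random split (the fixed seed and the 50/50 fraction here are illustrative choices, not details reported by the study):

```python
import random

def split_sample(cases, eval_fraction=0.5, seed=42):
    """Randomly split cases into evaluation and validation samples,
    mirroring the evaluate-then-confirm design described above."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    shuffled = list(cases)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * eval_fraction)
    return shuffled[:cut], shuffled[cut:]

evaluation, validation = split_sample(range(1849))
print(len(evaluation), len(validation))  # 924 925
```

The two halves are disjoint by construction, so any model decision made on the evaluation half can be confirmed on unseen cases.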
Data
The 1849 cases were split randomly into evaluation (n = 956) and validation (n = 890) samples.
Stage one
The Mokken automated item selection procedure (AISP) indicated that a single meaningful factor was present, which included all of the items within the dataset. Scalability coefficients (Item H) are given in Table 2. In its 20 item form, the scale displayed an acceptable overall H value of .50 (SE = .01).
Stage two
Tests of monotonicity returned no violation of monotonicity for any item (see Tables 3 and 4).
Note: item numbers are based on the original order in which they were listed in the PACIC.
Stage three
Assessment of IIO suggested that the 20-item scale did not have IIO properties and a process of backwards step-wise deletion was conducted, iteratively removing seven items over eight steps, illustrated in Table 5.
The final "patient assessment of chronic illness care" scale consisted of 13 items that fully met all NIRT assumptions of dimensionality, scalability, monotonicity and invariant item ordering. The final scale H was .48 (SE = .01), indicating very good scalability.
Validation analysis
To confirm the findings of the evaluation analysis, the final 13-item scale was assessed in the validation half of the original dataset. The final 13-item solution demonstrated good scalability and monotonicity and did not violate the IIO assumption.
Reliability
The Molenaar Sijtsma statistic (Rho) indicated very good reliability in the final 13-item scale (Rho = .88).
Discussion
Non-parametric Mokken analysis indicated that the items of the PACIC questionnaire measure a single unidimensional trait representing patients' assessment of their chronic illness care, rather than the previously hypothesised five-factor structure. Within this single dimension, the 20 items of the PACIC displayed good scalability and monotonicity; however, seven items violated invariant item ordering, an assumption of the double monotonicity model. Upon removing these seven items, the resultant 13-item questionnaire displayed excellent scalability and reliability across a single dimension.
Three of the seven items which were removed from the analysis were originally placed in the 'Problem Solving' domain (Items 13, 14 and 15), two in the 'Follow-up' domain (Items 18 and 19), one in the 'Goal Setting' domain (Item 10) and one in the 'Patient Activation' domain (Item 3). The removal of these items may relate to inconsistencies in the implementation of different elements of the CCM in the United Kingdom. Items 18 and 19 both assess activities carried out by other medical practitioners; these items appear to rely on the assumption that seeing another medical professional (e.g., a dietician) is appropriate for all respondents.
Whilst these items remained in the questionnaire, the maximum score could not be attained by any patient with a chronic condition who did not need to see other clinical staff such as a medical educator or 'eye doctor', which may have caused undue bias between patients who require care from multiple professionals and those who do not.
Fig. 2 Intersecting item characteristic curves
It is important that items which are meant to assess satisfaction with aspects of healthcare that may not be universally implemented are worded carefully to reduce confusion and facilitate accurate measurement [29].
We recommend that researchers and clinicians who wish to measure the views of patients relating to the quality of their chronic illness care in the UK do so using the 13-item solution presented here, rather than the original scale across five dimensions, for which we found no support in the current study. The shorter scale has the additional advantage of being less burdensome.
The present study is limited insofar as it was not possible to assess local independence of items using the tools available. Local dependency can result in inflated covariance between items which may, in turn, lead to higher H-coefficients and the risk that items with local dependency are spuriously included in the scale. However, in the absence of a quantitative analysis, some confidence can be gained from assessing the item wording for items which have clear conceptual overlap. It appears that the final 13 items do cover a broad range of topics and do not have obvious conceptual overlap, which would be indicative of local dependency.
Further research may usefully assess the PACIC using parametric item-response theory, which may include other analyses such as local independence of items and differential item functioning [29]. Parametric item-response theory also opens the possibility of employing computer adaptive testing, which can improve the efficiency and accuracy of assessments [30].
The current study was conducted exclusively in the United Kingdom, and significant heterogeneity in the way in which chronic care is organised and experienced globally suggests that the final 13-item solution may not hold for populations in the United States of America, for example. Another study, which used factor analyses to assess the psychometric performance of the PACIC for use in diabetic populations in the USA, reduced the number of items in the final scale to 11; the disparity between findings may be attributable to differing experiences of patients in the UK and the USA [10]. Given these differences, the recommendations made in this paper should not be applied to the PACIC when it is deployed within a US population, for which it was originally developed. Work which derived a set of items which functioned well across populations would be tremendously useful in enabling comparison of global models of chronic healthcare from the patient perspective.
Conclusions
The original PACIC scale was found to be unidimensional and, following the process of Mokken analysis, 13 items met the assumptions of scalability and unidimensionality, which are necessary for producing reliable, ordinal measurements from questionnaire scales. The removal of superfluous items that do not contribute positively to accurate unidimensional measurement has produced a 13-item version of the PACIC, which we recommend for use in the UK.
NOVEL USE OF THE INTEL REALSENSE SR300 CAMERA FOR FOOT 3D RECONSTRUCTION
ABSTRACT. Foot three-dimensional (3D) reconstruction is increasingly used in real life at present; however, current 3D measuring devices are usually expensive and bulky, so their use is limited to specific domains, and a feasible method for accurate, fast and low-cost foot 3D reconstruction is required. Since the Intel RealSense SR300 camera has advantages for 3D scanning, such as high efficiency, portability, low cost and simple operation, it has been widely applied in many scenarios, such as gaming. But its performance on foot scanning is still unknown. This study therefore first aimed to design and develop a foot 3D scanning protocol based on the Intel RealSense SR300 camera, and then to compare this new method with a traditional one in terms of accuracy. Fifteen healthy adults without any foot deformity or foot disease participated, and their feet were measured by our simulated measurements (SM) and manual measurements (MM). Thirteen variables were calculated and contrasted, and their significant differences were assessed by single-sample t-test with a significance level of 0.05 and a confidence interval of 95%. The results show that the SR300 produced a precise foot 3D reconstruction, with mean differences ranging from -1.3 mm to 5.2 mm; meanwhile, eight of the thirteen foot parameters exhibited no significant differences between the two methods. Overall, these findings demonstrate that the SR300 is a valid tool for foot 3D scanning and can be widely applied in both the medical and commercial fields.
INTRODUCTION
With the development of computer vision, virtual reality technology has been applied to various fields of life, including aerospace, manufacturing, reverse engineering and medical treatment. 3D model reconstruction is one of the key technologies in the field of virtual reality and has been intensively studied by researchers [1,2].
In the shoe-making industry, an accurate, fast and low-cost 3D scanning protocol is required for foot measurements. Unlike manual measurement, it provides a standard process and obtains consistent outcomes. At present, the mainstream technologies for 3D reconstruction include scanning technology [3,4], structure-from-motion technology [5], and reverse mould technology. However, in past studies the equipment has commonly featured large volume, high price, long scanning times, and high technical requirements on operators. For example, Menato et al. [6] obtained a foot 3D model through a self-developed 3D scanning app on the smartphone platform; although its precision reaches 0.15 mm, creating the foot 3D model takes nearly 15 minutes. Novak et al. [7] used four charge-coupled device (CCD) cameras wrapped around participants' feet and scanned with a laser line, requiring a huge and inconvenient walking stage 4.7 m long and 0.8 m wide. Further, Gao et al. [8] used an active marking method, in which participants wore socks with markers, and used 10 CCD cameras to capture video of foot movements; this method involves a complex experimental procedure with a series of operations and is hard to apply in real life. As shown above, most high-quality foot scanners are impractical for wide application. Hence, as an emerging depth camera, the Intel RealSense SR300 is a good tool that balances convenience in use, clarity in visualization and accuracy in outcomes.
The SR300 can simultaneously capture colour, depth and other image information, and has been widely adopted in real scenarios such as face direction recognition [9], robotic technology [10], gesture recognition [11], 3D model reconstruction, and human body rehabilitation [12]. We therefore asked whether this camera could be used for foot 3D scanning. The objectives of this study were thus: 1. to develop a foot 3D reconstruction method with the Intel RealSense SR300 camera; 2. to compare the results obtained by the new method with a traditional one to verify its accuracy.
Participants
Fifteen students (gender = 11 males, 4 females; height = 1.73±0.14m; body mass = 65.20±18.20 kg) from the Sichuan University were invited to this study. None of them had any types of foot deformities or foot diseases. All participants gave written informed consent before participation in this study.
Manual Foot Measurements (MM)
The methods used were in accordance with the guidelines developed by the research committee of Sichuan University. Before measuring, all participants' feet were disinfected and dried. Participants sat on stools and put their right legs horizontally on other ones. The operator measured thirteen foot parameters on each participant's right foot using a tape measure and a straightedge. There were three trials of MM for each foot parameter. All foot parameter definitions are as shown in Table 2 and the foot coordinate system we established is provided in Figure 2.
Simulated Foot Measurements (SM)
The foot scanner (Figure 1) adopted the Intel RealSense SR300 camera as the core hardware, the SR300 being a short-distance light-coding 3D imaging camera [13][14][15]. We adopted Visual Studio 2015 as the development platform and the Intel RealSense SDK 2016 R2 as our 3D scanning component library [9,16]. All configuration parameters of the SR300 are shown in Table 1. The working principle of the SR300 is as follows: during 3D scanning, the SR300 emits a specific "structured light" pattern onto the object surface via an infrared laser projector, which is received by a high-speed VGA infrared camera after reflection from the object. Because the distances from the infrared projector to points on the object surface vary, the positions of the "structured light" captured by the infrared camera vary as well [17]. It is thus feasible to calculate the spatial information of the object surface and further restore the whole 3D space. The foot scanner was mounted flush with the laboratory floor and away from doors and windows, since sunlight includes infrared light which may interfere with the depth imaging system; we then placed multiple diffuse lights around the foot scanner to improve the uniformity of the illumination and to avoid a too-dark or corrupted scan colour. Besides, participants were asked to take off all foot ornaments before scanning, as shiny or translucent portions of ornaments may corrupt the scan surface. Participants sat on stools about 70 cm high and placed their right lower legs on the foot supporter, in a range of 40 cm to 60 cm from the Intel RealSense SR300 camera; points outside this range were automatically subtracted in the head scanning mode (Table 1). They were instructed to remain as still as possible during the SR300 scan. Each foot was scanned for 50 seconds.
A total of two successful trials were conducted for each participant on the foot scanner, with 30 seconds of rest between trials and 2 minutes between participants.
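The structured-light depth recovery described above reduces, per pixel, to the standard triangulation relation z = f·b/d for focal length f, projector-camera baseline b and disparity d. A minimal sketch with illustrative numbers (not SR300 calibration values):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulated depth for a structured-light projector/camera pair:
    the projected pattern shifts by `disparity_px` pixels for a surface
    point at depth z, with z = f * b / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: 600 px focal length, 40 mm baseline
print(depth_from_disparity(600.0, 0.040, 48.0))  # 0.5 m, inside the 0.4-0.6 m scan range
```

The inverse relation between depth and disparity is why the working range matters: at larger distances a one-pixel disparity error corresponds to a much larger depth error.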
The following steps were conducted for every frame of foot depth data during processing: firstly, the foot depth data was transformed into a floating point cloud in metres; secondly, bilateral filtering was used to denoise the floating point cloud, keeping edges smooth; thirdly, the floating point cloud of
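The edge-preserving bilateral denoising step can be sketched in one dimension over a row of depth values; the σ parameters below are illustrative assumptions, not the values used in the study:

```python
import math

def bilateral_1d(depth, sigma_s=2.0, sigma_r=0.05, radius=3):
    """Bilateral filter on a 1D depth profile: each sample is replaced
    by a weighted mean of its neighbours, where weights fall off with
    both spatial distance (sigma_s) and depth difference (sigma_r),
    so smoothing stops at depth edges."""
    out = []
    for i, zi in enumerate(depth):
        wsum = vsum = 0.0
        for j in range(max(0, i - radius), min(len(depth), i + radius + 1)):
            w = (math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2))
                 * math.exp(-((zi - depth[j]) ** 2) / (2 * sigma_r ** 2)))
            wsum += w
            vsum += w * depth[j]
        out.append(vsum / wsum)
    return out

# A noisy step edge (e.g., foot boundary against the background)
profile = [0.50, 0.51, 0.49, 0.50, 0.80, 0.81, 0.79, 0.80]
smoothed = bilateral_1d(profile)
```

Unlike a plain Gaussian blur, the range term suppresses averaging across the 0.5 m / 0.8 m jump, so the foot boundary stays sharp while sensor noise on each side is smoothed out.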
Data Processing and Statistical Analysis
The outcomes used in this study were MM and SM. The differences between them were chosen as the primary measure for verifying foot 3D model accuracy because of their widely recognised effectiveness in assessing foot measurements [18]. To avoid potential errors, we used the averaged value with standard deviation for each parameter. Meanwhile, significant differences were assessed by single-sample t-test (H0: there are no significant differences between the two groups). All statistical analyses were performed in SPSS (V21, IBM, USA) with a significance level of 0.05 and a confidence interval of 95%. Table 3 shows the foot 3D reconstruction result of one male participant from four angles and in two styles (meshed and rendered images).
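The single-sample t-test on the SM minus MM differences can be carried out without a statistics package. The difference values below are hypothetical, and 2.145 is the two-sided 5% critical t value for 14 degrees of freedom (n = 15):

```python
import math

def one_sample_t(diffs, mu0=0.0):
    """t statistic for H0: the mean difference equals mu0."""
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return (mean - mu0) / math.sqrt(var / n)

# Hypothetical SM - MM differences (mm) for 15 participants
diffs = [0.4, -0.2, 0.1, 0.3, -0.1, 0.2, 0.0, 0.5, -0.3, 0.1,
         0.2, -0.1, 0.0, 0.3, 0.1]
t = one_sample_t(diffs)
T_CRIT_14_DF = 2.145  # two-sided alpha = 0.05, df = 14
print(abs(t) < T_CRIT_14_DF)  # True: no significant SM-MM difference here
```

A parameter "passes" (no significant difference between methods) when |t| stays below the critical value, which corresponds to P > 0.05.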
Descriptive statistics and differences of the thirteen foot parameter measurements obtained from the two methods are given in the tables. Statistical analyses showed that the measurements of arch length, medial malleolus height, lateral malleolus height, ankle girth and heel to medial malleolus failed to reach a P value of 0.05.
DISCUSSION
The purpose of our study was to evaluate the performance of the Intel RealSense SR300 camera in foot 3D reconstruction. Comparing MM with SM, we have shown that the SR300 exhibits excellent performance for foot 3D reconstruction and possesses concurrent accuracy with the manual method in traditional foot measurements. Meanwhile, it dramatically shortens the 3D reconstruction time, achieves consistent outcomes and performs a higher robustness.
Although the P-values in Table 4 showed that the two methods had no significant differences for the major 8 parameters, the other five foot parameters were reported with significant differences. However, most of the mean differences were smaller than the foot difference sensitivity (the shoe last size difference that people can feel, generally 6 mm for men and 2.08 mm for women [19]). We suggest that those significant differences might be attributed to two main causes: smooth denoising and solidification. Smooth denoising of the depth image, used in the filtering process, might shift the most medially prominent point to an unreal location in the malleolus side measurements; meanwhile, to close the foot 3D model we executed solidification, which extends the surface curvature to fill the holes in the model surface.
Foot 3D reconstruction based on the SR300 is fast, accurate and low-cost. Therefore, this device may be deployed in hospitals, public communities and other places in large numbers; through 3D reconstruction of the feet of a large population, a significant amount of foot data may be obtained. Statistical analysis may then be carried out on this foot big data in combination with personal wear comfort data.
Although the current research results are promising, limitations exist and should be declared: firstly, the foot should be kept as still as possible while scanning, which may be difficult for children; secondly, the rotational range of the SR300 on the foot scanner is 270° of circumference, so it is hard to obtain the front colour and depth images of the acrotarsium side; thirdly, 3D model reconstruction relies on the Color Camera and the Infrared Camera, which must obtain the colour information and the depth information in real space simultaneously [20,21].
CONCLUSION
Overall, we conclude that the Intel RealSense SR300 camera offers a fast, accurate, and low-cost foot scanning protocol with respect to the manual foot measurement protocol. We explored the limitations and constraints that may affect the foot 3D reconstruction results of the SR300. We also anticipate that it can build a bridge between laboratory testing and practical application and can be widely applied in both the medical and commercial fields.
Normally hyperbolic surfaces based finite-time transient stability monitoring of power system dynamics
In this paper, we develop a methodology for finite-time rotor angle stability analysis using the theory of normally hyperbolic surfaces. The proposed method brings new insights to existing techniques, which are based on asymptotic analysis. For the finite-time analysis we adopt the theory of normally hyperbolic surfaces and connect the repulsion rates of these surfaces to finite-time stability. We also characterize the region of stability over a finite time window, and parallels are drawn with existing tools for asymptotic analysis. Finally, we propose a model-free method for online stability monitoring.
to develop real-time stability monitoring and control techniques [15,16]. The goal of real-time transient stability monitoring is to determine if the system state following a fault or disturbance will reach the desired steady state, based on measurement data over a short time window of 2-6 sec [21], [4]. The existing Lyapunov function and energy function-based methods are developed for asymptotic stability analysis and hence are not suited for finite-time transient stability analysis. In this paper, we adopt techniques from the geometric theory of dynamical systems to develop short-term transient stability monitoring. In particular, we use the theory of normally hyperbolic invariant manifolds to develop the theoretical framework [6,20,9], exploiting the geometry of the phase space of the rotor angle dynamics. The theory of normally hyperbolic invariant manifolds allows us to characterize the rates of expansion and contraction for co-dimension one invariant manifolds. A co-dimension one manifold is said to be normally expanding (contracting) over a finite period of time if a normal vector to the manifold expands (contracts) over that period. Furthermore, the extrema of the expansion and contraction rate scalar field can be identified with the stability boundaries of the fixed point whose stability is under consideration. We show that the normal expansion and contraction rates can be used as indicators of stability over a finite period of time: if the normal vector is expanding, the system behavior is deemed unstable, and vice versa. The normal rates are also used for the computation of the finite-time transient stability margin in real time. The main contributions of this paper are as follows. We provide a mathematically rigorous foundation for finite-time transient stability monitoring of rotor angle dynamics.
We propose a computational framework for the computation of the stability margin using real-time measurement data. The main contributions of this paper are:
1. to adopt the theory of normally hyperbolic surfaces to identify the finite-time stability boundary;
2. to adopt the normal expansion rate as a stability certificate for online stability monitoring; and
3. to adopt the Lyapunov exponent for model-free stability monitoring in fast real-time applications.
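The model-free monitoring idea in contribution 3 rests on estimating a maximal finite-time Lyapunov exponent directly from measured trajectories. The sketch below uses the closed-form flow of a linear saddle as a stand-in for measurement data; the flow and parameters are illustrative assumptions, not the paper's power-system model:

```python
import math

def flow(x, y, t):
    """Closed-form flow of the linear saddle xdot = x, ydot = -y,
    standing in for measured post-fault trajectories."""
    return x * math.exp(t), y * math.exp(-t)

def finite_time_lyapunov(p0, q0, T):
    """Estimate the maximal finite-time Lyapunov exponent from the
    separation growth of two nearby trajectories over a window T."""
    d0 = math.dist(p0, q0)
    pT = flow(*p0, T)
    qT = flow(*q0, T)
    dT = math.dist(pT, qT)
    return math.log(dT / d0) / T

# A perturbation along the unstable (x) direction grows: positive exponent
lam = finite_time_lyapunov((1.0, 1.0), (1.0 + 1e-6, 1.0), T=3.0)
print(round(lam, 3))  # 1.0, the unstable eigenvalue of the saddle
```

A positive exponent over the monitoring window flags finite-time instability, while a negative one indicates contraction toward the post-fault equilibrium; no model of the vector field is required, only two nearby trajectories.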
2 Transient Stability Problem and phase space structure of power system

The transient stability problem studies the stability of the rotor angle dynamics following a severe fault or disturbance. The transient stability time frame is typically 3-5 sec [12]; for wide-area swings the time interval of interest could extend to 10-20 sec. The discussion in this section follows closely from [12]. The transient stability problem can be stated mathematically as follows [2]: the pre-fault, fault-on and post-fault dynamics are described by Eqs. (1), (2) and (3) respectively, where x ∈ R^N is the state vector. Before the occurrence of the fault, the system dynamics evolve with Eq. (1). The fault is assumed to occur at time t = t_F, and the system undergoes a structural change with the dynamics given by Eq. (2). The fault persists over the time interval [t_F, t_P]. The fault is cleared at time t = t_P, and the state evolution is then governed by the post-fault dynamics, i.e., Eq. (3). Before the fault occurs, the system is operating at some known stable equilibrium point x = x_I. At the end of the fault the state of the system is given by x_P = Φ_F(x_I, t_P − t_F), where Φ_F(x,t) is the solution of system (2) at time t with initial condition x at t = 0. We assume that the post-fault system has a stable equilibrium point at x = x_s. The problem of transient stability is to determine whether the post-fault initial state x_P, evolving under the post-fault dynamics of Eq. (3), will converge to the equilibrium point x_s, i.e., lim_{t→∞} Φ(x_P, t) = x_s,
where Φ(x,t) is the solution of system (3). The finite-time transient stability problem can then be defined as follows: based on the state measurement data x(t) over the time interval t ∈ [t_P, t_P + T] and the model information about the post-fault dynamical system, i.e., the vector field f, determine whether the post-fault trajectory will converge to x_s.
The geometry of the state space will play an important role in the development of the framework for the finite-time transient stability monitoring. In the following section, we discuss the geometrical properties of the phase space dynamics of Eq. (3).
Phase space structure of swing dynamics
Consider the system equation for the post-fault dynamics, ẋ = f(x), where f : R^n → R^n is assumed to be a C^1 vector field. We next introduce some definitions [2].
Also, we denote the boundary of the set A(x_s) as ∂A(x_s).
Definition 5 (Stable and unstable manifolds). Let x_i be a hyperbolic equilibrium point of system (4). The stable manifold of the equilibrium point x_i is denoted by W^s(x_i) and is defined as W^s(x_i) = {x ∈ R^n : Φ(x,t) → x_i as t → ∞}. Similarly, the unstable manifold of the equilibrium point x_i is denoted by W^u(x_i) and defined as W^u(x_i) = {x ∈ R^n : Φ(x,t) → x_i as t → −∞}.
Definition 6 (Transversal intersection). Consider two manifolds A and B in R^n. We say that the intersection of A and B satisfies the transversality condition if at every point of intersection x the tangent spaces of A and B at x span R^n, i.e., T_x(A) + T_x(B) = R^n, or the manifolds do not intersect at all. We use T_x(A) to denote the tangent space of A at point x.
We now make following assumptions on system equation (4) as could be found in [2]. Assumption 7.
• A1. All equilibrium points on the stability boundary of (4) are hyperbolic.
• A2. The intersection of W s (x i ) and W u (x j ) satisfies the tranversality condition, for all the equilibrium points x i , x j on the stability boundary.
• A3. There exists a C^1 function V : R^n → R for (4) such that V̇(Φ(x,t)) ≤ 0 along trajectories and, if x is not an equilibrium point, the set {t ∈ R : V̇(Φ(x,t)) = 0} has measure zero in R.
Assumptions A1 and A2 on hyperbolicity and transversality are generic properties of dynamical systems and hence hold for almost all dynamical systems. If a dynamical system does not satisfy assumptions A1 and A2, then arbitrarily small perturbations of that system will satisfy them. Assumption A3 guarantees that every trajectory of the system either goes to infinity or converges to one of the equilibrium points, ruling out limit cycle oscillations and chaotic motion. It is important to emphasize that both the classical power swing model and the structure-preserving model of power systems satisfy Assumptions A1-A3. These assumptions have important consequences for the topological properties of the boundary of the domain of attraction of the fixed point x_s. These consequences are presented in the form of the following two fundamental theorems [19].
Theorem (Characterization of the stability boundary [19]). Under Assumptions A1-A3, the stability boundary satisfies ∂A(x_s) = ⋃_{x_i ∈ E ∩ ∂A(x_s)} W^s(x_i), where E is the set of all equilibrium points for (4).
The geometry of the boundary of the domain of attraction of the stable fixed point x_s for the post-fault dynamics plays an important role in the development of the theoretical foundation for finite-time transient stability monitoring. We discuss the theory necessary for the development of this framework in the following section.
Normally Hyperbolic Invariant Manifold Theory
In this section, we introduce some preliminaries of the theory of normally hyperbolic invariant manifolds. For more details, we refer the reader to [9]. The theory of normal hyperbolicity over finite time is developed for more general time-varying systems; however, given our interest in the time-invariant post-fault dynamics of Eq. (4), we restrict our discussion to the time-invariant vector field (4). We start with the following definition of a material surface [9].
Definition 10 (Material surface [9]). A material surface M(t) is the t = const. slice of an invariant manifold in the extended phase space X × [α, β], generated by the advection of an (n−1)-dimensional surface of initial conditions M(0) by the flow map Φ(x,t), i.e., M(t) = Φ(M(0), t). The schematic of a material surface is shown in Fig. 1.
We want to express the attraction and repulsion properties of this material surface over the time interval [0,t]. To this end, we consider an arbitrary point x_0 ∈ M(0), the (n−1)-dimensional tangent space T_{x_0}M(0) of M(0), and the one-dimensional normal space N_{x_0}M(0). A tangent vector is carried forward along the trajectory Φ(x_0,t) by the linearized flow map ∇Φ(x_0,t) into the tangent space T_{Φ(x_0,t)}M(t). By the invariance of the manifold M(0), tangent vectors at the point x_0 of M(0) are propagated to tangent vectors at the point Φ(x_0,t) under the linearized flow ∇Φ(x_0,t). However, this is not the case for normal vectors: a unit normal vector n_{x_0} ∈ N_{x_0}M(0) will not necessarily be mapped into the normal space N_{Φ(x_0,t)}M(t).
Definition 11 (Normal repulsion rate [9]). Let n_{Φ(x_0,t)} ∈ N_{Φ(x_0,t)}M(t) denote a smoothly varying family of unit normal vectors along the flow Φ(x_0,t). The growth of perturbations in the direction normal to M(t) is measured by the repulsion rate ρ(x_0, n_0, t) = ⟨n_{Φ(x_0,t)}, ∇Φ(x_0,t) n_0⟩. If ρ(x_0, n_0, t) > 1, then normal perturbations to M(0) grow over the time interval [0,t]. Similarly, if ρ(x_0, n_0, t) < 1, then normal perturbations to M(0) decay.
Definition 12 (Normal repulsion ratio [9]). The normal repulsion ratio measures the ratio between the normal and tangential growth rates along M(t) and is defined as ν(x_0, n_0, t) = min_{|e|=1, e ∈ T_{x_0}M(0)} ρ(x_0, n_0, t) / |∇Φ(x_0,t) e|. If ν(x_0, n_0, t) > 1, then the normal growth along M(t) dominates the largest tangential growth rate along M(t). From the point of view of computation, both the normal rate and ratio can be expressed in terms of the Cauchy-Green strain tensor C_t(x_0) = ∇Φ(x_0,t)^T ∇Φ(x_0,t); in particular, ρ(x_0, n_0, t) = 1/√(⟨n_0, C_t(x_0)^{-1} n_0⟩) [9]. With the aid of the normal repulsion rate and ratio, we define finite-time repelling (attracting) material surfaces.
Definition 13 (Normally repelling material surface [9]). A material surface M(t) is normally repelling over the time interval [0,T] if there exist numbers α, β > 0 such that, for all points x_0 ∈ M(0) and unit normal vectors n_0 ∈ N_{x_0}M(0), we have ρ(x_0, n_0, T) ≥ e^{αT} and ν(x_0, n_0, T) ≥ e^{βT} [9]. Not all repelling or attracting material surfaces are equally repelling or attracting; some material surfaces are more repelling or attracting than others. Material surfaces that are maximally attracting or repelling occupy a special place in the theory of normally hyperbolic invariant manifolds: they are called Lagrangian coherent structures. With the aid of the following example, we demonstrate the concept of attracting and repelling surfaces.
Example 15. We consider a planar system with a saddle point at the origin whose stable and unstable manifolds are the y-axis and x-axis, respectively. As we can observe from Fig. 2(a), the stable manifold (the y-axis) repels nearby trajectories and forms a repelling material surface, while the unstable manifold (the x-axis) forms an attracting material surface. We verify this by computing the normal expansion rates: the normal expansion rate exceeds 1 over a region surrounding the y-axis, and the y-axis forms the Lagrangian coherent structure for the system, as the normal expansion rate achieves a local maximum along it. The surfaces (0, y)^T and (x, 0)^T form the repelling and attracting material surfaces, respectively, as we can observe in Fig. 2(b). It can also be observed that there is a small patch of region around the y-axis with normal expansion rate greater than 1, which means those material lines are repelling over the finite time of interest.
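As a numerical companion to this example, the sketch below assumes the concrete linear saddle ẋ = x, ẏ = −y as an illustrative instance and evaluates the repulsion rate through the Cauchy-Green tensor, ρ = 1/√(⟨n₀, C⁻¹ n₀⟩):

```python
import numpy as np

def repulsion_rate(grad_flow, n0):
    """rho = 1 / sqrt(n0^T C^{-1} n0), with C = grad_flow^T grad_flow
    the Cauchy-Green strain tensor of the flow map."""
    C = grad_flow.T @ grad_flow
    n0 = np.asarray(n0, dtype=float)
    return 1.0 / np.sqrt(n0 @ np.linalg.solve(C, n0))

t = 2.0
grad_phi = np.diag([np.exp(t), np.exp(-t)])  # linearized flow of the saddle

rho_y_axis = repulsion_rate(grad_phi, [1.0, 0.0])  # normal to the y-axis
rho_x_axis = repulsion_rate(grad_phi, [0.0, 1.0])  # normal to the x-axis
print(rho_y_axis > 1, rho_x_axis < 1)  # True True: repelling vs attracting
```

For this diagonal flow map the rates reduce to e^t and e^{−t}, confirming that the y-axis material surface is normally repelling and the x-axis is normally attracting over the finite window [0, t].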
Main Results
We relate the stability boundary to normally hyperbolic surfaces with the aid of the following two theorems, which show that in finite time the normal expansion rate can designate a region around the stable manifold of a type-1 saddle point as repelling. Theorem 16 shows that the normal expansion rate can be used to characterize the finite-time stability boundary, and Theorem 17 implies that the normal expansion rate can be used as a stability certificate for online monitoring.
Theorem 16. For any finite time interval T > 0 and x_0 ∈ W^s(x_e), there exists an ε > 0 such that the following condition is satisfied:
ρ(x, T) > 1 for all x ∈ B_ε(x_0),
where B_ε(x_0) represents an open ball of radius ε around x_0, and W^s(x_e) denotes the stable manifold of a type-1 saddle point x_e, which is located on the boundary of the domain of attraction A(x_s).
Next, we outline Theorem 17, which specifies the finite-time stability region contained in the domain of attraction in terms of the normal expansion rates.
Theorem 17. For any finite time interval T > 0 there exists an ε > 0 such that the following condition is satisfied:
ρ(x, −T) > 1 for all x ∈ B_ε(x_s),
where B_ε(x_s) represents an open ball of radius ε around the stable fixed point x_s.
Theorem 16 provides a region for finite-time stability based on the repulsion rate. It can also be rephrased as follows: for an arbitrary finite time interval T > 0, there exists an ε > 0 such that a material surface at a maximum distance of ε from the stable manifold of any type-1 saddle point forms a finite-time repelling hyperbolic surface. Hence, the normal repulsion rate provides a rigorous way to demarcate the finite-time stability boundary. The classical methods were capable of demarcating the stability boundaries in terms of the asymptotic dynamics, but this may not be adequate for finite-time stability analysis. For finite-time stability monitoring, we compute the normal repulsion rate of the trajectory and ascertain the stability. Theorem 17 describes the material surfaces that form finite-time attracting material surfaces: for an arbitrary finite time interval T > 0, there exists an ε > 0 such that a material surface inside the domain of attraction A(x_s) of the stable fixed point, at a minimum distance of ε away from all the stable manifolds of the type-1 saddle points, forms a finite-time attracting hyperbolic surface. A further implication of Theorem 17 is that the normal repulsion rate can be used as a finite-time stability certificate for online stability monitoring.
For both Theorem 16 and Theorem 17, the parameter ε is a function of the time interval of interest, T. The normal expansion rate can be used to obtain the finite-time stability boundaries and also for online stability monitoring. It is to be noted that the ε in Theorem 16 decreases with increasing T, while the ε in Theorem 17 increases with increasing T. The normal repulsion rate can also be used as an estimate of the stability margin. Next, we discuss the new insights that this approach brings to the existing methods.
Margin of Stability from Normal Expansion Rate
The existing techniques of stability analysis are based on energy functions. An energy function is a non-negative function defined over the state space which guarantees asymptotic boundedness of the system trajectories. The energy function is bounded and decays along a trajectory; as a result, the trajectories of the post-fault system are bounded [19]. Assumption 7 implies that the trajectories inside the domain of attraction converge to the stable fixed point, which rules out the possibility of limit cycles or other types of behavior inside the domain of attraction. A critical value of the energy function is given by the energy function evaluated at the type-1 saddle point; energy values less than this critical level result in asymptotic stability. However, energy functions are inadequate to specify the region of stability over a finite time. Energy function based techniques have also been extensively used for real-time stability monitoring, by augmenting them with power-flow-type approaches [11,3]. We demonstrate that our approach is also suitable for asymptotic stability monitoring. In order to achieve this goal, we define the following function over the trajectory. We first define a finite-time stability margin as the inverse of the accumulation of repulsion rates over the trajectory: the stability margin corresponding to the point x_0 ∈ M(0) over a time window T is denoted γ(x_0, η_0, T), where η_0 ∈ N_{x_0}(M(0)) is a unit normal vector. By taking T → ∞, γ(x_0, η_0, T) reaches a constant value if the system is stable, which results in a positive value of the margin. The larger the margin, the more stable the system. On the other hand, if the system is unstable, the margin goes to 0. A non-negative function corresponding to the point can be defined from the margin; it is a non-negative bounded function which decreases along trajectories with initial conditions inside the domain of attraction [19].
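The qualitative behavior of such a margin can be illustrated with synthetic repulsion-rate histories. In this sketch, the accumulation is assumed to be a running time-integral of ρ(t) (an illustrative reading, not necessarily the paper's exact formula), and the test signals are ours:

```python
import numpy as np

def stability_margin(rho, dt):
    # Inverse of the running accumulation (time-integral) of the repulsion
    # rate rho(t). The integral form of the accumulation is an assumption.
    acc = np.cumsum(rho) * dt
    return 1.0 / acc

dt = 0.01
t = np.arange(dt, 50.0, dt)

# Stable case: rho(t) decays to 0, so the margin levels off at a positive value.
margin_stable = stability_margin(np.exp(-t), dt)
# Unstable case: rho(t) stays above 1 and grows, so the margin decays to 0.
margin_unstable = stability_margin(np.exp(0.2 * t), dt)
```

The two synthetic cases reproduce the behavior described in the text: a margin converging to a positive constant for a stable trajectory, and a monotonically decreasing margin approaching 0 for an unstable one.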
Our approach can also compute the margin in finite time.
This guarantees the boundedness of the trajectories asymptotically inside the domain of attraction, similar to [19]. Previously, we have shown that the normal expansion rates can detect the stability boundaries accurately, as stability boundaries form normally repelling hyperbolic structures. We have also demonstrated that the normal expansion rate can be used as a stability certificate. The normal expansion rate computes the expansion of the normal vector along a trajectory. In this section, we introduce the Lyapunov exponent (LE), which is a similar estimate of the rate of expansion. The finite-time Lyapunov exponent (FTLE) computes the maximum local stretching for an initial condition [8], and it is also capable of identifying the normally repelling hyperbolic surfaces (i.e., stability boundaries) under some technical conditions given in [9]. First we introduce the definition of the Lyapunov exponent, and then we describe the FTLE. The mathematical definition of the maximum (principal) Lyapunov exponent [10] in the asymptotic sense is as follows. Let Φ(x, t) be the solution of the differential equation, and define the limiting matrix
Λ̄(x) = lim_{t→∞} [∇Φ(x, t)^T ∇Φ(x, t)]^{1/(2t)}.    (9)
Let Λ̄_i(x) be the eigenvalues of the limiting matrix Λ̄(x). The Lyapunov exponents λ̄_i(x) are defined as λ̄_i(x) = log Λ̄_i(x). Using results from the multiplicative ergodic theorem, it is known that the limit in Eq. (9) is well defined [5]. Furthermore, the limit in Eq. (9) is independent of the initial condition x under the assumption of unique ergodicity of the system. Lyapunov exponents can be thought of as the generalization of eigenvalues from linear systems to nonlinear systems in an asymptotic sense. The finite-time Lyapunov exponent is defined as follows [18].
Definition 20. The finite-time Lyapunov exponent (FTLE) σ_τ(x_0) at an initial point x_0 for a time interval τ is defined as
σ_τ(x_0) = (1/τ) log √(λ_max(∇Φ(x_0, τ)^T ∇Φ(x_0, τ))).
Figure 5 shows conceptually the computation of the normal expansion rate and the FTLE.
A small ball around an initial condition is taken, and the initial conditions inside the ball are considered. After a finite time, the evolution of the distance between pairs of initial conditions inside the ball is studied. One vector shows the evolving normal direction and is thus used to compute the normal expansion rate; the other gives the direction which gets maximally stretched over the finite interval and is thus used to compute the FTLE. Using the multiplicative ergodic theorem, it can be shown that asymptotically the Lyapunov exponent becomes independent of the initial conditions [5].
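Definition 20 can be evaluated numerically by finite-differencing the flow map and taking the largest eigenvalue of the Cauchy–Green tensor. A minimal sketch, using the earlier saddle example (ẋ = x, ẏ = −y) as a stand-in for the power-system model:

```python
import numpy as np

def rk4_flow(f, x0, tau, n=2000):
    """Integrate xdot = f(x) with fixed-step RK4; return the flow map Phi(x0, tau)."""
    x, dt = np.array(x0, dtype=float), tau / n
    for _ in range(n):
        k1 = f(x); k2 = f(x + 0.5 * dt * k1)
        k3 = f(x + 0.5 * dt * k2); k4 = f(x + dt * k3)
        x = x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

def ftle(f, x0, tau, h=1e-6):
    """sigma_tau(x0) = (1/tau) log sqrt(lambda_max(grad_Phi^T grad_Phi))."""
    dim = len(x0)
    grad = np.zeros((dim, dim))
    for j in range(dim):
        dx = np.zeros(dim); dx[j] = h
        # Central difference of the flow map gives one column of grad_Phi.
        grad[:, j] = (rk4_flow(f, x0 + dx, tau) - rk4_flow(f, x0 - dx, tau)) / (2 * h)
    C = grad.T @ grad                      # Cauchy-Green strain tensor
    return np.log(np.sqrt(np.linalg.eigvalsh(C)[-1])) / tau

# Saddle example: the largest stretching rate is the unstable eigenvalue, 1.
saddle = lambda x: np.array([x[0], -x[1]])
sigma = ftle(saddle, np.array([0.5, 0.5]), tau=3.0)
```

For this linear system the FTLE recovers the unstable eigenvalue regardless of the initial point, consistent with the interpretation of the FTLE as a local stretching rate.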
Relation Between FTLE and Hyperbolic Surfaces
The FTLE estimates the maximum local expansion along the trajectory and can be used as a stability certificate for online stability monitoring. The repulsion ratio, on the other hand, measures the rate of expansion along the normal direction. For normally repelling surfaces, the direction of maximum expansion and the normal direction align with each other asymptotically, which means that asymptotically the stability certificates obtained using the normal expansion rate and the FTLE will be the same. In order to obtain precise bounds on the alignment, we introduce the following notation: let 0 < λ_1(x_0, τ) ≤ λ_2(x_0, τ) ≤ · · · ≤ λ_n(x_0, τ) be the eigenvalues of the matrix ∇Φ(x_0, τ)^T ∇Φ(x_0, τ), and ξ_1(x_0, τ), ξ_2(x_0, τ), . . . , ξ_n(x_0, τ) be the corresponding eigenvectors. It can be noted that σ_τ(x_0) = (1/τ) log √(λ_n(x_0, τ)), and that ξ_n(x_0, τ) is the direction of maximum expansion; the FTLE is the expansion along ξ_n(x_0, τ). In [9], it was proved that for a repelling hyperbolic surface the angle between the normal direction η_0 and the direction of maximum expansion ξ_n(x_0, τ) decays exponentially fast, where the rate of decay is given by the negative of the repulsion ratio of the repelling surface. Figure 4 depicts the asymptotic alignment of the two vectors. It was shown in [9] that the sine of the angle α_n between the two vectors satisfies sin α_n(η_0, ξ_n) ≤ √(n − 1) e^{−βτ}. This alignment ensures that the stability certificates from the FTLE and the normal repulsion rate match each other. Apart from online stability monitoring, we are also interested in computing finite-time stability boundaries. We can use the normal expansion rate to obtain the finite-time stability boundaries accurately. The stability boundaries thus identified from normally hyperbolic surfaces are called Lagrangian coherent structures (LCS) and are defined as follows [9].
Definition 21. Assume that M(t) is a normally repelling material surface over t ∈ [0, τ]. We designate M(t) a repelling LCS over [0, τ] if its normal repulsion rate admits a point-wise non-degenerate maximum along M(0) among all locally C^1-close material surfaces. We designate M(t) an attracting LCS over [0, τ] if it is a repelling LCS over [0, τ] in backward time.
The LCS are the local maximal ridges formed in the scalar field of the normal repulsion rate, and the stability boundaries can be identified by the LCS. In contrast, the FTLE can detect the stability boundaries only if certain additional conditions are satisfied; under those conditions, FTLE ridges coincide with the LCS [9]. The FTLE is a measure of maximum local expansion, and it cannot differentiate between normal and tangential expansion. But stability boundaries are essentially those surfaces which are normally repelling. If a normally repelling surface separates two regions of the state space, and on either side of the stability boundary the trajectories have tangential expansion and normal contraction, FTLE ridges would not be adequate to identify the stability boundaries. The reason is that the FTLE field would show similar stretching for both the boundary and the neighboring region. On the other hand, the normal repulsion rate does not suffer from this issue, as it identifies that the material surfaces on either side of the boundary are normally contracting. In [9], Theorem 13, the necessary and sufficient conditions for an FTLE ridge to coincide with an LCS, and thus qualify as a stability boundary, are given. The conditions are: 1. λ_{n−1}(x_0, τ) ≠ λ_n(x_0, τ) > 1; 2. ξ_n(x_0, τ) ⊥ T_{x_0}M(0); 3. the matrix L(x_0, τ) (which comprises the λ_i's and ξ_i's) is positive definite [9]. We have used the normal expansion rate for both monitoring and stability boundary computation, whereas the FTLE is used for online stability monitoring purposes. The FTLE is computed using the model.
Model Free Algorithm to Compute FTLE
Next, we introduce a model-free computation scheme to obtain an approximate FTLE from a time series. In the following definition, we introduce the LE computation formula from the time series.
Definition 22. The finite-time Lyapunov exponent from a time series, for initial condition x_0, is defined as
λ(x_0, 0, t) = (1/t) log ( ‖x(t + ∆t) − x(t)‖ / ‖x(∆t) − x(0)‖ ),
where ∆t is a fixed delay. The proposed method for LE computation is model-free and requires less computational effort than the normal expansion rate. The FTLE is positive if the trajectories diverge and negative if they converge. First, we propose a model-free scheme to compute the LE from the time series. Then, we show that it is capable of identifying sources of local instability, such as the presence of saddle points: if the trajectory comes close to a saddle point, the LE starts to increase. We also demonstrate that the FTLE has a cumulative effect, i.e., the local stability and instability contributions of various regions of the state space get reflected as a decrease or increase in the LE. This demonstrates its capability for being used as a stability certificate.
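A sketch of such a delayed-series estimator follows. This is our illustrative implementation: the gap between the series and a delayed copy of itself is tracked, and its exponential rate is extracted by a least-squares fit of the log-gap (the paper's exact formula may use an end-point ratio instead):

```python
import numpy as np

def finite_time_le(series, dt, delay_steps):
    """Model-free finite-time LE estimate from a scalar time series.

    Tracks the gap between x(t) and x(t + delay) and fits the
    exponential growth/decay rate of that gap.
    """
    d = np.abs(series[delay_steps:] - series[:-delay_steps])
    d = np.maximum(d, 1e-300)              # guard the logarithm
    t = np.arange(d.size) * dt
    # Least-squares slope of log-distance vs. time ~ the exponent.
    A = np.vstack([t, np.ones_like(t)]).T
    slope, _ = np.linalg.lstsq(A, np.log(d), rcond=None)[0]
    return slope

dt = 0.01
t = np.arange(0.0, 10.0, dt)
le_pos = finite_time_le(np.exp(0.5 * t), dt, delay_steps=10)         # diverging
le_neg = finite_time_le(5.0 * np.exp(-0.7 * t), dt, delay_steps=10)  # converging
```

For a diverging signal e^{0.5t} the estimate is positive (≈ 0.5), and for a converging signal it is negative (≈ −0.7), matching the sign convention described above.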
Model Free FTLE Based Stability Monitoring
The Lyapunov exponent is computed from the evolution of the distance between the actual and the delayed time series. Next, we present the following proposition, which relates the LE to the transient stability problem. In Proposition 23 we demonstrate that the LE becomes negative for initial conditions inside the domain of attraction, which justifies the use of the LE as an online stability certificate.
Proposition 23. For every initial point x_0 ∈ A(x_s) \ {x_s}, there exists a finite T* such that the finite-time LE satisfies λ(x_0, 0, t) < 0 for all t ≥ T*.
Proposition 23 enables us to use the LE as a stability certificate for online monitoring: the LE for trajectories inside the domain of attraction becomes negative, which can be used as a stability criterion.
Simulation Results
In this paper, we develop a theoretical foundation for finite-time transient stability monitoring of power systems, based on tools from the geometric theory of dynamical systems. In particular, we employ techniques from normally hyperbolic invariant manifold theory. We show that the normal expansion and contraction rates of a codimension-one manifold in the phase space can be used as an indicator of finite-time transient stability. Furthermore, the extrema of these normal expansion and contraction scalar fields can be used to identify the stability boundary of the stable operating fixed point. Identification of the stability boundaries, or of the distance from the stability boundaries, is used to determine the margin of stability during transients.
Stability Boundary Computation
For simulation purposes, we consider the swing dynamics of an (N + 1)-generator system as the model,
δ̇_i = ω_i,  M_i ω̇_i = P_i − D_i ω_i − Σ_{j≠i} E_i E_j Y_{ij} sin(δ_i − δ_j),  i = 1, . . . , N.    (12)
The (N + 1)th generator is taken as the reference, by setting E_{N+1} = 1 and δ_{N+1} = 0. We define the state variable of the dynamical system as x = (δ_1, . . . , δ_N, ω_1, . . . , ω_N)^T. The system corresponds to a specific set of values of the P_i's, and initially the system is at a stable equilibrium of (12). Suppose a fault occurs at t = t_f and is cleared at t = 0. The source of the fault may be a change in the electric power input P_i at generator i, or a short circuit, given by changes in the values Y_{ij}. Once the fault is cleared, the system dynamics evolve according to the new parameters of the system. The phase portrait of the two-generator system is depicted in Fig. 5. The stable manifold of the type-1 saddle point forms the stability boundary; the stable equilibrium point is shown as a black circle and the type-1 saddle point as a red cross. Figure 6 shows the plot of the normal repulsion rate over a time window of 5 s, which is a typical time interval for short-term transient stability. Over this time window, it can be observed that a relatively large region around the stable manifold of the type-1 saddle point shows normal repulsion greater than 1. The region with repulsion rate greater than 1 indicates the finite-time stability boundary region; this cannot be captured by the asymptotic approaches. It can be observed from Fig. 7 that, as the time window is extended to 35 s, the stable manifold of the type-1 saddle point emerges as the material surface with positive repulsion rate, which matches the asymptotic analysis. This demonstrates that our approach gives a concrete way to provide the region of stability over a finite time window. Figures 8 and 9 show the plot of the FTLE over time intervals of 5 and 35 s. It can be observed that the FTLE can locally detect the saddle point.
The FTLE cannot detect the stability boundaries globally, as it becomes negative along the stable manifold of the saddle point, whereas near the saddle point the unstable manifold has locally positive FTLE; this is because the unstable manifold of the saddle is locally tangentially expanding. We have also observed that the ridges in the normal repulsion rate and FTLE fields preserve themselves under small perturbations or noise. Next, we demonstrate, with simulations on the 39-bus system, that the normal expansion rate and the LE can be used for online stability monitoring.
Online Transient Stability Monitoring for New England 39 Bus System
In this subsection, we show the application of the normal expansion rate ρ(t) and the LE λ(t) as stability certificates in online monitoring. We also compute stability margins based on the normal expansion rate and the LE, denoted 1/γ(t) and 1/θ(t), respectively. The stability certificates, as well as the margins, can further be used to generate alarms and appropriate control actions. The computational time for the LE is less than that of the normal expansion rate; on the other hand, the normal expansion rate can more accurately detect the global stability boundaries. Thus, depending on the available computational infrastructure and the desired accuracy, the right stability certificate can be chosen. We present simulation results for the swing dynamics of the New England 39-bus system, which has 10 generators and is a reduced model of the power grid of New England and part of Canada. The normal expansion rate is computed according to (7), and the LE is computed as described in (11). The ∆t for the LE computation is chosen as 1.5 s.
The governing equations of the rotor angle swing dynamics are as follows [13], where δ_i and ω_i are the angle and frequency of the ith generator. The values of the system parameters can be found in [22]. We create a stable and an unstable scenario by tuning the damping parameter. Figure 10 shows the system dynamics for the stable scenario (damping D_i = 0.5). It can be observed from Fig. 11 that the normal expansion rate ρ(t) decays exponentially fast and converges to 0, showing stable behavior, and the LE stays below 0, certifying stability. Figure 12 shows the evolution of the stability margin 1/γ(t); in this case the stability margin converges to a constant value as the system remains stable. On the other hand, Figure 13 shows the angles and frequencies when the system is unstable (damping D_i = 0.01). It can be observed from Fig. 14 that the normal expansion rate ρ(t) stays above 1, certifying the instability; the LE stays above 0, also showing instability. Figure 15 shows the evolution of the stability margin 1/γ(t) for the unstable scenario; in this case the stability margin shows a monotone decreasing trend, indicating unstable behavior. The simulations verify that the normal expansion rate and the LE can be used as online stability monitoring tools.
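The damping experiment can be mimicked on a single-machine-infinite-bus reduction of the swing model. All parameter values here are illustrative, not the paper's 39-bus data:

```python
import numpy as np

def smib_rhs(state, M=1.0, D=0.5, P=0.5, b=1.0):
    """Single-machine-infinite-bus swing equation:
       M * ddot(delta) = P - D * dot(delta) - b * sin(delta)."""
    delta, omega = state
    return np.array([omega, (P - D * omega - b * np.sin(delta)) / M])

def rk4(f, x0, tau, n=5000):
    """Fixed-step RK4 integrator; returns the final state."""
    x, dt = np.array(x0, dtype=float), tau / n
    for _ in range(n):
        k1 = f(x); k2 = f(x + 0.5 * dt * k1)
        k3 = f(x + 0.5 * dt * k2); k4 = f(x + dt * k3)
        x = x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

# With damping D = 0.5, a post-fault state inside the domain of attraction
# settles to the stable equilibrium delta* = arcsin(P/b); lowering D toward
# 0.01 slows the decay, mirroring the damping tuning used in the text.
final = rk4(smib_rhs, [0.9, 0.0], tau=60.0)
```

For the well-damped case the trajectory converges to (arcsin(0.5), 0), i.e., the stable fixed point whose basin the monitoring certificates are meant to track.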
Conclusion
In this paper, we propose a finite-time stability analysis tool based on the theory of normally hyperbolic surfaces. We relate the stability boundaries of the transient dynamics to normally hyperbolic repelling surfaces. Our proposed method can prescribe a stability region based on the time window of interest, which is very useful for the finite-time transient stability problem. The normal repulsion rate can also be used for online stability monitoring. We have also proposed an LE-based model-free stability monitoring scheme for fast real-time applications.
Appendix
Next, we outline the proof of Theorem 16, which aims at showing that there is a region around the stable manifold that is normally repelling over a finite window of time. Before outlining the details of the proof, we describe the intuition behind the constructions. The stable manifold forms a codimension-one manifold, which separates the trajectories inside and outside the domain of attraction on its two sides. Consider two points on opposite sides of the stable manifold, separated by a small distance, with the line joining them normal to the stable manifold. Trajectories emanating from these two points evolve in different fashions: the point inside the domain of attraction eventually converges to the stable fixed point, while the other stays outside the domain of attraction. As a result, the distance between these two points eventually exceeds a threshold. If the initial distance between the two points is taken very small, the ratio between the final and initial distances can be made arbitrarily large. We use this fact to show that the stable manifold emerges as a normally repelling surface over a finite time window. Next, we outline the technical details of the proof.
Proof. Let us consider the stable manifold of the type-1 saddle point x_e, denoted W^s(x_e). We need to show that for every T > 0 and x ∈ W^s(x_e), there exists an ε > 0 such that every point in the set Ŵ_u(x) lies on a repelling hyperbolic material surface. We construct two points x_0 − εn_0(x_0) and x_0 + εn_0(x_0), where n_0(x_0) is an arbitrary unit vector; these are the two points on either side of the stable manifold. Next, we consider the evolution of the distance between Φ(x_0 − εn_0(x_0), t) and Φ(x_0 + εn_0(x_0), t), which finally leads us to the claim. Define e(t, ε) = Φ(x_0 + εn_0(x_0), t) − Φ(x_0 − εn_0(x_0), t), where the O(ε^2) terms can be ignored for sufficiently small ε. Now, for a given T > 0, M(x_0, n_0, ε, T) is a positive definite matrix. The singular value decomposition of the matrix M(x_0, n_0, ε, T) gives M = UΣU^T, where U(x_0, n_0, ε, T) is a unitary matrix with ith column u_i(x_0, n_0, ε, T), and Σ(x_0, n_0, ε, T) = diag(Λ_1(x_0, n_0, ε, T), . . . , Λ_n(x_0, n_0, ε, T)), with Λ_1 ≥ Λ_2 ≥ · · · ≥ Λ_n > 0. Combining (18) and (20), and by appropriately selecting ε, e^T(T, ε)e(T, ε) > K(T, ε) e^T(0, ε)e(0, ε). Next, we inspect the quantity K(T, ε). It can be noted that e(0, ε) = 2εn_0, and e^T(0, ε)e(0, ε) = 4ε^2 n_0^T n_0 = 4ε^2, as n_0^T n_0 = 1. The quantity ‖e(T, ε)‖ exceeds a finite K̄ after a finite T, as e(T, ε) is the distance between two points on either side of the separatrix. Hence e^T(T, ε)e(T, ε) ≥ K̄^2, so by choosing sufficiently small ε we can make K(T, ε) arbitrarily large. We simplify the SVD further: by choosing n(x_0) = u_n, we get Λ_n(x_0, n_0, ε, T) > K(T, ε). For the unit normal vector η_0(x_0), the ε can be made sufficiently small such that K(T, ε) > n. This gives ρ(x_0 − εn_0, T) > 1. Similarly, to prove ρ(x_0 + εn_0, T) > 1, we use the same construction and follow the successive steps. Hence the proof. We now outline the proof of Theorem 17.
Here also the proof proceeds along the same lines as the previous one: we take two points within the domain of attraction and use the fact that they eventually converge to the stable fixed point. Figure 17 captures the essence of the construction.
Similarly, we use the same construction and follow the successive steps to prove ρ(x_0 − ε_0 n_0, −T) > 1. Now let us define a set with ε < ε_0. This gives us ρ(x, −T) > 1 for all x that are at most ε away from the point x_s. Hence the proof.
Below, we provide the proof of Proposition 23.
"year": 2017,
"sha1": "54f988a7457846bee39b68a94192b2110cd85229",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "54f988a7457846bee39b68a94192b2110cd85229",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Lightweight Chassis Design of Hybrid Trucks Considering Multiple Road Conditions and Constraints †
: The paper describes a fully automated process to generate a shell-based finite element model of a large hybrid truck chassis to perform mass optimization considering multiple load cases and multiple constraints. A truck chassis consists of different parts that could be optimized using shape and size optimization. The cross members are represented by beams, and other components of the truck (batteries, engine, fuel tanks, etc.) are represented by appropriate point masses and are attached to the rail using multiple point constraints to create a mathematical model. Medium-fidelity finite element models are developed for front and rear suspensions and they are attached to the chassis using multiple point constraints, hence creating the finite element model of the complete truck. In the optimization problem, a set of five load conditions, each of which corresponds to a road event, is considered, and constraints are imposed on maximum allowable von Mises stress and the first vertical bending frequency. The structure is optimized by implementing the particle swarm optimization algorithm using parallel processing. A mass reduction of about 13.25% with respect to the baseline model is achieved.
Introduction
Since their inception, the design of automobiles has changed considerably. The Department of Energy is currently investing millions of dollars in the research and development of a generation of energy-efficient automobiles (https://www.energy.gov/articles/energydepartment-announces-137-million-investment-commercial-and-passenger-vehicle). The energy efficiency of vehicles can be improved by enhancing engine performance, hybridization, improving the aerodynamics, and making structural components lightweight.
Lightweight components not only make vehicles more energy-efficient, they also improve road performance and handling. In the past, the dimensions of automobile components were determined mostly by hand calculations applying the principles of strength of materials. However, the last few decades have seen an exponential rise in computational power, which makes detailed structural analysis of complex structures possible using various numerical techniques. The finite element method is one such numerical method, which gained widespread popularity for structural analysis ever since the publication of the seminal paper by Turner et al. [1] and a series of papers published by Argyris and Kelsey [2], which subsequently appeared in the form of a book. The development of user-friendly computer-aided design (CAD) and finite element analysis (FEA) software has made it possible to generate detailed three-dimensional (3D) models and perform analysis of complex structures. To automate the design process, finite element methods are usually integrated with numerical optimization algorithms. It is particularly important that the design satisfies all geometric and manufacturing constraints. For extremely complex structures like those of large commercial vehicles, the modeling and analysis can still, despite enormous advances in both hardware and software, be quite expensive. In most industries, optimum structural dimensions and configurations are determined by engineering experience and trial-and-error, which requires considerable human resources and ingenuity to generate and analyze numerous models before a design is finalized.
Multiple research groups worldwide are working on the design and manufacturing of lightweight vehicles, which can dramatically reduce the design cost. A great deal of research is devoted to developing new algorithms and techniques for multidisciplinary and multiobjective optimization of automobile parts. Some of the popular areas of research are size/shape optimization [3], topology optimization, lattice-based optimization, etc. [4]. In size optimization, the cross-sectional dimensions of structural components are usually considered as design variables. In shape optimization, the geometry of a component is defined by a set of parameters that can be varied. Topology optimization is one of the most modern methods of optimization, where the density of the elements of the FEA model is considered as a design variable, with the total volume fraction added as a constraint. Various studies on automobile frame optimization (including shape/topology optimization considering multiple constraints) can be found in the work of Zuo et al. [5][6][7][8][9][10][11]. Miao et al. [12] developed a multidisciplinary design optimization framework for fatigue life prediction of automobiles.
Cavazzuti et al. [13] used topology, topometry, and size optimization to design an automotive chassis while satisfying the structural performance constraints per Ferrari standards; the design, when compared to the commercial Ferrari F458 chassis, showed significant weight reduction. Wang et al. [14] studied a topology optimization approach for longitudinal beam frames with variable cross-sections to derive a reliable chassis design, achieving an optimized frame that was robust and had a low natural frequency. Kurdi et al. [15] compared diverse heavy-vehicle frames with different mass and torsional stiffness and found an effective design with low weight and maximum torsional stiffness. Kang et al. [16] presented the optimal design of a heavy vehicle by applying the analytical target cascading (ATC) methodology, solving design problems for heavy-duty trucks and buses in the presence of a suspension system. Rajasekar et al. [17] applied the genetic algorithm to optimize a chassis with various rectangular cross-sections. Jin and Wang [18] performed a strength analysis of a simplified suspension model, in which the suspension was simplified with an equivalent beam to calculate the frame's strength under diverse load conditions. Techniques like topology optimization are computationally expensive: it is reasonable to optimize small components using topology optimization, but it is not practical for multidisciplinary design optimization of a complex structure like a vehicle chassis involving multibody interaction. In problems involving complex load paths, topology optimization often results in designs that are infeasible to manufacture by conventional approaches. Even though developing a surrogate model is one way to tackle a problem involving a highly complex structure, it requires the availability of optimal experimental designs (OED), and performing the corresponding experiments or simulations can be enormously expensive.
For complex structures, a more reasonable approach is to develop a simplified equivalent model that can represent the physics reasonably accurately.
The primary purpose of this work is to develop a computational framework for optimizing the structure of a truck chassis using a mathematical model that represents the actual structure with reasonable accuracy, considering stress, modal frequency, and manufacturing constraints. The medium-fidelity model is verified against the detailed finite element model of the truck chassis, with stiffness and modal frequencies as metrics. As the defeatured medium-fidelity model is likely to contain stress singularities (at sharp edges and points of beam attachment), the maximum von Mises stress cannot be used directly as a constraint. To circumvent this issue, we propose a quantity called 'violation', defined as the fraction of the total area over which the von Mises stress is greater than the permissible value, and a constraint is imposed on its maximum value. As the vertical bending mode frequency has a significant effect on the performance and passenger comfort of the vehicle, a constraint is imposed on its minimum value. The framework incorporates the modal assurance criterion (MAC) to identify the first vertical bending mode and the corresponding modal frequency to compute the constraint on the vertical bending frequency. The framework employs a general approach to performing structural optimization of a complex structure under stress and modal frequency constraints for a specific mode shape. Although it is applied here only to the first bending mode, the approach based on modal assurance criteria could also be employed for other modes, e.g., a torsional mode.
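The MAC comparison used to pick out the vertical bending mode can be sketched as follows (the sampled shapes and reference vector here are illustrative, not the paper's FE output):

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal assurance criterion:
       MAC = |phi_a^T phi_b|^2 / ((phi_a^T phi_a)(phi_b^T phi_b)).
       1 means identical shape (up to scale), 0 means orthogonal shapes."""
    num = np.abs(phi_a @ phi_b) ** 2
    return num / ((phi_a @ phi_a) * (phi_b @ phi_b))

def pick_mode(modes, reference):
    """Return the index of the FE mode shape best matching the reference."""
    scores = [mac(m, reference) for m in modes]
    return int(np.argmax(scores))

# Toy shapes sampled along the rail: a second-harmonic shape and a
# first-bending-like half-sine shape.
s = np.linspace(0.0, 1.0, 50)
modes = [np.sin(2 * np.pi * s), np.sin(np.pi * s)]
idx = pick_mode(modes, reference=np.sin(np.pi * s))   # -> 1
```

Because the MAC is scale-invariant, the selection is robust to arbitrary eigenvector normalization, which is what makes it suitable for automated mode tracking inside an optimization loop.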
Furthermore, an unconventional structure of the side rail of the truck chassis is explored using the optimization framework. It is a C-section with a central drop and rectangular top and bottom plates attached. The shape of the profile is defined by a set of continuous design variables. The thicknesses of the top and bottom plates and of the web and flanges of the channel change along the length, and they are defined by another set of discrete variables. The geometry and mesh are generated using the commercial FEA software MSC.PATRAN (Version: 2014, MSC Software Corporation, Newport Beach, CA, USA) [19]. Using the orthogonal method, a set of load conditions, each corresponding to a road event, is derived, and linear static analysis is run using MSC.NASTRAN [20]. Reaction forces from the road are applied at the wheel locations of the suspensions. The chassis with suspension is an unconstrained structure; to achieve static equilibrium, the 'inertia relief' method is used. The aim of this work is to minimize the structural mass of the rails of a very large commercial truck chassis while satisfying multiple constraints and considering multiple load cases. The constraints include the maximum allowable von Mises stress, minimum stiffness, and first vertical bending frequency. The metaheuristic particle swarm optimization (PSO) algorithm is used for optimizing the design variables.
Overall, in this work, detailed geometry parameterization and integration of cross-members with the side-frame and verification of the medium-fidelity with the high fidelity model are described. The method for calculating the vertical bending stiffness and the influence of geometry on vertical bending frequency and stiffness and verification of the medium-fidelity assembly with high fidelity results are established. The integration of the suspensions and point masses to create a complete assembly with the method of detecting the vertical bending mode in an automated way is studied during the optimization process. The load cases for static analysis are established, and, finally, the optimization methodology is established.
Modeling the Side Rails
In the parametric model of the rail, the cross-section is an important design component. Fifteen continuous design variables define the web height and flange thicknesses in different regions. Figure 1 shows the top view and side view of the rail; the dimensions labeled in red are considered to be variables. The variables Rab1, Rab2, Rbc1 and Rbc2 denote fillet radii. The terms FWW and RWW denote the "Front Wheel Width" and "Rear Wheel Width", respectively, and they are considered constant. Not all the dimensions can be varied independently, as they are linked by a set of geometric constraint equations. In this work, optimization was performed using a limited number of design variables. The variable La is taken to be the sum of Lob1 and Lob2, i.e., La = Lob1 + Lob2. Further, the fillet radii were kept constant and equal to 1000 mm. The rails were divided into three sections (denoted as Sections 1-3), as shown in Figure 2. Each section was characterized by different thicknesses and dimensions of the top and bottom plates.
The dimensions of the top plate were specified by Fwxt1, Fwxt2 and Fwxot (see Figure 1b), where 'x' denotes the section number and 't' indicates 'top'. Similarly, the dimensions of the bottom plate were specified by Fwxb1, Fwxb2, and Fwxob, where 'b' indicates 'bottom'. The ratio of Fwxt1 to Fwxt2 was kept constant for each of the sections and denoted by R1 = Fwxt1/Fwxt2. Similarly, R2 was defined as R2 = Fwxb1/Fwxb2. Since there are three sections, 15 additional design variables were required to specify the thickness values. The thickness values are real numbers with appropriate ranges. The model is meshed with linear quadrilateral shell elements with a maximum edge length of 10 mm.
Cross Members' Integration and Complete Assembly
The two side rails were linked with a total of seven cross members. The cross members are represented in Figure 3a using beams with cross sections. They were attached to the side rails using multiple point constraints to create the medium-fidelity finite element model of the chassis, as shown in Figure 3b. As in the baseline design, the rails and the front three cross members were modeled using steel.
Stiffness and Modal Frequency Calculation
The modal frequencies and vertical bending stiffness were used as metrics to verify the medium-fidelity finite element model. Figure 4a shows the approach for calculating the vertical bending stiffness of the frame. The boundary conditions were applied at the wheel locations, as shown in the figure. A load F of 1000 N was applied in the middle of the chassis, and the maximum vertical deflection was computed using static analysis. The vertical bending stiffness was calculated as k_v = F/δ_v, where δ_v is the maximum vertical displacement. For the vertical bending stiffness calculation, the four mounting points were modeled as nodes connected to the upper and the lower flanges using multiple point constraints (MPC). The force F was applied in the vertically downward direction at a central node attached to the flange of the left and the right rail using MPCs, as shown in Figure 4b.
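As a minimal illustration, the stiffness metric above reduces to the ratio of the applied load to the computed deflection; the deflection value in this sketch is hypothetical, standing in for an actual FEA result:

```python
def vertical_bending_stiffness(force_n, max_deflection_mm):
    """Vertical bending stiffness k_v = F / delta_v, in N/mm."""
    return force_n / max_deflection_mm

# Hypothetical FEA result: 1000 N mid-span load producing 0.25 mm deflection.
k_v = vertical_bending_stiffness(1000.0, 0.25)  # 4000.0 N/mm
```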
In order to gain insight into the influence of the design variables on the first vertical bending frequency (f_v) and vertical bending stiffness (k_v), these values were obtained for a set of randomly generated designs and compared with the values corresponding to the baseline truck chassis. Figure 5a,b show the plots of vertical stiffness vs. mass and of first vertical bending frequency vs. mass, respectively, for the same random designs. The design marked as 'Model of interest', shown in Figure 5c, had a higher stiffness than the baseline truck chassis yet a significantly lower mass.
Model Verification
The stiffness and mass distribution of the medium-fidelity model of the baseline design (consisting of a no-drop section in the web) was verified by comparing the first torsional deformation frequency, first lateral bending frequency, first vertical bending frequency, and the vertical bending stiffness with those of a high-fidelity model of the baseline design of the truck chassis shown in Figure 6. The high-fidelity model consisted of detailed models of the cross members (meshed with linear quadrilateral plate elements) and mountings (meshed with linear three-dimensional tetrahedral solid elements). It also accounted for the detailed geometric features of commercially used rails and the connecting brackets. It comprised a total of 269,562 nodes and 671,707 elements. Table 1 summarizes the frequencies and vertical bending stiffness values of the medium-fidelity model of the baseline design for various mesh sizes, together with the corresponding values for the high-fidelity model. The metrics of the medium-fidelity model and the high-fidelity model show good agreement. Furthermore, it was found that, with an increase in the element size, the model reported lower values of both vertical bending stiffness and modal frequencies. This is because fewer MPCs are created as the number of nodes decreases, which reduces the structural stiffness. Figure 7 shows the first torsional, first lateral bending, and first vertical bending modes of the structure.
Suspension Integration
In the current work, parameterization and optimization were conducted only on the chassis. In order to transfer the loads from the road to the frame, a simplified model of the front and the rear suspensions, as shown in Figure 8a,b, respectively, was created by Metalsa. The front suspension was similar to the Hendrickson AIRTEK NXT front air suspension (https://www.hendrickson-intl.com/Truck/On-Highway/AIRTEK-NXT). The rear suspension was similar to the Hendrickson HTB LT (https://www.hendrickson-intl.com/Truck/On-Highway/HTB-LT), but for a 4 × 2 vehicle layout. CAD was provided to define the kinematic hard points of the suspension, the brackets, the spring hanger geometry, and the interfaces to the frame. Suspension radial and cylindrical bushing stiffness, as well as the air spring vertical stiffness, were obtained from Original Equipment Manufacturer (OEM) datasheets. The front suspension leaf spring stiffness was adjusted to match the bulk vertical stiffness measured on the physical vehicle. Several 'ride heights' (distance from the suspension bump stop to the frame rail) and 'wheel loads' (force at the wheel in the vertical direction) were measured under different payload levels to develop an experimental target stiffness.

In a typical truck chassis, the side rails and cross members are connected by bolted joints. Detailed analysis of bolted joints is computationally demanding, as it involves contact mechanics with several mating surfaces. For this reason, in this work, the bolted joints were represented by a simplified equivalent. Each joint was modeled using a rigid bar element (for the bolt) and multiple point constraints (MPCs). MPCs are essentially a set of rigid bars that connect a node to multiple nodes of a surface mesh. A rigid bar element was created across the center location of the boltholes on the two connected plates. MPCs were created between the nodes at the periphery of the bolthole and the center node (end node) of the bar element.
In these connections, all the degrees of freedom of the boundary nodes were constrained to be dependent on the center node. Figure 9 shows a typical bolthole in the model with MPCs. Furthermore, in the current approach, a geometric constraint was added such that the top of air-springs in the suspensions touched the bottom flange of the side rails. Finally, MSC.NASTRAN input files for applied forces based on the load cases were imported and applied at the required nodes, and static analysis was carried out to calculate the stresses and displacements. Figure 10 shows the complete shell element-based representation of the truck chassis as created by the integration of the side frames, front suspension, rear suspension, and point masses representing the center of gravity of the engine, including the air-tank and other features which were not considered for optimization in this problem.
Static Analysis
In this research, multiple load conditions were considered for static analysis. The following five extreme road events were considered to approximate the behavior of proving-ground tests: (i) both front wheels in a bump event; (ii) both rear wheels in a bump event; (iii) both front tires in a pothole event; (iv) both rear tires in a pothole event; (v) maximum braking condition.
The loads on the four wheels for those five road events were created under the assumption of orthogonal load cases. Details on the construction of these load cases can be found in the paper by Ostergaard et al. [21]. It was assumed that all load conditions would lie somewhere in between the abovementioned cases. For each of the load cases, linear static analysis was conducted using the inertia-relief method [22]. Inertia relief is a popular method of analysis for unconstrained moving structures. Nelson et al. [23] used inertia relief analysis to estimate the impact of loads on a space structure. Morton et al. [24] applied this method to calculate the distribution of flight load on an unconstrained helicopter rotor. Vallejo et al. [25] simulated a finite element model using inertia relief to predict the fatigue behavior of a heavy truck chassis. Pagaldipti et al. [26] studied the influence of inertia relief on optimal designs. Saito et al. [27] carried out full automobile optimization procedures with inertia relief analysis. Zhang et al. [28] used the inertia relief option to perform stress analysis on a mine dump truck frame and proposed essential elements for the optimization of a commercial vehicle. Table 2 shows the g-forces on the wheels corresponding to the assumed road conditions (RC). The X-axis is in the direction of the forward motion of the vehicle, while the Z-axis is normal to the road.

Table 2. Load cases using the method of superposition (numbers indicate g-force magnitudes).

The constraints included the maximum allowable von Mises stress and the minimum first vertical bending frequency. The defeatured finite element model shown in Figure 10 contains several sharp edges and places where beams are attached to surfaces. These are the result of simplification, and such features do not exist in the real structure. However, a simple static analysis of these medium-fidelity models often shows stress singularities around these areas.
The stress values in these regions are significantly higher than elsewhere [29][30][31]. For stress-based optimization, these regions usually need to be excluded. To do so, we defined a parameter entitled 'Violation' as

Violation = A_over / A_total

where A_over is the surface area over which the von Mises stress exceeds the permissible value and A_total is the total surface area.
Instead of imposing a constraint on the maximum von Mises stress in the system, the constraint was imposed on maximum 'Violation.' For the model with no stress singularities, the value of 'Violation' for the optimized design should be 0.
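A sketch of how 'Violation' could be computed from element-level post-processing results; the element stresses, areas, and allowable stress below are hypothetical, not taken from the paper's model:

```python
import numpy as np

def violation(elem_stress, elem_area, sigma_allow):
    """Fraction of the total area over which von Mises stress exceeds
    the permissible value; used instead of a hard max-stress constraint
    so that local singularities do not dominate the optimization."""
    elem_stress = np.asarray(elem_stress, dtype=float)
    elem_area = np.asarray(elem_area, dtype=float)
    over = elem_stress > sigma_allow  # boolean mask of overstressed elements
    return elem_area[over].sum() / elem_area.sum()

# Hypothetical data: 4 equal-area shell elements, 250 MPa permissible stress.
v = violation([120.0, 300.0, 90.0, 200.0], [1.0, 1.0, 1.0, 1.0], 250.0)  # 0.25
```

A design is then feasible when this fraction stays below the 1% limit used in the optimization problem.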
Mode Detection
In the optimization problem, the vertical bending natural frequency of the truck frame was added as a constraint, and it needed to be greater than 20 Hz. The first step is to run the free vibration analysis, which was performed using MSC.NASTRAN.
To find the frequency of the vertical bending mode from the set of all free vibration modes of a truck chassis design, the modal assurance criterion (MAC) was implemented [32]. For two normalized eigenvectors {Φ_A} and {Φ_B}, the MAC is defined as

MAC({Φ_A}, {Φ_B}) = |{Φ_A}^T {Φ_B}|^2 / (({Φ_A}^T {Φ_A}) ({Φ_B}^T {Φ_B}))

The value of MAC is bounded between 0 and 1. A value of 0 indicates that the two eigenvectors are completely orthogonal to each other, whereas a value of 1 indicates that the two modes are fully matched. In this work, a reference eigenvector exhibiting vertical-bending deformation was taken, and the MAC was calculated for each of the vibration modes of a given design. The mode with the highest value of MAC was considered to be the vertical bending mode.
MAC can be calculated only when {Φ A } and {Φ B } are of the same dimension, i.e., the eigenvectors of the given design need to be of the same dimension as that of the reference eigenvector. This is almost impossible since the finite element models of different designs contain different numbers of elements. To resolve this issue, the displacement field of the eigenvector of the given design was interpolated on the finite element grid of the reference design to produce a new eigenvector which will be of the same dimension as of the reference eigenvector. Figure 11 and Table 3 show an example implementation of the implemented procedure. It can be seen in Table 3 that the modal frequency of 31.94 Hz was the vertical bending natural frequency.
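A minimal sketch of the MAC-based mode detection, assuming the eigenvectors of the candidate design have already been interpolated onto the reference grid; the vectors and frequencies below are illustrative only:

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal assurance criterion for two real eigenvectors of equal length."""
    phi_a, phi_b = np.asarray(phi_a, float), np.asarray(phi_b, float)
    return np.dot(phi_a, phi_b) ** 2 / (np.dot(phi_a, phi_a) * np.dot(phi_b, phi_b))

def vertical_bending_frequency(ref_mode, modes, freqs):
    """Return the frequency of the mode with the highest MAC against the
    reference vertical-bending eigenvector."""
    scores = [mac(ref_mode, m) for m in modes]
    return freqs[int(np.argmax(scores))]

# Illustrative 2-DOF example: the second mode matches the reference shape.
f_v = vertical_bending_frequency([1.0, 0.0], [[0.0, 1.0], [1.0, 0.0]], [12.5, 31.94])
```

The same selection logic can target any mode of interest (e.g., a torsional mode) simply by changing the reference eigenvector.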
Optimization Framework
The aim was to minimize the structural mass of the rails while satisfying multiple constraints. Considering the maximum 'Violation' to be 1%, the minimum value of the first vertical bending frequency to be 20 Hz, and the minimum vertical bending stiffness equal to that of the baseline truck model, the optimization problem can be mathematically written as:

minimize Obj = W + 10^6 (∑_i max(0, g_i))

where W is the structural mass of the rail, g_i are the normalized constraint functions (g_i ≤ 0 when the corresponding constraint is satisfied), and f_v is the vertical bending frequency.
In this optimization problem, we dealt with structural weights in the range of 10^2-10^3 kg. Hence, if the constraints are not satisfied, the objective function takes a value of ~10^6 kg, and the design thus becomes undesirable.
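The penalized objective can be sketched directly. The specific forms of the constraint functions g_i below (normalized so that g_i ≤ 0 when satisfied) are assumptions for illustration, not the paper's exact expressions:

```python
PENALTY = 1e6  # large factor so any constraint violation dominates the mass term

def penalized_objective(mass_kg, g_values, penalty=PENALTY):
    """Obj = W + penalty * sum(max(0, g_i)); equals the mass when feasible."""
    return mass_kg + penalty * sum(max(0.0, g) for g in g_values)

# Hypothetical normalized constraints, e.g. Violation/0.01 - 1, 1 - f_v/20,
# and 1 - k_v/k_baseline:
g_feasible = [-0.14, -0.025, -0.05]   # all satisfied -> objective equals mass
g_infeasible = [0.2, -0.025, -0.05]   # one violated  -> heavily penalized
```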
The optimization was performed using a modified version of the particle swarm optimization (PSO) algorithm, a heuristic optimization method that does not require gradient calculations. An explanation of this algorithm is given in the article by Kennedy et al. [33]. In every iteration, random particles (points in the design space) were distributed and evaluated. The particle's velocity and position were updated (after the k-th iteration) using Equations (7) and (8), respectively:

v^i_{k+1} = w v^i_k + c1 r1 (p^i - x^i_k)/Δt + c2 r2 (p^g_k - x^i_k)/Δt    (7)

x^i_{k+1} = x^i_k + v^i_{k+1} Δt    (8)
where x^i_k are the design variables, called the positions of the particles; v^i_k is the velocity of the particle, used to update the position; r1 and r2 are uniform random numbers between 0 and 1; c1 and c2 are known as the trust parameters; w is the inertia weighting parameter of the velocity; and p^i and p^g_k are the best position of the particle (throughout its iteration history) and the best position in the swarm, respectively. In Equation (7), the second term is known as the "individual correction" because (p^i - x^i_k) is the difference between the particle's current position and its best position in history; the larger this term, the more the particle is attracted towards its own best position. The third term in Equation (7) is called the "social correction", as (p^g_k - x^i_k) is the difference between the particle's position and the best position in the entire swarm, and hence it attracts the particle towards the global best. The inertia weight parameter w controls the influence of the particle's previous velocity relative to the individual and social terms and thus determines the convergence rate of the optimization. The parameter Δt is called the time step and is often taken as 1. The parameter values considered in this work are listed in Table 4. Convergence is said to have been achieved when the difference in the objective value for the particles in the swarm falls within a specified limit, or the maximum allowable number of iterations is reached. If the convergence rate is too high, there is a higher chance that the search ends in a local optimum; hence, too high a value of w should be avoided. The framework for the implementation of the algorithm is given in Figure 12.
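One velocity/position update per Equations (7) and (8) can be sketched as follows; the parameter defaults here are illustrative, not necessarily the values of Table 4:

```python
import numpy as np

rng = np.random.default_rng(42)

def pso_step(x, v, p_best, g_best, w=0.7, c1=1.5, c2=1.5, dt=1.0):
    """Update one particle: inertia term + individual correction + social
    correction (Eq. 7), then advance the position (Eq. 8)."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v_new = w * v + c1 * r1 * (p_best - x) / dt + c2 * r2 * (g_best - x) / dt
    return x + v_new * dt, v_new
```

When both best positions coincide with the particle and its velocity is zero, the particle stays put, which is consistent with the update equations.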
A significant advantage of the PSO algorithm is that the computation of objective functions for each of the particles is independent. Hence, the algorithm can be parallelized easily (using the message passing interface, MPI). Further, the analyzed models can be stored in a database, which can be used by the industry for other studies requiring a large number of models of different specifications. For such studies, the manual development of models can be very cumbersome.
However, the classical PSO algorithm needed some modification to be implemented in our problem. Firstly, the computation of the objective function, involving mesh generation and finite element analysis, is computationally expensive; hence there was a chance of memory saturation, especially while running on a cluster shared by multiple users. Secondly, since Virginia Tech has a limited number of licenses for the commercial software MSC.PATRAN and MSC.NASTRAN used in this research, the required number of licenses might be unavailable while running the optimization in parallel. The optimization process was prevented from stopping during license unavailability and memory saturation by implementing the license cycle-check and memory self-adjustment methods [34,35] developed at Virginia Tech.
Moreover, for a certain set of design variables, MSC.PATRAN can fail to create the complete geometry, leading to analysis failure. When this happens, the objective function cannot be computed, and hence the algorithm fails to proceed. To prevent the optimization from stopping, a large value (10^5) was assigned to the objective function of such a design. This causes the optimizer to consider the design to be infeasible, and the particle is thus discarded from the search space.
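This failure handling amounts to wrapping the expensive evaluation and mapping any failure to a large objective value; `run_analysis` here is a hypothetical placeholder for the full geometry/mesh-generation-plus-analysis pipeline:

```python
FAILURE_OBJECTIVE = 1e5  # assigned when geometry or mesh generation fails

def safe_objective(design, run_analysis):
    """Return the computed objective, or a large penalty value if the
    model-generation/analysis pipeline raises an error."""
    try:
        return run_analysis(design)
    except Exception:
        return FAILURE_OBJECTIVE
```

The optimizer then treats the failed particle as infeasible and tends to move the swarm away from that region of the design space.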
In order to perform optimization using the PSO algorithm, the model and mesh generation for different design variables, structural analysis, and evaluation of the constraints need to be automated. In this work, this automation was carried out using a python script. Once the constraints and hence objective function were evaluated for each of the particles, they were used as input to the PSO algorithm, which found the set of particles for the next iteration. The automated determination of the objective for the PSO algorithm is shown in Figure 13. The ranges for the shape design variables were set according to manufacturing limitations and those of the size design variables (representing thickness) according to the grades of sheet metal available.
Each "particle" corresponded to the generation of the finite-element model according to a set of design variables and running multiple types of structural analysis (modal analysis and vertical bending stiffness analysis) on the chassis and finally static analysis on the assembly for the five given load cases to calculate the maximum value of the 'Violation' factor.
For each of the load conditions, linear static analysis was performed using MSC.NASTRAN and the 'Violation' was calculated, as shown in Figure 14. Optimization could be performed considering the maximum value of 'Violation' or assigning different weights to 'Violation' corresponding to each load condition. The objective function was finally calculated. The objective function was set up such that it was equal to the structural mass only if all the constraints were satisfied. Otherwise, it took a very large value. The optimizer automatically considered the design as infeasible and tended to move away from it.
Optimization Results and Discussion
The optimization was run on a cluster having 48 cores with a clock speed of 2.2 GHz and a total RAM of 132 GB. Fifteen design variables defining the shape and 15 design variables defining the thicknesses were considered in the optimization. Sixty-six particles per iteration were evaluated, and the objective function was updated using the PSO algorithm. The optimization was run for a total of 15 iterations. It was found that the objective function remained unchanged after the first five iterations. Figure 15 shows the iteration history. The best feasible design reported by the optimizer had a structural mass (without suspensions and point masses) of 275 kg, which is 13.25% less than the mass of the baseline design. The values of the vertical bending frequency and maximum 'Violation' factor corresponding to the best design were 20.5 Hz and 0.0086, respectively. While it is not possible to prove that a global minimum has been reached for such a large-scale multivariable optimization problem, the fact that these values are close to the constraint limits gives confidence that the solution is close to the global minimum. Figure 16a,b show the vertical bending frequency vs. mass and the violation vs. mass, respectively, for all the designs analyzed during the optimization. In these charts, the baseline design is indicated by a red dot, while the optimized design is indicated by a green dot. Figure 17 shows the optimized design and thickness distribution for the side rails. The von Mises stress plots for this design in several events (events #1-5) are shown in Figure 18. As the design was guided by the minimum gage thickness, it contained many low-stress zones. On the other hand, high stress values were found around the regions where point masses and suspension leaf springs were attached. As mentioned before, such stress 'hotspots' were expected in the medium-fidelity model due to the simplification of joints using MPCs and beam assembly.
The method of optimization using the stress 'Violation' parameter (where a violation of stress constraint was allowed over a limited region) helped to arrive at a reasonable solution using a medium-fidelity representation of complex structures, like the truck chassis, which was analyzed in this research. Table 5 shows the influence of optimization on the first bending frequency, structural rigidity, and static strength.
Conclusions
The article describes the parameterization of the side rails of a truck chassis by a large number of design variables and optimization considering several constraints, including the maximum stress and the minimum frequency of the first vertical bending mode. A python script was developed that automatically generates the geometry and mesh of the side rails and integrates the suspensions and point masses to create the simplified finite element model of the truck chassis. Normal mode analysis and static analysis for multiple load cases were performed on the entire model to evaluate the constraints in the optimization problem. The particle swarm optimization (PSO) algorithm was used to optimize the design variables to minimize mass while satisfying the constraints. A mass reduction of 13.25% with respect to the baseline model was achieved. It may be possible to go even further by applying topology optimization techniques to the configuration shown in Figure 17, removing material from the side rails and the side rail mountings. Such a process will be challenging, as manufacturing constraints need to be taken into account; this is something to be considered in future research.
Author Contributions: S.D., under the guidance of R.K.K. who also provided leadership towards developing the conceptual framework and task completion, developed the optimization framework and the finite element model of the chassis and obtained the final results. K.S. and J.S. contributed to the assembly of the finite element models of the components. E.O., N.A., and R.A. developed the high-fidelity finite-element model of the chassis, the front, and back suspensions and validated them. S.D. and R.K.K. prepared and revised the manuscript, respectively. They did this with assistance from J.S. All authors have read and agreed to the published version of the manuscript.
Funding: This research is partly funded by Metalsa (Contract Number: AT-38971).
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement:
No new data were created or analyzed in this study. Data sharing is not applicable to this article.